text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to load field value in create method?
I want to load a field value (batch_course_name) into a variable ( loadthefield ) inside a create method in order to use it as a prefix to a sequence field (name). I just don't know how to do that. Can you help me?
class intrac_batches(osv.osv):
def create(self, cr, uid, vals, context=None):
if not vals:
vals = {}
if context is None:
context = {}
loadthefield = what to put here to load the value of the field batch_course_name
vals['name'] = loadthefield + self.pool.get('ir.sequence').get(cr, uid, 'intrac.batches.number')
return super(intrac_batches, self).create(cr, uid, vals, context=context)
_name = 'intrac.batches'
_columns = {
'name': fields.char('Batch Number'),
'batch_course_name': fields.many2one('intrac.courses', 'Course'),
'
}
intrac_batches()
Using self.pool.get and browse method load many2one field value in create method.
if 'batch_course_name' in vals:
course_obj = self.pool.get('intrac.courses')
course_value = course_obj.browse(cr, uid, vals['batch_course_name'])
course_name = course_value.name
else:
course_name = False
vals['name'] = course_name + self.pool.get('ir.sequence').get(cr, uid, 'intrac.batches.number')
Thanks a billion Parkash
Prakash has mentioned it, although indirectly. You get the value from the form (assuming that it is not readonly) within the vals dictionary, i.e. vals['batch_course_name']. If you want to avoid KeyError, use vals.get('batch_course_name', ''). It will return empty string ('') if batch_course_name is not found.
You are right Ivan. Thank you both for your great help.
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-load-field-value-in-create-method-69494 | CC-MAIN-2017-30 | refinedweb | 292 | 53.98 |
Thank you.
I'm at the Compilation stage. I opened the GLFW.sln file by VS 2017 but no glfw3.lib file was created!
What could the problem be please?
Thank you.
I'm at the Compilation stage. I opened the GLFW.sln file by VS 2017 but no glfw3.lib file was created!
What could the problem be please?
I did these:
1- Added glfw3.lib to this address:
C:\Users\Abbasi\Documents\OpenGL_Projects\Libs
and glad.h and other header files of the include folder to:
C:\Users\Abbasi\Documents\OpenGL_Projects\Includes
2- Then, created a project on VS 2017:
File > New Project > Win32 console application (name testgl1) > Next > Empty project finished.
In the solution explorer: Add New Item > C++ source file (name testgl1) > Add
3- Went to testgl1's properties:
VC++ Directories: Include directories => added: "C:\Users\Abbasi\Documents\OpenGL_Projects\Include s" > OK
Linker > Input > Additional Dependencies => added: "C:\Users\Abbasi\Documents\OpenGL_Projects\Lib s" > OK
4- Now in the project when I write: #include <glad/glad.h>, it doesn't recognized it!
And also when I type this code:
Code :#include <iostream> using namespace std; #include "vgl.h" #include "LoadShaders.h" enum VAO_IDs { Triangles, NumVAOs }; enum Buffer_IDs { ArrayBuffer, NumBuffers }; enum Attrib_IDs { vPosition = 0 }; GLuint VAOs[NumVAOs]; GLuint Buffers[NumBuffers]; const GLuint NumVertices = 6; //-------------------------------------------------------------------- // // init // void init(void) { static const GLfloat vertices[NumVertices][2] = { { -0.90, -0.90 }, // Triangle 1 { 0.85, -0.90 }, { -0.90, 0.85 }, { 0.90, -0.85 }, // Triangle 2 { 0.90, 0.90 }, { -0.85, 0.90 } }; glCreateBuffers(NumBuffers, Buffers); glNamedBufferStorage(Buffers[ArrayBuffer], sizeof(vertices), vertices, 0); ShaderInfo shaders[] = { { GL_VERTEX_SHADER, "triangles.vert" }, { GL_FRAGMENT_SHADER, "triangles.frag" }, { GL_NONE, NULL } }; GLuint program = LoadShaders(shaders); glUseProgram(program); glGenVertexArrays(NumVAOs, VAOs); glBindVertexArray(VAOs[Triangles]); glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]); glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); glEnableVertexAttribArray(vPosition); } //-------------------------------------------------------------------- // // display // void display(void) { static const float black[] = { 0.0f, 0.0f, 0.0f, 0.0f }; glClearBufferfv(GL_COLOR, 0, black); glBindVertexArray(VAOs[Triangles]); glDrawArrays(GL_TRIANGLES, 0, NumVertices); } //-------------------------------------------------------------------- // // main // int main(int argc, char** argv) { glfwInit(); GLFWwindow* window = glfwCreateWindow(640, 480, "Triangles", NULL, NULL); glfwMakeContextCurrent(window); gl3wInit(); init(); while (!glfwWindowShouldClose(window)) { display(); glfwSwapBuffers(window); glfwPollEvents(); } glfwDestroyWindow(window); glfwTerminate(); }
I get this result pointing to many unknown functions!
What is/are the problem(s) please?
How did you run your OpenGL project using Cmake and Visual Studio for the first time?
It seemed that .h and .lib files hadn't been put in the right folder. So I put the contents of both Include and Lib folders of:
C:\Users\Abbasi\Documents\OpenGL_Projects to the include and lib folders of this path:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.10.25017
Then in a Win32 Console Application on Visual Studio 2017 I typed:
<glad/glad.h>
Still not identified by the Visual Studio!!
I follow following strategy might help you.
1. Where ever you .sln soultion file is in same directory create folders with name "include" and "lib".
2. All the .h file needed by your project put them in this include folder.
3. All the .lib files needed by your project put them in .lib folder
4. Open you solution file
5. Go to projects -> settings -> c/c++ -> additional header
In this set following path
"$(SolutionDir)\include
6. Click on apply
7. In same Project->settings ->Linker -> General -> Additional Library Directories
Put here "$(SolutionDir)\lib"
8.Click apply
9. Now go to Projects -> Settings->Linker -> input
Here in right hand side first row you will see all .lib file names.
Add all libs needed by your project here.
example glut.lib;glew.lib
Note : Please be careful while adding libs here. If you are in debuging mode please add debug versions of .libs.
Also if you are developing applications on x64 i.e 64 bit platform please add 64 bit versions of libs.
10.Again click apply -> then click OK
11. After this your project should find includes and libs and it will compile and link and create exe in debug folder if you are in debug mode.
12. Please before running make sure all dll files of the libs that you are using are present in this debug folder or folder where your final exe is generated.
Note : Please remember If what ever settings I have given above are done by you while you were working in debug mode and later on you change to release mode you have to do this setting again but only one time. Similar for changing platform from win32 to x64. Setting across this modes are not persistent.
Hope this will help.
Regards
PixelClear | https://www.opengl.org/discussion_boards/showthread.php/200019-The-first-example-I-see-in-a-book?p=1288748 | CC-MAIN-2018-26 | refinedweb | 766 | 59.8 |
Linux Processes and Signals
The signals and processes control almost every task of the system.
We get the following response at ps -ef:
UID PID PPID C STIME TTY TIME CMD root 1 0 0 2010 ? 00:01:48 init root 21033 1 0 Apr04 ? 00:00:39 crond root 24765 1 0 Apr08 ? 00:00:01 /usr/sbin/httpd
Each process is allocated a unique number, process identifier (PID). It's an integer between 2 and 32,768. When a process is started, the numbers restart from 2, and the number 1 is typically reserved for the init process as shown in the above example. The process #1 manages other processes.
When we run a program, the code that will be executed is stored in a disk file. In general, a linux process can't write to the memory area. The area is for holding the program code so that the code can be loaded into memory as read-only (so, it can be safely shared).
The system libraries can also be shared. Therefore, there need be only one copy of printf() in memory, even if there are many programs calling it.
When we run two programs, there are variables unique to each programs, unlike the shared libraries, these are in separate data space of each process, and usually can't be shared. In other words, a process has its own stack space, used for local variables. It also has its own environment variables which are maintained by each process. A process should also has its own program counter, a record of where it has gotten to in its execution (execution thread - more on linux pthread).
The process table describes all the processes that are currently loaded. The ps command shows the processes. By default, it shows only processes that maintain a connection with a terminal, a console, a serial line, or a pseudo terminal. Other processes that can run without communication with a user on a terminal are system processes that Linux manages shared resources. To see all processes, we use -e option and -f to get full information (ps -ef).
Here is the STAT output from ps:
$ ps -ax PID TTY STAT TIME COMMAND 1 ? Ss 1:48 init [3] 2 ? S< 0:03 [migration/0] 3 ? SN 0:00 [ksoftirqd/0] .... 2981 ? S<sl 10:14 auditd 2983 ? S<sl 3:43 /sbin/audispd .... 3428 ? SLs 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g 3464 ? Ss 0:00 rpc.rquotad 3508 ? S< 0:00 [nfsd4] .... 3812 tty1 Ss+ 0:00 /sbin/mingetty tty1 3813 tty2 Ss+ 0:00 /sbin/mingetty tty2 3814 tty3 Ss+ 0:00 /sbin/mingetty tty3 3815 tty4 Ss+ 0:00 /sbin/mingetty tty4 ..... 19874 pts/1 R+ 0:00 ps -ax 19875 pts/1 S+ 0:00 more 21033 ? Ss 0:39 crond 24765 ? Ss 0:01 /usr/sbin/httpd
The meaning of the code is in the table below:
Let's look at the following process:
1 ? Ss 1:48 init [3]
Each child process is started by parent process. When linux starts, it runs a single program, init, with process #1. This is OS process manager, and it's the prime ancestor of all processes. Then, other system processes are started by init or by other processes started by init. The login procedure is one of the example. The init starts the getty program once for each terminal that we can use to log in, and it is shown in the ps as below:
3812 tty1 Ss+ 0:00 /sbin/mingetty tty1
The getty processes wait for activity at the terminal, prompt the user with the login prompt, and then pass control to the login program, which sets up the user environment, and starts a shell. When the user shell exits, init starts another getty process.
The ability to start new processes and to wait for them to finish is fundamental to the system. We can do the same thing within our own programs with the system calls fork(), exec(), and wait().
A system call is a controlled entry point into the kernel, allowing a process to request that the kernel perform some action for the process.
Actually, a system call changes the processor state from user mode to kernel mode, so that the CPU can access protected kernel memory. The kernel makes a range of services accessible to programs via the system call application programming interface (API).
Let's look at the ps STAT output for ps ax itself:
23603 pts/1 R+ 0:00 ps ax
The STAT R indicates the process 23603 is in a run state. In other words, it tells its own state. The indicator shows that the program is ready to run, and is not necessarily running. The R+ indicates that the process is in foreground not waiting for other processes to finish nor waiting for input or output to complete. That's why we may see two such processes listed in ps output.
Linux kernel uses a process scheduler to decide which process will get the next time slice based on the process priority.
Usually, several programs are competing for the same resources. A program that performs short burst of work and pause for input is considered better behaved than the one that hog the processor by continually calculating/querying the system. Well-behaved programs are termed nice programs, and in a sense this niceness can be measured.
Tho OS determines the priority of a process based on a nice value and on the behavior of the program. Program that run for long periods without pausing generally get lower priorities. Programs that pause get rewarded. This helps keep a program that interacts with the user responsive; while it is waiting for some input from the user, the system increases its priority, so that when it's ready to resume, it has a high priority.
A niceness of -20 is the highest priority and 19 or 20 is the lowest priority. The default niceness for processes is inherited from its parent process, usually 0. But we can set the nice value using nice and adjust it using renice. The nice command increases the nice value of a process by 10, giving it a lower priority. Only the superuser (root) may set the niceness to a smaller (higher priority) value. On Linux it is possible to change /etc/security/limits.conf to allow other users or groups to set low nice values.
We can view the nice values of active processes using -l or -f option as in ps -l .
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 0 S 601 12649 12648 0 75 0 - 1135 wait pts/0 00:00:00 bash 0 S 601 12681 12649 0 76 0 - 1122 wait pts/0 00:00:00 myTest.sh 0 S 601 12682 12681 0 76 0 - 929 - pts/0 00:00:00 sleep 0 R 601 12683 12649 0 76 0 - 1054 - pts/0 00:00:00 ps
Here we can see that the myTest.sh program is running with a default nice value 0. If it had been started with the following command:
$ nice ./myTest.sh &it would have been allocated a nice value of +10.
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 0 S 601 9835 9834 0 75 0 - 1135 wait pts/1 00:00:00 bash 0 S 601 12744 12649 0 86 10 - 1122 wait pts/0 00:00:00 myTest.sh 0 S 601 12745 12744 0 86 10 - 929 - pts/0 00:00:00 sleep 0 R 601 12746 12649 0 76 0 - 1054 - pts/0 00:00:00 ps
We have another way of doing it:
$ renice 10 12681 12681: old priority 0, new priority 10
With higher nice value, the program will run less often. As we see below, the status column now also contains N to indicate that the nice value has changed from the default.
$ ps x 12649 pts/0 Ss 0:00 -bash 12744 pts/0 SN 0:00 /bin/bash ./myTest.sh 12745 pts/0 SN 0:00 sleep 100 12867 pts/0 R+ 0:00 ps x
The PPID field of ps output indicates the parent process ID, the PID of either the process that caused this process to start or, if that process is no longer running, init (PID 1).
When we boot the system, the kernel creates a special process called init, the parent of all processes, which is derived from the file /sbin/init.
All processes on the system are created (using fork()) either by init or by one of its descendants. The init process always has the process ID 1 and runs with superuser privileges. The init process can't be killed, and it terminates only when the system is shut down. The main task of init is to create and monitor a range of processes required by a running system.
A daemon (syslogd, httpd, etc.) is a special-purpose process that is created and handled by the system in the same way as other processes. However, it differs from other processes:
- It is long-lived. A daemon process is often started at system boot and remains in existence until the system is shut down.
- It runs in the background, and has no controlling terminal from which it can read input or to which it can write output.
We can make a program to run from inside another program, and create a new process by using the system library function. As an example, the code below is using system to run ps:
// mySysCall.c #include <iostream> int main() { system("ps ax"); std::cout << "Done." << std::endl; exit(0); return 0; }
If we run the program, we get:
$./mySysCall PID TTY STAT TIME COMMAND 1 ? Ss 1:48 init [3] .... 24447 pts/0 S+ 0:00 ./mySysCall 24448 pts/0 R+ 0:00 ps ax Done.
Because the system function uses a shell to start the program, we could put it in by changing the call like this:
system("ps ax &");
When we run the new version, we get:
Done. PID TTY STAT TIME COMMAND 1 ? Ss 1:48 init [3] .... 24849 pts/1 Ss+ 0:00 -bash 25802 pts/1 R 0:00 ps ax
Here, the call to system returns as soon as the shell command finishes. Because it's a request to run a program in the background, the shell returns as soon as the ps program is started. This is the same situation when we use the command at a shell prompt:
$ps ax &
The program, then, prints "Done.", and exits before the ps command has had a chance to finish all of its output. This is quite confusing, and we need control over the behavior of a process.
An exec function replaces the current process with a new process specified by the path or file argument. We can use exec to hand off execution of our program to another.
The picture below is a diagram when we issue ls command on a linux shell. In this case, the shell is a parent process, and at the ls, the shell does fork() to create a child process. The newly created child process does exec() to run ls by replacing itself with the ls.
The exec functions are more efficient than system because the original program will no longer be running after the new one is started.
/* Execute PATH with arguments ARGV and environment from `environ'. */ extern int execv (__const char *__path, char *__const __argv[]) __THROW __nonnull ((1)); /* Execute PATH with all arguments after PATH until a NULL pointer, and the argument after that for environment. */ extern int execle (__const char *__path, __const char *__arg, ...) __THROW __nonnull ((1)); /* Execute PATH with all arguments after PATH until a NULL pointer and environment from `environ'. */ extern int execl (__const char *__path, __const char *__arg, ...) __THROW __nonnull ((1)); /* Execute FILE, searching in the `PATH' environment variable if it contains no slashes, with arguments ARGV and environment from `environ'. */ extern int execvp (__const char *__file, char *__const __argv[]) __THROW __nonnull ((1)); /* Execute FILE, searching in the `PATH' environment variable if it contains no slashes, with all arguments after FILE until a NULL pointer and environment from `environ'. */ extern int execlp (__const char *__file, __const char *__arg, ...) __THROW __nonnull ((1));
These functions are usually implemented using execve. The functions with a p suffix differ in that they will search the PATH environment variable to find the new executable. If the executable is not found, an absolute file name including directories should be passed to the function.
The global variable environ is available to pass a value for the new program environment. As another way, an additional argument to the functions execle and execve is available for passing an array of strings to be used as the new program environment.
Here is an example of using execlp()
// my_ps.c #include <unistd.h> #include <stdio.h> #include <stdlib.h> int main() { printf("ps with execlp\n"); execlp("ps", "ps", 0); printf("Done.\n"); exit(0); }
When we run it, we get the usual ps output without "Done." message at all. Also, there is no reference to a process called my_ps in the output.
$./my_ps ps with execlp PID TTY TIME CMD 12377 pts/0 00:00:00 bash 18304 pts/0 00:00:00 ps
The code prints the first message, "ps with execlp", and then calls execlp(), which searches the directories given by the PATH environmet variable for a program called ps. It then executes ps in place of my_ps, starting it as if we had issued the shell commnad:
$ ps
So, when ps finishes, we get a new shell prompt. We don't return to my_ps. Thus, the second message, "Done.", doesn't get printed. The PID of the new process is the same as the original, as are the parent PID and nice value.
To use processes to perform more than one function at a time, we can either use threads or create an extirely separate process from within a program, as init does, rather than replace the current thread of execution, exec, as shown in the above example.
One way of doing it is using fork().
In the following code, fork() on parent process creates child process, and then the child itself run execv() to replace the parent code with a new code specified in the path.
void main(char *path, char *argv[]) { pid_t pid = fork(); if (pid == 0) { printf("Child\n"); execv(path, argv); } else { printf("Parent %d\n", pid); } printf("Parent prints this line \n"); }
We can create a new process by calling fork(). This system call duplicates the current process, creating a new entry in process table with many of the same attributes as the current process. In other words, the newly created process will be the child of the calling process (parent).
The key point to understanding fork() is to realize that after it has completed its work, two processes exist, and, in each process, execution continues from the point where fork() returns.
The fork() called once but returns twice!
Linux will make an exact copy of the parent's address space and give it to the child. Therefore, the parent and child processes have separate address spaces. Therefore, the new process is almost identical to the original, and executing the same code. However, the child process has its own data space, environment, and file descriptor. So, combined with the exec() functions, fork() is what we need to create a new process.
The fork() returns a process ID, PID so that we can distinguish the two processes via the value returned from fork(). For the parent, fork() returns the process ID of the newly created child. This is useful because the parent may create, and thus need to track, several children (by wait() call). For the child, fork() returns 0. If necessary, the child can obtain its own process ID using getpid(), and the process ID of its parent using getppid(). If fork() fails it returns -1, and this is due to a limit on the number of child processes (CHILD_MAX). In that case, errno will be set to EAGAIN. If there is not enough space for an entry in the process table, or not enough virtual memory, the errno will be set to ENOMEM.
Which one runs first after the fork()?
Parent process or child process?
Well, it's undefined!
Picture from "The Linux Programming Interface"
Here is the summary:
System call fork()():
- If fork() returns a negative value, the creation of a child process was unsuccessful.
- The fork() returns a zero to the newly created child process.
- The fork() returns a positive value, the process ID of the child process, to the parent. The returned process ID is of type pid_t defined in sys/types.h. Normally, the process ID is an integer. Moreover, a process can use function getpid() to retrieve the process ID assigned to this process.
#include <stdio.h> #include <stdlib.h> #include <string.h> #define BUF_SIZE 150 int main() { int pid = fork(); char buf[BUF_SIZE]; int print_count; switch (pid) { case -1: perror("fork failed"); exit(1); case 0: /* When fork() returns 0, we are in the child process. */ print_count = 10; sprintf(buf,"child process: pid = %d", pid); break; default: /* + */ /* When fork() returns a positive number, we are in the parent process * (the fork return value is the PID of the newly created child process) */ print_count = 5; sprintf(buf,"parent process: pid = %d", pid); break; } for(;print_count > 0; print_count--) { puts(buf); sleep(1); } exit(0); }
Output is:
child process: pid = 0 parent process: pid = 13510 child process: pid = 0 parent process: pid = 13510 child process: pid = 0 parent process: pid = 13510 child process: pid = 0 parent process: pid = 13510 child process: pid = 0 parent process: pid = 13510 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0
As we can see from the output, the call to fork() in the parent returns the PID of the new child process. The new process continues to execute just like the old, with the exception that in the child process the call to fork() returns 0.
When we start a child process with fork(), it runs independently. But sometimes, we want to find out when a child process has finished. If the parent finishes ahead of the child, as the case in the example above, we may get confused, and it may not what we want to happen. So, we need to arrange for the parent process to wait until the child finishes by calling wait().
The primary role of wait() is to synchronization with children.
- Suspends current process (the parent) until one of its children terminates.
- Return value is the pid of the child process that terminated, and on a successful return, the child process is reaped by the parent.
- If child_status != NULL, the value of the status will be set to indicate why the child process terminated.
- If parent process has multiple children, wait() will return when any of the children terminates.
- The waitpid() can be used to wait on a specific child process.
A parent process needs to know when one of its child processes changes state, when the child terminates, or is stopped by a signal.The wait() is one of the two techniques used to monitor child processes: along with the SIGCHLD signal.
The system call wait() blocks the calling process until one of its child processes exits or a signal is received. The wait() takes the address of an integer variable and returns the process ID of the completed process.
#include <sys/wait.h> pid_t wait(int *child_status);
Again, one of the main purposes of wait() is to wait for completion of child processes.
The execution of wait() could have two possible situations.
- If there are at least one child processes running when the call to wait() is made, the caller will be blocked until one of its child processes exits. At that moment, the caller resumes its execution.
- If there is no child process running when the call to wait() is made, then this wait() has no effect at all. That is, it is as if no wait() is there.
The wait(&status) system call has two purposes.
- If a child of this process has not yet terminated by calling exit(), then wait() suspends execution of the process until one of its children has terminated.
- The termination status of the child is returned in the status argument of wait().
#include <stdio.h> #include <stdlib.h> #include <string.h> #define BUF_SIZE 150 int main() { int pid = fork(); char buf[BUF_SIZE]; int print_count; switch (pid) { case -1: perror("fork failed"); exit(1); case 0: print_count = 10; sprintf(buf,"child process: pid = %d", pid); break; default: print_count = 5; sprintf(buf,"parent process: pid = %d", pid); break; } if(!pid) { int status; int pid_child = wait(&status;); } for(;print_count > 0; print_count--) puts(buf); exit(0); }
The parent is now waiting for the child process to finish:
child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 child process: pid = 0 parent process: pid = 22652 parent process: pid = 22652 parent process: pid = 22652 parent process: pid = 22652 parent process: pid = 22652
The parent process is now using the wait() system call to suspend its own execution by checking the return value from the call.
The exit(status) library function terminates a process, making all resources (memory, open file descriptors, etc.) used by the process available for subsequent reallocation by the kernel. The status argument is an integer that determines the termination status for the process. Using the wait() system call, the parent can retrieve this status.
Note - "The exit() library function is layered on top of the _exit() system call.
... after a fork(), generally only one of the parent and child terminate by calling exit() the other process should terminate using _exit()" - The Linux Programming Interface.
The lifetimes of parent and child processes are usually not the same: either the parent outlives the child or vice versa.
What happens to a child that terminates before its parent has had a chance to perform a wait()? The point here is that, although the child has finished its work, the parent should still be permitted to perform a wait() at some later time to determine how the child terminated. The kernel deals with this situation by turning the child into a zombie. This means that most of the resources held by the child are released back to the system to be reused by other processes.
In fact, when a process dies on Linux, it isn. -from what-is-a-zombie-process-on-linux.
// file - zombie.c #include <stdio.h> #include <stdlib.h> #include <string.h> #define BUF_SIZE 150 int main() { int pid = fork(); char buf[BUF_SIZE]; int print_count; switch (pid) { case -1: perror("fork failed"); exit(1); case 0: print_count = 2; sprintf(buf,"child process: pid = %d", pid); break; default: print_count = 10; sprintf(buf,"parent process: pid = %d", pid); break; } for(;print_count > 0; print_count--) { puts(buf); sleep(1); } exit(0); }
If we run the code above, the child process will finish its task ahead of parent process, and will exist as a zombie until the parent finishes as shown in the output below:
The top command, and the ps command display zombie processes.
$ ./zombie $ ps -la F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 0 S 601 25350 12377 0 75 0 - 381 - pts/0 00:00:00 zombie 1 Z 601 25351 25350 0 78 0 - 0 exit pts/0 00:00:00 zomb <defunct> 0 R 601 25352 12377 0 77 0 - 1054 - pts/0 00:00:00 ps
The description below is also from what-is-a-zombie-process-on-linux..
If the parent terminates abnormally, the child process gets the process with init as parent. The zombie will remain in the process table until collected by the init process. Though they will stay for short period of time, they consumes resources until init removes them.
We can't kill zombie processes as we can kill normal processes with the SIGKILL signal - zombie processes are already dead. Regarding zombies, UNIX systems imitate the movies - a zombie process can't be killed by a signal, not even the (silver bullet) SIGKILL. Actually, this was the intentional feature to ensure that the parent can always eventually perform a wait(). Bear in mind that we don't need to get rid of zombie processes unless we have a large amount on our system - a few zombies are harmless. However, there are a few ways.. We can restart the parent process after closing it.
If a parent process continues to create zombies, it should be fixed so that it properly calls wait() to reap its zombie children.
A zombie process is not the same as an orphan process. An orphan process is a process that is still executing, but whose parent has died. They do not become zombie processes; instead, they are adopted by init (process ID 1)
In other words, after a child's parent terminates, a call to getppid() will return the value 1. This can be used as a way of determining if a child's true parent is still alive (this assumes a child that was created by a process other than init).
Signal is a notification, a message sent by either operating system or some application to our program. Signals are a mechanism for one-way asynchronous notifications. A signal may be sent from the kernel to a process, from a process to another process, or from a process to itself. Signal typically alert a process to some event, such as a segmentation fault, or the user pressing Ctrl-C.
Linux kernel implements about 30 signals. Each signal identified by a number, from 1 to 31. Signals don't carry any argument and their names are mostly self explanatory. For instance SIGKILL or signal number 9 tells the program that someone tries to kill it, and SIGHUP used to signal that a terminal hangup has occurred, and it has a value of 1 on the i386 architecture.
With the exception of SIGKILL and SIGSTOP which always terminates the process or stops the process, respectively, processes may control what happens when they receive a signal. They can
- accept the default action, which may be to terminate the process, terminate and coredump the process, stop the process, or do nothing, depending on the signal.
- Or, processes can elect to explicitly ignore or handle signals.
- Ignored signals are silently dropped.
- Handled signals cause the execution of a user-supplied signal handler function. The program jumps to this function as soon as the signal is received, and the control of the program resumes at the previously interrupted instructions.
- Raised
- Caught
- Acted upon
- Ignored
-
The term raise is used to indicate the generation of a signal, and the term catch is used to indicate the receipt of a signal.
Signals are raised by error conditions, and they are generated by the shell and terminal handlers to cause interrupts and can also be sent from one process to another to pass information or to modify the behavior.
Signals can be:
If a process receives signals such as SIGFPE, SIGKILL, etc., the process will be terminated immediately, and a core dump file is created. The core file is an image of the process, and we can use it to debug.
Here is an example of the common situation when we use a signal: when we type the interrupt character (Ctrl+C), the ISGINT signal will be sent to the foreground process (the program currently running). This will cause the program to terminate unless it has some arrangement for catching the signal.
The command kill can be used to send a signal to a process other than the current foreground process. To send a hangup signal to a shell running on a different terminal, we can use the following command:
kill -HUP pid_number
There is another useful variant of kill is killall. This allows us to send a signal to all processes running a specified command. For example, to send a reread signal to the inetd program:
$ killall -HUP inetd
The command causes the inetd program to reread its configuration options.
In the following example, the program will reacts to the Ctrl+C rather than terminating foreground task. But if we hit the Ctrl+C again, it will do what it usually does, terminating the program.
#include <stdio.h> #include <unistd.h> #include <signal.h> void my_signal_interrupt(int sig) { printf("I got signal %d\n", sig); (void) signal(SIGINT, SIG_DFL); } int main() { (void) signal(SIGINT,my_signal_interrupt); while(1) { printf("Waiting for interruption...\n"); sleep(1); } }
Output from the run when we typed Ctrl+C two times:
Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... I got signal 2 Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption... Waiting for interruption...
The my_signal_interrupt() is called when we give SIGINT signal by typing Ctrl+C. After the interrupt function my_signal_interrupt() has completed, the program moves on, but the signal action is restored to the default. So, when it gets a second SIGINT signal, the program takes the default action, which is terminating the program. | http://www.bogotobogo.com/Linux/linux_process_and_signals.php | CC-MAIN-2017-26 | refinedweb | 4,951 | 71.24 |
Somehow better way than to create a csv reader. Now, I realize there is already an
Import-Csv cmdlet that types your code, but I figured I’d write one from scratch, since apparently that’s what I tend to do (instead of inventing anything new).
My hope was to make it so that it would emit strongly typed objects (which it does), but forwarning, you don’t get intellisense on it in the shell. This is due to the fact that types are generated at runtime and not compile time.
For the lazy, here is a link to the github.
The Plan
At first I thought I’d just wrap the F# csv type provider, but I realized that the type provider needs a template to generate its internal data classes. That won’t do here because the cmdlet needs to accept any arbitrary csv file and strongly type at runtime.
To solve that, I figured I could leverage the F# data csv library which would do the actual csv parsing, and then emit runtime bytecode to create data classes representing the header values.
As emitting bytecode is a pain in the ass, I wanted to keep my data classes simple. If I had a csv like:
Name,Age,Title Anton,30,Sr Engineer Faisal,30,Sr Engineer
Then I wanted to emit a class like
public class Whatever{ public String Name; public String Age; public String Title; public Whatever(String name, String age, String title){ Name = name; Age = age; Title = title; } }
Since that would be the bare minimum that powershell would need to display the type.
Emitting bytecode
First, lets look at the final result of what we need. The best way to do this is to create a sample type in an assembly and then to use
Ildasm (an IL disassembler) to view the bytecode. For example, the following class
using System; namespace Sample { public class Class1 { public String foo; public String bar; public Class1(String f, String b) { foo = f; bar = b; } } }
Decompiles into this:
.method public hidebysig specialname rtspecialname instance void .ctor(string f, string b) cil managed { // Code size 24 (0x18) string Sample.Class1::foo IL_000f: ldarg.0 IL_0010: ldarg.2 IL_0011: stfld string Sample.Class1::bar IL_0016: nop IL_0017: ret } // end of method Class1::.ctor
While I didn’t just divine how to write bytecode by looking at the IL (I followed some other blog posts), when I got an “invalid bytecode” CLR runtime error, it was nice to be able to compare what I was emitting which what I expected to emit. This way simple errors (like forgetting to load something on the stack) became pretty apparent.
To emit the proper bytecode, we need a few boilerplate items: an assembly, a type builder, an assembly builder, a module builder, and a field builder. These are responsible for the metadata you need to finally emit your built type.
let private assemblyName = new AssemblyName("Dynamics") let private assemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.RunAndSave) let private moduleBuilder = assemblyBuilder.DefineDynamicModule(assemblyName.Name, assemblyName.Name + ".dll") let private typeBuilder typeName = moduleBuilder.DefineType(typeName, TypeAttributes.Public) let private fieldBuilder (typeBuilder:TypeBuilder) name fieldType : FieldBuilder = typeBuilder.DefineField(name, fieldType, FieldAttributes.Public) let private createConstructor (typeBuilder:TypeBuilder) typeList = typeBuilder.DefineConstructor(MethodAttributes.Public, CallingConventions.Standard, typeList |> List.toArray)
None of this is really all that interesting and hopefully is self explanatory.
The
fieldBuilder is important since that will let us declare our local fields. In fact, once we’ve declared our local fields using the builder, the only bytecode we have to emit is the constructor (which accepts arguments and instantiates fields in them).
Here is the necessary code to build such a constructor.
let private callDefaultConstructor (gen: ILGenerator) = let objType = typeof<obj> gen.Emit(OpCodes.Call, objType.GetConstructor(Type.EmptyTypes)) gen.Emit(OpCodes.Ldarg_0) let private loadThis (gen: ILGenerator) = gen.Emit(OpCodes.Ldarg_0) gen let private emitNewInstanceRef (gen : ILGenerator) = gen |> loadThis |> callDefaultConstructor let private assignField (argIndex : int) (field : FieldBuilder) (gen : ILGenerator) = gen.Emit(OpCodes.Ldarg, argIndex) gen.Emit(OpCodes.Stfld, field) gen let private loadConstructorArg (gen : ILGenerator) ((num, field) : int * FieldBuilder) = gen |> loadThis |> assignField num field let private completeConsructor (gen : ILGenerator) = gen.Emit(OpCodes.Ret) let private build (fields : FieldBuilder list) (cons : ConstructorBuilder) = let generator = cons.GetILGenerator() generator |> emitNewInstanceRef let fieldsWithIndexes = fields |> List.zip [1..(List.length fields)] fieldsWithIndexes |> List.map (loadConstructorArg generator) |> ignore generator |> completeConsructor
A few points of interest.
- Calls that make reference to OpCodes.Ldarp_0 are loading the “this” object to work on.
- OpCodes.Stdfld sets the passed in field to the value previously pushed on the stack.
- Opcodes.Ldarg with the index passed to it is a dynamic way of saying “load argument X onto the stack”
The final piece of the block is to tie it all together. Create field instances, take the target types and create a constructor, then return the type.
type FieldName = string type TypeName = string let make (name : TypeName) (types : (FieldName * Type) list)= let typeBuilder = typeBuilder name let fieldBuilder = fieldBuilder typeBuilder let createConstructor = createConstructor typeBuilder let fields = types |> List.map (fun (name, ``type``) -> fieldBuilder name ``type``) let definedConstructor = types |> List.map snd |> createConstructor definedConstructor |> build fields typeBuilder.CreateType()
Instantiating your type
Lets say we have a record that describes a field, its type, and a target value
type DynamicField = { Name : String; Type : Type; Value: obj; }
Then we can easily instantiate a target type with
let instantiate (typeName : TypeName) (objInfo : DynamicField list) = let values = objInfo |> List.map (fun i -> i.Value) |> List.toArray let types = objInfo |> List.map (fun i -> (i.Name, i.Type)) let t = make typeName types Activator.CreateInstance(t, values)
It’s important to note that
values is an
obj []. Because its an object array we can pass it to the activates overloaded function that wants a
params obj[] and so it’ll treat each object in the object array as another argument to the constructor.
Dynamic static typing of CSV’s
Since there is a way to dynamically create classes at runtime, it should be easy for us to leverage this to do the csv strong typing. In fact, the entire reader is this and emits to you a list of strongly typed entries:
open System open System.Reflection open System.IO open DataEmitter open FSharp.Data.Csv module CsvReader = let rand = System.Random() let randomName() = rand.Next (0, 999999) |> string let defaultHeaders size = [0..size] |> List.map (fun i -> "Unknown Header " + (string i)) let load (stream : Stream) = let csv = CsvFile.Load(stream).Cache() let headers = match csv.Headers with | Some(h) -> h |> Array.toList | None -> csv.NumberOfColumns |> defaultHeaders let fields = headers |> List.map (fun fieldName -> (fieldName, typeof<string>)) let typeData = make (randomName()) fields [ for item in csv.Data do let paramsArr = item.Columns |> Array.map (fun i -> i :> obj) yield Activator.CreateInstance(typeData, paramsArr) ]
The
randomName() is a silly workaround to make sure I don’t create the same
Type in an assembly. Each time you run the csv reader it’ll create a new random type representing that csv’s data. I could maybe have optimized this that if someone calls in for a type with the same list of headers that another type had then to re-use that type instead of creating a duplicate, oh well.
Using the reader from the cmdlet
Like I mentioned in the beginning, there is a major flaw here. The issue is that since my types are generated at runtime (which was really fun to do), it doesn’t help me at all. Cmdlet’s need to expose their output types via an
OutputType attribute, and since its an attribute I can’t expose the type dynamically.
Either way, here is the entire csv cmdlet
namespace CsvHandler open DataEmitter open System.Management.Automation open System.Reflection open System open System.IO [<Cmdlet("Read", "Csv")>] type CsvParser() = inherit PSCmdlet() [<Parameter(Position = 0)>] member val File : string = null with get, set override this.ProcessRecord() = let (fileNames, _) = this.GetResolvedProviderPathFromPSPath this.File for file in fileNames do use fileStream = File.OpenRead file fileStream |> CsvReader.load |> List.toArray |> this.WriteObject
This reads an implicit file name (or file with wildcards) and leverages the inherited
PsCmdlet class to resolve the path from the passed in file (or expand any splat’d files like
some*). All we do now is pass each file stream to the reader, convert to an array, and pass it to the next item in the powershell pipe.
See it in action
Maybe this whole exercise was overkill, but let’s finish it out anyways. Let’s say we have a csv like this:
We can do the following
And filter on items
Cleanup
After getting draft one done, I thought about the handling of the IL generator in the Data Emitter. There are two things I wanted to accomplish:
1. Clean up having to seed the generator reference to all the functions
2. Clean up passing an auto incremented index to the field initializer
After some mulling I realized that implementing a computation expression to handle the seeded state would be perfect for both scenarios. We can create an IlBuilder computation expression that will hold onto the reference of the generator and pass it to any function that uses
do! syntax. We can do the same for the auto incremented index with a different builder. Let me show you the final result and then the builders:
let private build (fields : FieldBuilder list) (cons : ConstructorBuilder) = let generator = cons.GetILGenerator() let ilBuilder = new ILGenBuilder(generator) let forNextIndex = new IncrementingCounterBuilder() ilBuilder { do! loadThis do! callDefaultConstructor do! loadThis for field in fields do do! loadThis do! forNextIndex { return loadArgToStack } do! field |> setFieldFromStack do! emitReturn }
And both builders:
(* encapsulates an incrementable index *) type IncrementingCounterBuilder () = let mutable start = 0 member this.Return(expr) = start <- start + 1 expr start (* Handles automatically passing the il generator through the requested calls *) type ILGenBuilder (gen: ILGenerator) = member this.Bind(expr, func)= expr gen func () |> ignore member this.Return(v) = () member this.Zero () = () member this.For(col, func) = for item in col do func item member this.Combine expr1 expr2 = () member this.Delay expr = expr()
Now all mutability and state is contained in the expression. I think this is a much cleaner implementation and the functions I used in the builder workflow didn’t have to have their function signatures changed!
Conclusion
Sometimes you just jump in and don’t realize the end goal won’t work, but I did learn a whole lot figuring this out so the time wasn’t wasted.
1 thought on “Strongly typed powershell csv parser” | https://onoffswitch.net/2014/03/22/strongly-typed-powershell-csv-parser/ | CC-MAIN-2020-16 | refinedweb | 1,751 | 56.86 |
Sometime ago I was contacted by Patrick Smacchia, who is CEO and a lead developer in NDepend. In one sentence, NDepend is a tool that analyses your code and tells what is wrong and can be improved to avoid technical debt. Patrick offered my a free pro license for NDepend. A few years ago I was using it so it was an interesting proposition. We agreed that if I find NDepend useful, I will write a post about it and share my experience. As for me it's a win-win situation.
So far I used NDepend in 2 projects. The first one is a new project that I've started recently. The second one is a code written by someone else but I was asked to make an audit of it. I'm still not an expert of NDepend, nonetheless I have some thoughts to share with you. At the same time this is not a tutorial how to use NDepend. If you need one go for example here.
Case 1
When I started NDepend for the first time, after a long brake, I was a little but overwhelmed by the number of available options. However, quite quickly I learned how to use it. The first thing that struck me was the number of code smells detected by NDepend in my new project. I and my college spare no pains to write high quality code and despite that there are many things to improve. I think that when you work with some code for a while, you simply start to ignore some issues like too long methods, too big classes, methods with too many parameters. You need someone or something that will point you what is wrong. NDepend is good in doing that.
NDepend detects issues based on the rules. These rules are defined in so called Code Query LINQ which seems to be powerful. If you are familiar with "normal" LINQ, rules should be easy to understand and modify for you. I haven't tried yet but it also seems to be a great extension point and if you want you can define your own rules. It's worth mentioning that not every rule was obvious to me so it was very helpful that each rule has a description and an explanation how to fix it. If you don't see it, just click View Description button. Initially, I missed this button and I was annoyed that I don't understand everything. For guys like me, in the version of 2017.3 of NDepend team will introduce a few guided tour tooltips like the one below:
I also like that NDepend tracks changes from analysis to analysis. Thanks to that you can observe trends i.e. how many issues have been added/fixed since the last analysis. Really useful!
They are also little things that make you happy. For example, NDepend doesn't analyse directly the source code but assemblies. So, it's nice that we're informed that the source code is out of sync with PDB files. It can happen easily if you switch from branch to branch in Git. What is natural NDepend integrates with Visual Studio but you can also use its standalone version.
Suggestions
What would I improve? There are some minor things. When you start a new analyses, by default NDepend will analyse all the projects in the solution, including unit tests, acceptance tests etc. I think that they should be excluded from analysis by default. Of course you can exclude them manually or use the filter function (see the screenshot below) but it's easy to forget.
I also noticed that NDepend reports SpecFlow *.feature files as having invalid extension. I also miss an easy option to ignore/suppress some warnings reported by NDepend. I know that I can potentially modify a definition of a rule but it's not a perfect solution for me. According to Patrick, the next year SuppressWarning attribute will be supported by NDepend what should help.
NDepend has a rule Avoid methods with too many parameters and I agree that methods should not have too many parameters. However, if we use dependency injections it is quite common that constructors have more parameters than usually. In this case they should not be reported. Here also NDepend team thinks what to do.
This one is not an issue but feature request. I know that NDepend is focused on analysing the code. However, if it's able to detect so many issues, it will be great to provide an option to automatically or semi-automatically fix and refactor them. Of course, it's something completely new, but would be an amazing function ;)
Case 2
When I got some experience with NDepend, I decided to use it to make an audit. Firstly, I roughly analysed the code on my own and then I ask NDepend to do it again. Thanks to that I found a few more problems e.g.:
- There were 4 places in the code where Thread.Sleep was used. Usually it means some threading problems.
- One class with public fields was found.
- The namespaces were not consistent with the structure of folders.
- Some very long methods were found.
- ...
Suggestions
I have one suggestion as to Search View. In order to see issues found for a given element you need to select it, open the context menu and then click Select issues.... A little bit cumbersome. I would like to see issues when hovering a mouse over found elements. For example in a tooltip. As far as I know it'll be improved in 2018.
I didn't find the rule Avoid having different types with same name very useful. It reports classes like Startup, SwaggerConfig... which probably have such names in any project around all the world :) The good information is that it should be changed in the next version of NDepend. Besides, I found manually a few cases where disposable (instances of DirectorySearcher and DirectoryEntry) resources were not disposed. NDepend doesn't detect this issues. However, I think that it's feasible. Patrick says that it's something that definitely will be investigated.
The source code analysed by me contained a lot of magic strings in the code. Many of them were duplicated multiple times. However, I didn't find any rule that was violated by this. I think that there should be a rule suggesting to move this strings to const fields or to introduce static class to store these strings or to move them to resources. Patrick pointed that currently NDepend warns about public const magic strings but so far it's not aware of string values (this will come but later).
Last but not least the navigation function i.e. Navigate forward and Navigate backward buttons will be useful. It's quite easy to get lost when you are analysing results within Visual Studio or in the standalone NDepend. For me it is intuitive to press Navigate forward button a few times. Of course, if you are analysing the generated report (which is collection of HTML sites), the browser supports navigation.
Summary
Without any hesitation I can recommend NDepend. Despite some issues it's very helpful and I think that it can really help keeping your code clean. Besides, many of these issues are in fact minor and will be fixed soon. I also appreciate the contact with Patrick. If you have any problems with NDepend, I'm pretty sure that he or his team will respond quickly. I know that because I checked.
1 comments:
Great Article
Final Year Projects for CSE in Dot Net
FInal Year Project Centers in Chennai
JavaScript Training in Chennai
JavaScript Training in Chennai | https://www.michalkomorowski.com/2017/09/ndepend-my-point-of-view.html | CC-MAIN-2019-13 | refinedweb | 1,287 | 74.39 |
Kubernetes Ingress with NGINX
Modern software architectures often consist of many small units (e.g. Microservices). Some serve internal purposes but others have to be exposed to a broader audience. This could be within a private network or even over the internet. But how do you securely and consistently expose those services? Especially considering that assigning an IP to each service to be exposed, might not be possible, due to the limited number of available IP addresses. And what about security? Should every single service be accessible by the outside world?
Classical system architectures resolved these questions by using firewalls and reverse proxies, that routed requests to the appropriate target, located in a secure private network. But these configurations were often manually maintained. Changing oder adding routing rules often took a long time for those to be applied.
Now imagine, that you could write your own routing rules and those get applied in seconds and that those rules could even be part of the application source code base. Even more, endpoints like REST-APIs, Static content or dynamic web frontends, could be exposed through one IP address, possibly serving content for multiple domain names.
This is exactly what Ingress does and where it shines.
What is an Ingress?
Ingress is a Kubernetes resource type, that can be applied just like other resources. Its purpose is to define routing cluster-external requests to cluster-internal services. An Ingress will map URLs (hostname and path) to cluster-internal services.
Let’s assume that a service named
frontend-svc should be made available under the domain
sample-service.example.com . This would require an Ingress resource definition like the following:
At the first glance, the example above might seem relatively complex. But once you understand the structure of the Ingress resource, it is very simple. Let’s split the example into smaller sections:
Header (Lines 1–4)
This is the common header for all Kubernetes resources. It identifies the resource as
kind: Ingress and some metadata.
Rules (Lines 6 & 7)
This is the actual core of the Ingress definition. This section defines how incoming requests shall be mapped, based on the requested hostname. In our example, the Ingress definition targets the hostname
sample-service.example.com .
Path Mapping (Lines 9–13)
The path mapping specifies how request paths shall be mapped to the actual backends. Backends are Services, deployed in the cluster, identified by their name and a port.
An Ingress definition might even consist of multiple rules and rules with multiple paths. Let’s pretend, that we would like to expose the REST API, that the frontend is based on, under
/api, backed by the service
backend-svcon port
8081:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: digitalfrontiers-sample-ingress
spec:
rules:
- host: sample-service.example.com
http:
paths:
- path: /
backend:
serviceName: frontend-svc
servicePort: 80
- path: /api
backend:
serviceName: backend-svc
servicePort: 8081
This Ingress definition will route requests on
/api to. It is important to note, that the full path will be preserved by default. Given path mapping with
path: /api/v1/resources/ and a request for
/api/v1/resources/ab0394a, will result in a call with the full path (
/api/v1/resources/ab0394a) on the backend.
Unlike other resources, Kubernetes has no built-in support for Ingresses. This might sound strange, considering I described Ingress as a Kubernetes native resource. The Ingress is just a description of how routing should be performed. The actual logic has to be performed by an “Ingress Controller”.
Ingress Controller
The Ingress is the definition of how the routing should be done. But the execution of those rules has to be performed by an “Ingress Controller”. Due to this, creating Ingress resources in a Kubernetes cluster won’t have any effect until an Ingress Controller is available.
The Ingress Controller is responsible of routing requests to the appropriate services within the Kubernetes cluster. How an Ingress Controller executes this task, is not explicitly defined by Kubernetes. Thus an Ingress Controller can handle requests in a way that works best for the cluster.
The Ingress Controller is responsible to monitor the Kubernetes cluster for any new Ingress resources. Based on the Ingress resource, the Ingress Controller will setup the required infrastructure to route requests accordingly.
NGINX Ingress Controller
Further on, we’ll focus on the NGINX Ingress Controller, being an Ingress Controller implementation based on the popular NGINX http-engine. Although there are many different Ingress Controller implementations, the NGINX based implementation seems to be the most commonly used one. It is a general purpose implementation that is compatible with most Kubernetes cluster deployments.
This is a simplified representation of the actual deployment, but sufficient to understand the most important concepts. The two most obvious parts are the Ingress Controller itself and an associated Service of type
LoadBalancer. The Service will receive a public IP under which the Ingress Controller will be made available. Requests to this IP will be handled by the Ingress Controller and forwarded to the actual services according to the Ingress resources.
Note that there is no DNS handling in any way within this structure. Mapping domain names to the load balancer IP, is out of scope of the NGINX Ingress Controller. Typically this will be configured outside the cluster. The most common configuration is to create a wildcard mapping for all subdomains to that particular IP. As an example, let’s pretend you’re working on a web application that shall be reached under the domain
my-app.com . In this case, a DNS record should be created that resolves all DNS queries for
my-app.com (like,
services.my-app.com …) to the IP assigned for the Service mentioned above, that points to the NGINX Ingress Controller. Now all requests will be handled by that IP and the NGINX Ingress Controller will perform the routing.
Deploying the NGINX Ingress Controller
The deployment of the NGINX Ingress Controller is pretty straightforward. It consists of two parts:
- Base deployment of the Ingress Controller, required for all Kubernetes clusters
- Provider specific deployment, depending on the actual provider. Here we’ll focus on the generic deployment. (Especially AWS and bare metal clusters require additional configuration, which has been well documented in)
General Process
The deployment is straightforward and uses existing resource definitions that can be applied to the cluster. Even though the resource definitions are created by a trustworthy source (they are part of the github Kubernetes organisation), external resources should always be inspected. This will not only ensure, that no unexpected content will be deployed, but additionally give you a basic understanding of how deployed content is structured.
In all cases where I simply write a
kubectl apply -f https://<some-url>.yaml, you should translate that to the following sequence:
curl https://<some-url>.yaml #1
kubectl apply --dry-run -f https://<some-url>.yaml #2
kubectl apply -f https://<some-url>.yaml #3
- Use
curlto download and inspect the resource
- Before applying the resource, perform a dry run. This will validate the resource and tell you what will be deployed to the cluster.
- Perform the actual deployment
With those general precautions out of the way, let’s stat with the actual deployment.
Step-By-Step Deployment
As stated before, the deployment consists of two parts. We start off by deploying the base components required for all Kubernetes cluster types. The deployment can be done using the following command:
kubectl apply-f
This deploys the biggest part of the whole infrastructure and thus creates a number of resources in the Kubernetes cluster:
limitrange/ingress-nginx created
The most important part is the NGINX Ingress Controller Pod. You can watch the controller becoming available using
kubectl get pods —-all-namespaces —-watch -l “app.kubernetes.io/name=ingress-nginx”
The only missing piece is how the Ingress Controller will be exposed. As stated before, in the generic case this will be achieved by a service of type
LoadBalancer . This is the last step in the deployment of the Ingress Controller:
kubectl apply -f
A a result, the
LoadBalancer service will be deployed and receive the external IP address. To receive the public IP, inspect the service in the
ingress-nginx namespace:
kubectl -n ingress-nginx get service
It might take Kubernetes a moment to request and assign an external IP to your service. But as a result you’ll see an output similar to the following:
Once the
EXTERNAL-IP is assigned, the NGINX Ingress Controller is ready to serve.
Conclusion
This is a short introduction to Ingress and the Ingress Controller, giving you a basic understanding of the concept. Once you’ve wrapped your head around the concept, Ingress resources are an efficient way to expose your services. | https://medium.com/digitalfrontiers/kubernetes-ingress-with-nginx-93bdc1ce5fa9?source=collection_home---4------2----------------------- | CC-MAIN-2020-16 | refinedweb | 1,458 | 55.03 |
Archives
Lost in a sea of consciousness!
File and File Container Wrapper Library on CodePlex
Sometime ago we were talking about file wrappers and testing these things on the ALT.NET mailing list. It's a rather boring task to test say a file system without actually writing files (which you might not want to do). Wouldn't it be nice to have a wrapper around a file system so a) you could forget about writing one yourself and b) you could throw something at Rhino mocks because you really don't want to test writing files.
I've put together a project implementing this and it's now released on CodePlex. It's composed of interfaces (IFile and IFileContainer) that provide abstractions over whatever kind of file or file container you want. This allows you to wrap up a file system (indivdual files or containers of files like folders) and test them accordingly. Concrete implementations can be injected via a Dependency Injection tool and you can sit back and forget about the rigors of testing file access.
There are 3 concrete implementations (found in FileStrategyLib.dll) that implement a file system (IFile), folders (IFileContainer), and zip files (IFileContainer using SharpZipLib). There's also a test project with a whopping 10 unit tests (providing 97% coverage, darn) included. More documentation and sample code is available on the CodePlex site.
You can check out the project here on CodePlex:
1.0 is available and released under the MIT License. It contains the binary and source distributions. I'll check the code into the source repository later tonight.
Note that this is a very simple library concisting of 2 interfaces and about 50 lines of code. Of course being an open source project I encourage you to check it out and enhance it if you want. It's not the be-all system to get all kinds of information out but should get you started. It was originally written just for a simple system to create and add files to a folder and zip file but can be extended. Otherwise, consider it an example of having to wrap and test a system like a file system which you may run into from time to time.
Enjoy!
Update: Source code tree is checked in and posted now.
Big Visible Cruise WPF Enhancement
Having some fun with Ben Carey's Big Visible Cruise app. BVC is a WPF app that looks at CruiseControl (.NET, Ruby, and Java versions) and displays a radiator dashboard of the status of projects. As each project is building, the indicator will turn yellow then green if it suceeds or red if it fails. It's all very cool and I jumped all over this so we could have a visible display of our projects in the office.
Here's the default look that Ben provided:
I submitted a request to be able to control the layout and he reciprocated with a layout option (using skins in WPF). Here's the updated layout he provided:
I had a problem because with only a few (8 in my case) projects, the text was all goofy. The layout was all cool but I wanted something a little flashier and better to read, so some XAML magic later I came up with this:
Here's a single button with some funky reflection:
Okay, here's how you do it. You need to modify two files. First here's the stock LiveStatusBase.xaml file. This file is the base for displaying the bound output from the CruiseStatus for a single entry:
<DataTemplate x: <Border BorderBrush="Black" BorderThickness="1"> <TextBlock TextAlignment="Center" Padding="3" Background="{Binding Path=CurrentBuildStatus, Converter={StaticResource BuildStatusToColorConverter}}" Text="{Binding Path=Name, Converter={StaticResource BuildNameToHumanizedNameConverter}}" /> </Border> </DataTemplate>
It's just a TextBlock bound to the data source displaying the name of the project and using the background color for the status of the build. Here's the modifications I made to make it a little more sexy:
<DataTemplate x:
<Grid Margin="3"> <Grid.BitmapEffect> <DropShadowBitmapEffect /> </Grid.BitmapEffect> <Rectangle Opacity="1" RadiusX="9" RadiusY="9" Fill="{Binding Path=CurrentBuildStatus, Converter={StaticResource BuildStatusToColorConverter}}" StrokeThickness="0.35"> <Rectangle.Stroke> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <GradientStop Color="White" Offset="0" /> <GradientStop Color="#666666" Offset="1" /> </LinearGradientBrush> </Rectangle.Stroke> </Rectangle> <Rectangle Margin="2,2,2,0" VerticalAlignment="Top" RadiusX="6" RadiusY="6" Stroke="Transparent" Height="15px"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <GradientStop Color="#ccffffff" Offset="0" /> <GradientStop Color="transparent" Offset="1" /> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Grid Margin="5"> <TextBlock TextWrapping="Wrap" TextAlignment="Center" HorizontalAlignment="Center" VerticalAlignment="Center" FontSize="32" FontWeight="Bold" Padding="10,10,10,10" Foreground="Black" FontFamily="Segoe Script, Verdana" Background="{Binding Path=CurrentBuildStatus, Converter={StaticResource BuildStatusToColorConverter}}" Text="{Binding Path=Name, Converter={StaticResource BuildNameToHumanizedNameConverter}}"> </TextBlock> </Grid> </Grid> </DataTemplate>
I added a grid. The top rectangle is to define the entire area for each project (filling it in with the build status color) and the next one is the highlight (using a LinearGradientBrush) on the button. Then the TextBlock with the name of the project and it's build status gets filled in.
Now here's the stock BigVisibleCruiseWindow.xaml (the main window):
<DockPanel>
<Border DockPanel. <DockPanel LastChildFill="False"> <TextBlock DockPanel. <Button DockPanel. </DockPanel> </Border> <Viewbox DockPanel. <ItemsControl ItemsSource="{Binding}" Style="{DynamicResource LiveStatusStyle}" /> </Viewbox>
</DockPanel>
The main window used a DockPanel and displayed some addition things (there's a button there for choosing options but it's non-functional). Here's my changes:
<Grid> <ItemsControl ItemsSource="{Binding}" Style="{DynamicResource LiveStatusStyle}" /> </Grid>
I just simply replaced the DockPanel with a Grid.
That's it! Feel free to experiment with different looks and feels and maybe submit them to Ben as he could include them in maybe a set of skins to use.
Note: You need to download the code from the repository as the 0.5 release Ben put out doesn't include the skins.
TreeSurgeon Updates - 2005/2008 support
Just a quick update to TreeSurgeon as we've been working on some plans on updating the tool to have a more flexible framework for adding new tools and updating the output for newer versions of the .NET framework.
Donn Felker added VS2005 support so that's in right now. If you grab the latest from source control or download the latest ChangeSet from CodePlex here, you can get TreeSurgeon spitting out VS2005 and .NET 2.0 solutions. I'm just finishing up VS2008 support now and that'll be in the planned 1.2 release that we're coming out with shortly (probably by the end of the week). In addition, based on votes from the Issue Tracker (which we use as a Product Backlog) I'm looking to add MbUnit support so that will probably get into this release.
The UI is pretty ugly and could use some graphical loving. I'm not going to get all stoked about the UI right now but it does need something. I've been meaning to talk to Jay Flowers as he pinged me awhile back about hooking up and seeing if there's some crossover between CI Factory and TreeSurgeon. I'm not so sure as I still have yet to get my head wrapped around CI Factory (nothing against Jay but if I can't grok an install in 5 minutes I usually move on until I can come back and kill some time figuring out how it works). To me a CI Factory and TreeSurgeon is like Marvel vs. DC (or Mac vs. PC) as I'm not sure I see the synergies but we'll see where that goes.
We're also looking at doing some kind of plugin pattern for creating different type of tree structures. The original structure Mike Roberts came up with is great, but as with anything evolution happens and we move on. I personally use a modified structure that's based on Mikes, but accomodates different features. JP Boodhoo has a blog entry here on his structure, again, slightly different. In addition to the updates on the tree structure, we're looking at adding better support in the generated build file (and options for it in the ugly UI) for choosing what unit testing framework and how to compile using it (straight, through NCover, or NCoverExplorer). Again, some stuff is solid; others are up in the air but feel free to hook up on the forums here with your thoughts and ideas as we're always open to drive this the way you, the community, want it to go.
Like I said, you can grab the 2005 support right now from here and the 2008 support will be up on the server when we do the 1.2 release later this week.
Back to the grind.
Game Studio 2.0, Visual Studio 2005, VisualSVN, and me
If you're like me and dabble in everything, then you might have Visual Studio 2005 installed along with VisualSVN (a plug-in to integrate Subversion source control access with Visual Studio). The latest version of XNA Game Studio (2.0) now allows you to build XNA projects inside of any flavor of Visual Studio. Previously you could only use C# Express Edition, which meant you couldn't use any add-ins (the license for C# Express forbids 3rd party addons) which meant I couldn't use ReSharper. Anyone who's watched my XNA demos knows this peeves me to no end as I stumble pressing Alt+Enter to try to resolve namespaces or Ctrl+N to find some class.
Oh yeah, getting back to the point. If you've installed the latest Game Studio 2.0 you can now use it with Visual Studio 2005 (not 2008 yet). And if you've got the combo I mentioned installed (VS2005, GS2.0, and VisualSVN) you might see this when you create a new XNA project:
It's a crazy bug but I tracked it down via the XNA forums. Apparently there's a conflict if you're running the combination I mentioned. Don't ask me why a source control plugin would affect a project creation template. I just use this stuff.
Anyways, you can fix it with the latest version of VisualSVN (1.3.2). I was running 1.3.1 and having the problem, you may be in the same boat. Here's the conversation on the problem in the XNA forums; here's the bug listed on the connect site; and here's the link to VisualSVN 1.3.2 to correct the problem. All is well in developer land again as we enter the world of ReSharper goodness sprinkled with a topping of XNA.
Happy gaming!
Another Kick at the Can
Look like a new development community is forming, this time around XNA and Game Development. GameDevKicks.com uses the DotNetKicks code and has launched a new site for game development content. There's only 5 articles submitted, but looks like it's off to a good start. A good set of categories to start, a bit of a redesign from the typical DotNetKicks look and feel (something like a cross between Digg and DNK) but it's just starting off.
So if you're into the game development scene feel free to help out and see if this kicks community can grow and prosper. Check out GameDevKicks here and give a whirl.
More game development news to come shortly from your friendly neighborhood SharePoint-Man.
SoakingRequired.
ASP.NET MVC now available
You've read about it on the Internet, you've seen us talking about it, and if you were at DevTeach last week you soaked up Justice's inhuman presentation (and Jeffrey's more than human one) on the tool you'll know what the buzz this week is. Now you can see what the hype is all about.
The ASP.NET MVC addition is now available here. It's part of the ASP.NET 3.5 extensions which not only includes the MVC framework, but also includes some new stuff for AJAX (like back button support), the ASP.NET Entity Framework, and there are two new ASP.NET server controls for Silverlight.
Grab it, try it out, watch the skies for demos and tutorials and all that jazz (or read ScottGu's 20 page post on the subject which is more than anyone will ever need) and start building .NET web apps the smart way! The framework is available here for download and there are some QuickStarts that will help you get up and running here.
Terrarium Anyone?
Anyone out there got a copy of the Terrarium client and server they can flip me? I'm working on something new and need to find a copy of it. It seems to all but vanished from any Microsoft site I can find. For example, the download page is here on the WindowsClient.NET site but doesn't work. It continues to be listed as a Starter Kit for Windows Form (it hasn't been updated since .NET 1.1) but I can't seem to track it down anywhere. If you have a copy let me know via email and if you can send it that would be great, or I can provide a place for you to upload it to. Thanks in advance.
ALT.NET keeps on ticking
I can't say I've seen a community with more spirit, enthusiasm, opinion, views, experience, concealed lizards, logging chains, and gumption than the ALT.NET community.
Stats for the Yahoo! Groups mailing list, which only started 2 months ago on October 7th:
- Almost 2000 posts on October (remember it was about 23 days of posting)
- Over 3200 posts in November
- Already 1300+ posts in December (and we're only 7 days!.
TRUE, True, true, and FALSE, False, false.
I love Sony, I hate Sony. | https://weblogs.asp.net/bsimser/archive/2007/12 | CC-MAIN-2018-30 | refinedweb | 2,313 | 62.68 |
Information on the system used during a build will be recorded in a new control file, a “buildinfo” file with suffix .buildinfo.
Discussion with ftpmasters is happening in 763822.
Contents
uses
The .buildinfo file has several goals which are related to each other:
- It records information about the system environment used during a particular build -- packages installed (toolchain, etc), system architecture, etc. This can be useful for forensics/debugging.
- It can also be used to try to recreate (partially or in full) the system environment when trying to reproduce a particular build.
We want a debian user (or derivative) to be able to find and fetch all .buildinfo files that were known to produce a given binary package so that they can try to reproduce the package themselves.
In the future, there may be more uses for .buildinfo files (or for collections of .buildinfo files related to a single binary artifact). For example:
- A debian user could elect to only install binary packages that have been successfully built by multiple builders
- A debian derivative could attest to the packages it has deterministically rebuilt
- A triage process could identify toolchain changes that have an effect on large numbers of binary packages.
- A cross-building process could demonstrate the correctness of the cross-compiling toolchain by reproducing the exact binary artifacts.
buildinfo example
a .buildinfo file is a UTF-8-encoded textfile, usually clearsigned with OpenPGP.
The following file would be named fweb_1.62-12+b2_brahms-20120530114812.buildinfo:
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Format: 1.9 Build-Architecture: i386 Source: fweb (1.62-12) Binary: fweb fweb-doc Architecture: all i386 Version: 1.62-12+b2 Changes: fweb (1.62-12+b2) sid; urgency=low, binary-only=yes . * Binary-only non-maintainer upload for amd64; no source changes. * Rebuild for multiarch sync . -- amd64 / i386 Build Daemon (brahms) <buildd_amd64-brahms@buildd.debian.org> Wed, 30 May 2012 09:48:12 +0200 18:37:51 +0000 Checksums-Sha256: 9921500c4c6159c0019d4b8b600d2d06eef6b1da056abd2f78e66a9f0c3843b9 879 fweb_1.62-12.dsc 3a7492c2013fbeebff08bee0514481ec0f56d2c4d138188d1ef85156d08ded00 436982 fweb-doc_1.62-12_all.deb a916dbb1c63707eaf52a5cdd10769871d2f621848176dc8f7ab4f0dcd999af85 229990 fweb_1.62-12+b2_i386.deb Build-Path: /usr/src/debian/fweb-1.62-12+b2 Build-Environment: acl (= 2.2.52-1), adduser (= 3.113+nmu3), base-files (= 7.5), base-passwd (= 3.5.33), bash (= 4.3-9), binutils (= 2.24.51.20140818-1), bsdmainutils (= 9.0.5), bsdutils (= 1:2.20.1-5.8), build-essential (= 11.7), bzip2 (= 1.0.6-7), coreutils (= 8.21-1.2), cpp (= 4:4.9.1-3), cpp-4.9 (= 4.9.1-9), dash (= 0.5.7-4), debconf (= 1.5.53), debhelper (= 9.20140817), debianutils (= 4.4), dh-buildinfo (= 0.11), diffutils (= 1:3.3-1), dmsetup (= 2:1.02.88-1), dpkg (= 1.17.13), dpkg-dev (= 1.17.13), e2fslibs (= 1.42.11-2), e2fsprogs (= 1.42.11-2), file (= 1:5.19-1), findutils (= 4.4.2-9), g++ (= 4:4.9.1-3), g++-4.9 (= 4.9.1-9), gcc (= 4:4.9.1-3), gcc-4.9 (= 4.9.1-9), gcc-4.9-base (= 4.9.1-9), gettext (= 0.19.2-1), gettext-base (= 0.19.2-1), grep (= 2.20-2), groff-base (= 1.22.2-6), gzip (= 1.6-3), hostname (= 3.15), init (= 1.21), initscripts (= 2.88dsf-53.4), insserv (= 1.14.0-5), intltool-debian (= 0.35.0+20060710.1), libacl1 (= 2.2.52-1), libasan1 (= 4.9.1-9), libasprintf0c2 (= 0.19.2-1), libatomic1 (= 4.9.1-9), libattr1 (= 1:2.4.47-1), libaudit1 (= 1:2.3.7-1), libaudit-common (= 1:2.3.7-1), libblkid1 (= 2.20.1-5.8), libbz2-1.0 (= 1.0.6-7), libc6 (= 2.19-10), libc6-dev (= 2.19-10), libcap2 (= 1:2.24-4), libcap2-bin (= 1:2.24-4), libc-bin (= 2.19-10), libc-dev-bin (= 2.19-10), libcilkrts5 (= 4.9.1-9), libcloog-isl4 (= 0.18.2-1), libcomerr2 (= 1.42.11-2), libcroco3 (= 0.6.8-3), libcryptsetup4 (= 2:1.6.6-1), libdb5.3 (= 5.3.28-6), libdbus-1-3 (= 1.8.6-2), libdebconfclient0 (= 0.191), libdevmapper1.02.1 (= 2:1.02.88-1), libdpkg-perl (= 1.17.13), libffi6 (= 3.1-2), libgcc1 (= 1:4.9.1-9), libgcc-4.9-dev (= 4.9.1-9), libgcrypt11 (= 1.5.4-2), libgcrypt20 (= 1.6.2-2), libgdbm3 (= 1.8.3-13), libglib2.0-0 (= 2.40.0-4), libgmp10 (= 2:6.0.0+dfsg-6), libgomp1 (= 4.9.1-9), libgpg-error0 (= 1.13-3), libintl-perl (= 1.23-1), libisl10 (= 0.12.2-2), libitm1 (= 4.9.1-9), libkmod2 (= 18-1), liblzma5 (= 5.1.1alpha+20120614-2), libmagic1 (= 1:5.19-1), libmount1 (= 2.20.1-5.8), libmpc3 (= 1.0.2-1), libmpfr4 (= 3.1.2-1), libncurses5 (= 5.9+20140712-2), libncurses5-dev (= 5.9+20140712-2), libncursesw5 (= 5.9+20140712-2), libpam0g (= 1.1.8-3.1), libpam-modules (= 1.1.8-3.1), libpam-modules-bin (= 1.1.8-3.1), libpam-runtime (= 1.1.8-3.1), libpcre3 (= 1:8.35-3), libpipeline1 (= 1.3.0-1), libprocps3 (= 1:3.3.9-7), libquadmath0 (= 4.9.1-9), libselinux1 (= 2.3-1), libsemanage1 (= 2.3-1), libsemanage-common (= 2.3-1), libsepol1 (= 2.3-1), libss2 (= 1.42.11-2), libstdc++-4.9-dev (= 4.9.1-9), libstdc++6 (= 4.9.1-9), libsystemd-journal0 (= 208-8), libsystemd-login0 (= 208-8), libtext-unidecode-perl (= 0.04-2), libtimedate-perl (= 2.3000-2), libtinfo5 (= 5.9+20140712-2), libtinfo-dev (= 5.9+20140712-2), libubsan0 (= 4.9.1-9), libudev1 (= 208-8), libunistring0 (= 0.9.3-5.2), libustr-1.0-1 (= 1.0.4-3), libuuid1 (= 2.20.1-5.8), libwrap0 (= 7.6.q-25), libxml2 (= 2.9.1+dfsg1-4), libxml-libxml-perl (= 2.0116+dfsg-1+b1), libxml-namespacesupport-perl (= 1.11-1), libxml-sax-base-perl (= 1.07-1), libxml-sax-perl (= 0.99+dfsg-2), linux-libc-dev (= 3.14.15-2), login (= 1:4.2-2+b1), lsb-base (= 4.1+Debian13), make (= 4.0-8), man-db (= 2.6.7.1-1), mawk (= 1.3.3-17), mount (= 2.20.1-5.8), ncurses-base (= 5.9+20140712-2), ncurses-bin (= 5.9+20140712-2), passwd (= 1:4.2-2+b1), patch (= 2.7.1-6), perl (= 5.20.0-4), perl-base (= 5.20.0-4), perl-modules (= 5.20.0-4), po-debconf (= 1.0.16+nmu3), procps (= 1:3.3.9-7), sed (= 4.2.2-4), sensible-utils (= 0.0.9), startpar (= 0.59-3), systemd (= 208-8), systemd-sysv (= 208-8), sysvinit-utils (= 2.88dsf-53.4), sysv-rc (= 2.88dsf-53.4), tar (= 1.27.1-2), texinfo (= 5.2.0.dfsg.1-4), tzdata (= 2014f-1), ucf (= 3.0030), udev (= 208-8), util-linux (= 2.20.1-5.8), xz-utils (= 5.1.1alpha+20120614-2), zlib1g (= 1:1.2.8.dfsg-2) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJWYNoZXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w Dx1xK9OgUkDt+gwh9WK/QrvV7IOjAg/pl6j7px5u6MNKHWPW0tC9M5123Q2KmaGT -----END PGP SIGNATURE-----
buildinfo specification
File name
buildinfo files should be named ${SOURCE}_${DEBIAN_VERSION}_${STRING}.buildinfo.
SOURCE is the source package name.
DEBIAN_VERSION is the full version of the Debian package without the epoch. This is the same as the Version: field in the corresponding .changes file.
STRING is an arbitrary string, unique within all ${SOURCE},${DEBIAN_VERSION} builds. The string should consist only of alphanumeric characters and hyphens. It's likely that debian buildd's would create this string as ${HOSTNAME}-${TIMESTAMP}, where TIMESTAMP is an ISO-8601-formatted numeric string. Debian developers may choose arbitrary strings here. The only requirement from the archive's perspective is that the STRING be unique for source package version, so that multiple .buildinfo files may exist for a given version of a package.
buildinfo field descriptions
- Format
- Build-Architecture
The Debian machine architecture that was used to perform the build. 1
- Source
Same as in `*.changes`. If the source and binary versions differ (e.g. binNMUs), the source version is added between parenthesis.
- Binary
- Architecture
Same as in `*.changes` except source should not be specified: only concrete architectures, no wildcards or any.
- Version
- Changes
Close to the one in `*.changes`. When source and binary versions differ, the field is added with the content of the extra changelog entries.
- Checksums-Sha256
Same format as other control files. Must list the *.dsc file and all files listed in `debian/files`.
- Build-Path
Absolute path of the directory in which the package has been built. See ReproducibleBuilds/History#Giving_up_on_build_paths for a rationale.
- Build-Environment
List of all packages forming the build environment, their architecture if different from build architecture, and their version. This includes Essential packages, build-essential, and Build-Depends and Build-Depends-Indep. For each packages, their dependencies should be recursively listed. The format is the same as Built-Using.
buildinfo signatures
.buildinfo files describe the state of a particular build. The operator having made the build may sign the file by wrapping the entire file in a cleartext OpenPGP signature.
Inclusion of *.buildinfo in the archive
We want an interested user (an individual, organization, or derivative) to be able to find the relevant .buildinfo files from the archive they already have configured. There are several ways this could be done, but we imagine that a relatively easy way is to ship all the .buildinfo files for a given architecture for a suite in a Buildinfos.tgz archive file placed next to Packages.gz. See 763822 for more details.
Previous ideas
.changes files looked like a good place to record the environment as they list the checksums of the build products and are signed by either the maintainer or the buildd operator.
But the meaning of *.changes files is pretty clear: they describe a transactional change operation on the archive. They are not saved directly in the archive: they are equivalent of a log entry. The name of *.changes file is also not specified and multiple operations can have the same name.
(See also 719854 for the first attempt which tried using XC- field in debian/control.)
We thought that .buildinfo files would be all the data required to do a rebuild, as opposed to a description of the state of the build system. However, it's likely that we don't know the actual requirements, and it's likely that the description will be more detailed than is necessary in some cases. This means that two different buildinfo files could attest to the same exact binary artifact.
We thought at some point that the .buildinfo could be referenced in the Packages index, but this does not seem necessary, and might be overkill, since only some users may actually have varied systems.
We do not need to specify the ordering of fields?
things we are not currently including
- order in which the system packages (essential, build-essential, build-deps) were installed.
- the cryptographic digest of the system packages themselves
- the digest of the source code of the system packages
This is, the build architecture in GNU terminology. See dpkg-architecture(1) for a definition. The target architecture for cross compilers is usually encoded in the package name. The host architecture is the binary package architecture (in the Architecture field and file name). Thus, target and host architecture do not need to be encoded here. (1) | https://wiki.debian.org/ReproducibleBuilds/BuildinfoFiles?action=diff&rev1=17&rev2=18 | CC-MAIN-2022-33 | refinedweb | 1,904 | 56.93 |
This guide is intended for publishers who want to use the Google Mobile Ads SDK to load and display ads from MoPub via mediation. It covers how to add MoPub to an ad unit's mediation configuration, and how to integrate the MoPub SDK and adapter into a Unity app.
Supported ad formats and features
The AdMob mediation adapter for MoPub has the following capabilities:
Requirements
- Unity 4 or higher
- To deploy on Android
- Android SDK 4.1 (API level 16) or higher
- Google Play services 17.2.0 or higher
- To deploy on iOS
- Xcode 9.2 or higher
- iOS Deployment target of 8.0 or higher
- Google Mobile Ads SDK 7.42.2 or higher
- A working Unity project configured with Google Mobile Ads SDK. See Get Started for details.
Step 1: Set up MoPub
First, sign up (if you haven't already) and log in to your MoPub UI. Navigate to the Apps page and click the Add a New App button.
Select the Platform for which you would want to set up Unity Ads. If your app supports both platforms (Android and iOS), you need to add your apps separately for each platform.
Enter the Name of your app, Package name of your app, and select a primary and secondary category of your app from the provided list.
MoPub requires you to create your first Ad Unit before completing the registration of<<
Click Save and View Code Integration to get your Ad Unit ID.
We will use this Ad Unit ID to set up your AdMob Ad Unit ID for mediation in the next section.
MoPub Marketplace
To get ads from MoPub, your account needs to be approved for MoPub Marketplace. During your initial account setup, you will be prompted to go through the process for Marketplace approval. Part of this process includes entering your payment information.
See Marketplace setup for more details.
Step 2: Configure AdMob Ad Unit
You need to add MoPub to the mediation configuration for your Ad Unit. First sign into the AdMob UI.
If you're deploying your Unity app to both Android and iOS, you need two AdMob ad units, one for each platform._6<<
Enter your ad format and platform, then click Continue.
Android.
iOS.
_15<<
Enter the Ad Unit ID obtained in the previous section and click Done.
Android
iOS
Finally, click Save.
Using rewarded video ads
In the settings for your rewarded video ad unit, check the Apply to all networks in Mediation groups box so that you provide the same reward to the user no matter which ad network is served.
Download the latest version of the
Google Mobile Ads mediation plugin for MoPub
and extract the
GoogleMobileAdsMoPubMediation.unitypackage from the zip
file.
In your Unity project editor, select Assets > Import Package > Custom Package
and find the
GoogleMobileAdsMoPubMediation.unitypackage
file you downloaded. Make sure that all the files are selected and click
Import.
Step 4: Additional code required
Initialize the MoPub SDK
Before loading ads, have your apps initialize the MoPub SDK. The
Google Mobile Ads mediation plugin for MoPub
version 2.3.1 includes the
MoPub.Initialize() method to initialize the MoPub
SDK with any valid MoPub ad unit ID that you created in
Step 1. This needs to be done only once,
ideally at app launch.
... using GoogleMobileAds.Api; using GoogleMobileAds.Api.Mediation.MoPub; public class GoogleMobileAdsDemoScript : MonoBehaviour { public void Start() { // Initialize the Google Mobile Ads SDK. MobileAds.Initialize(appId); // Initialize the MoPub SDK. MoPub.Initialize("YOUR_MOPUB_AD_UNIT_ID"); ... } }
Android
No additional code required.
iOS_20<<_21<<.
Android: | https://developers.google.com/admob/unity/mediation/mopub | CC-MAIN-2019-26 | refinedweb | 591 | 57.47 |
Agile testing of Launchpoint scripts using a Testsection
Did you ever had the problem that it is sometimes really time consuming when you have to test a more complex script which is called by a script launchpoint? You mostly have to click through the ICD/Maximo GUI to simulate some behaviour so your script gets launched in context of a Mbo.
In this post I will show you a way how you can improve your script with a Testsection and after that you can run it from the Automation Script Application.
What is the base issue we have? Why is it so difficult to just run a script directly from the application editor? In most cases the answer is, that our script requires the context of a Mbo Object which is provided by the implicit variable mbo. So one of our first lines will be normally look like this:
itemMbo = mbo @UndefinedVariable
From this point on we can take the itemMbo to get/set values, to navigate to different MboSet’s and so on.
The easy solution to our issue is, that we need to simulate the mbo variable. If we run the script from the Automation Script Application no implicit variables are defined at all and so no mbo variable is available. We can test on this situation and then initialize the mbo variable our self by getting a specifc Mbo via the MxServer context. The following script will show this in context of the Item Mbo:
from psdi.server import MXServer mxServer = MXServer.getMXServer() userInfo = mxServer.getSystemUserInfo() # Section to test script from Scripteditor try: mbo # @UndefinedVariable except NameError: mbo = None if mbo is None: scriptHomeSet = mxServer.getMboSet("ITEM", userInfo) scriptHomeSet.setWhere("ITEMNUM='ITAMT61'") scriptHomeSet.reset() itemMbo = scriptHomeSet.getMbo(0) else: # This is the normal Entrypoint via a Launchpoint since # implicit variable mbo is defined. itemMbo = mbo # @UndefinedVariable itemNum = itemMbo.getString("ITEMNUM") itemDesc = itemMbo.getString("DESCRIPTION")
You just have to change the selection criteria in line 14 to select the correct item record. After that run the script from the Automation Script Application (see here for details).
You can integrate this technique in basically any script initiated by a launchpoint. Only remember, that no implicit variables can be used in the script!
I am able to catch the exception and write the message to SystemOut.log with a print statement in the ‘except’ code block. I can see them when the script and logger are set to DEBUG. Is there a way to write to the SystemOut.log when the script & logger are at the default ERROR setting? | https://www.maximoscripting.com/agile-testing-of-launchpoints-scripts-using-a-testsection/ | CC-MAIN-2020-45 | refinedweb | 429 | 65.12 |
A tool to analyse and convert data coming from the face analysing software Openface (Cambridge).
Project description
exploface
Author: B.L. de Vries (Netherlands eScience Center)
Introducion
Exploface is a simple python package to work with the output of openface. Openface is a software to analyse faces in images and videos. Please see the website of the authors for more information on openface:
This package works with the output files of openface (csv files). It does some basic inspection and statistics with the data. It also allows to convert the data to a format readable by Elan (a video annotation tool,). Further more it is able to convert the per camera frame format of openface to a format that lists start and end times of a detection.
Installation (from the command-line)
Please follow the general guidelines for installing python packages and use a virtual environment:
- Instructions for installing python packages with pip:
- You can also consider using conda to manage your environments:
When you are in your command-line console and optionally, your virtual environment, install exploface by typing:
pip install exploface
- Test the installation by starting the python shell (type
pythonin your command-line). And then test by running
import exploface
If this works, you are ready to do the tutorial.
Tutorials
In the directory TUTORIALS you find tutorials on how to use exploface.
- Tutorial 1: exploring openface csv files and using the exploface package
- Tutorial 2: underconstruction
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/exploface/ | CC-MAIN-2020-16 | refinedweb | 267 | 50.97 |
Created attachment 329474 [details] testcase (Re-reporting bug 444576, which got a little messy.) The innerHTML in an XHTML document setter accepts "pure XML" entities such as & but rejects HTML-specific entities such as &nsbp;. I think it should accept both. (It would be even better if it went beyond that and took the DTD into consideration; see bug 325581 for an example with MathML entities.) Quick survey of browser behaviors on Tiger: * Firefox trunk: NS_ERROR_INVALID_POINTER * Firefox 2: NS_ERROR_DOM_SYNTAX_ERR * Safari: NO_MODIFICATION_ALLOWED_ERR (clearly a lie, since it allows &) * Opera: Works.
What does HTML5 say here?
That Firefox is correct. basically says to create a new XML parser. The only bits you feed to it are the element start tag including namespace prefixes and the default namespace in scope. No entity declarations and such. | https://bugzilla.mozilla.org/show_bug.cgi?id=445166 | CC-MAIN-2017-17 | refinedweb | 132 | 58.28 |
$ cnpm install @donmhico/wds-create-block
Create Block is an officially supported way to create blocks for registering a block for a WordPress plugin. It offers a modern build setup with no configuration. It generates PHP, JS, CSS code, and everything else you need to start the project.
It is largely inspired by create-react-app. Major kudos to @gaearon, the whole Facebook team, and the React community.
Blocks are the fundamental element of the WordPress block editor. They are the primary way in which plugins and themes can register their own functionality and extend the capabilities of the editor.
Visit the Gutenberg handbook to learn more about Block API.
You just need to provide the
slug which is the target location for scaffolded files and the internal block name.
$ npm init @wordpress/block todo-list $ cd todo-list $ npm start
(requires
node version
10.0.0 or above, and
npm version
6.9.0 or above)
You don’t need to install or configure tools like webpack, Babel or ESLint yourself. They are preconfigured and hidden so that you can focus on the code.
The following command generates PHP, JS and CSS code for registering a block.
$ npm init @wordpress.
Options:
-t, --template <name> template type name, allowed values: "es5", "esnext" (default: "esnext") -V, --version output the version number -h, --help output usage information
Please note that
--version and
--help options don't work with
npm init. You have to use
npx instead, as presented in the examples.
More examples:
$ npm init @wordpress/block
npm start) which enables ESNext and JSX support.
$ npm init @wordpress/block --template es5
npxto output usage information.
$ npx @wordpress/create-block --help
When you scaffold a block, you must provide at least a
slug name, the
namespace which usually corresponds to either the
theme or
plugin name, and the
category. In most cases, we recommended pairing blocks with plugins rather than themes, because only using plugin ensures that all blocks still work when your theme changes.
Inside that bootstrapped directory (it doesn't apply to
es5 template), you can run several commands:
$ npm start
Starts the build for development. Learn more.
$ npm run build
Builds the code for production. Learn more.
$ npm run format:js
Formats JavaScript files. Learn more.
$ npm run lint:css
Lints CSS files. Learn more.
$ npm run lint:js
Lints JavaScript files. Learn more.
$ npm run packages-update
Updates WordPress packages to the latest version. Learn more.
Another way of making a developer’s life easier is to use WP-CLI, which provides a command-line interface for many actions you might perform on the WordPress instance. One of the commands
wp scaffold block was used as the baseline for this tool and ES5 template in particular. | https://developer.aliyun.com/mirror/npm/package/@donmhico/wds-create-block | CC-MAIN-2020-24 | refinedweb | 457 | 64.81 |
In general, a line is a geometrical structure which joins two points on an XY plane.
In JavaFX, a line is represented by a class named Line. This class belongs to the package javafx.scene.shape.
By instantiating this class, you can create a line node in JavaFX.
This class has 4 properties of the double datatype namely −
startX − The x coordinate of the start point of the line.
startY − The y coordinate of the start point of the line.
endX − The x coordinate of the end point of the line.
endY − The y coordinate of the end point of the line.
To draw a line, you need to pass values to these properties, either by passing them to the constructor of this class, in the same order, at the time of instantiation, as follows −
Line line = new Line(startX, startY, endX, endY);
Or, by using their respective setter methods as follows −
setStartX(value); setStartY(value); setEndX(value); setEndY(value);
Follow the steps given below to Draw a Line in JavaFX.
Create a Java class and inherit the Application class of the package javafx.application and implement the start() method of this class as follows.
public class ClassName extends Application { @Override public void start(Stage primaryStage) throws Exception { } }
You can create a line in JavaFX by instantiating the class named Line which belongs to a package javafx.scene.shape, instantiate this class as follows.
//Creating a line object Line line = new Line();
Specify the coordinates to draw the line on an XY plane by setting the properties startX, startY, endX and endY, using their respective setter methods as shown in the following code block.
line.setStartX(100.0); line.setStartY(150.0); line.setEndX(500.0); line.setEndY(150.0);
Create a group object by instantiating the class named Group, which belongs to the package javafx.scene.
Pass the Line (node) object, created in the previous step, as a parameter to the constructor of the Group class, in order to add it to the group as follows generates a straight line using JavaFX. Save this code in a file with the name DrawingLine.java.
import javafx.application.Application; import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.shape.Line; import javafx.stage.Stage; public class DrawingLine extends Application{ @Override public void start(Stage stage) { //Creating a line object Line line = new Line(); //Setting the properties to a line line.setStartX(100.0); line.setStartY(150.0); line.setEndX(500.0); line.setEndY(150.0); //Creating a Group Group root = new Group(line); //Creating a Scene Scene scene = new Scene(root, 600, 300); //Setting title to the scene stage.setTitle("Sample application"); //Adding the scene to the stage stage.setScene(scene); //Displaying the contents of a scene stage.show(); } public static void main(String args[]){ launch(args); } }
Compile and execute the saved java file from the command prompt using the following commands.
javac DrawingLine.java java DrawingLine
On executing, the above program generates a JavaFX window displaying a straight line as shown below. | https://www.tutorialspoint.com/javafx/2dshapes_line.htm | CC-MAIN-2020-05 | refinedweb | 504 | 54.73 |
import "crawshaw.io/sqlite"
Package sqlite provides a Go interface to SQLite 3.
The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with.
An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database.
This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache:
The implementation automatically handles shared cache locking, see the documentation on Stmt.Step for details.
The optional SQLite3 compiled in are: FTS5, RTree, JSON1, Session
This is not a database/sql driver.
Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write:
stmt, err := conn.Prepare("SELECT ...")
After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement.
The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.)
To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid:
"INSERT INTO blobs (myblob) VALUES (?);"
Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with:
b, err := conn.OpenBlob("", "blobs", "myblob", conn.LastInsertRowID(), true)
Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method.
For example, a timeout can be associated with a connection session:
ctx := context.WithTimeout(context.Background(), 100*time.Millisecond) conn.SetInterrupt(ctx.Done())
As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime.
When using pools, the shorthand for associating a context with a connection is:
conn := dbpool.Get(ctx) if conn == nil { // ... handle error } defer dbpool.Put(c)
SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE/ ROLLBACK. The sqlitex has a Savepoint function that helps automate this.
Using a Pool to execute SQL in a concurrent HTTP handler.
var dbpool *sqlitex.Pool func main() { var err error dbpool, err = sqlitex.Open("file:memory:?mode=memory", 0, 10) if err != nil { log.Fatal(err) } http.HandleFunc("/", handle) log.Fatal(http.ListenAndServe(":8080", nil)) } func handle(w http.ResponseWriter, r *http.Request) { conn := dbpool.Get(r.Context()) if conn == nil { return } defer dbpool.Put(conn) stmt := conn.Prep("SELECT foo FROM footable WHERE id = $id;") stmt.SetText("$id", "_user_id_") for { if hasRow, err := stmt.Step(); err != nil { // ... handle error } else if !hasRow { break } foo := stmt.GetText("foo") // ... use foo } }
For helper functions that make some kinds of statements easier to write see the sqlitex package.
backup.go blob.go doc.go error.go extension.go func.go incrementor.go session.go snapshot.go sqlite.go static.go
const ( SQLITE_OK = ErrorCode(C.SQLITE_OK) // do not use in Error SQLITE_ERROR = ErrorCode(C.SQLITE_ERROR) SQLITE_INTERNAL = ErrorCode(C.SQLITE_INTERNAL) SQLITE_PERM = ErrorCode(C.SQLITE_PERM) SQLITE_ABORT = ErrorCode(C.SQLITE_ABORT) SQLITE_BUSY = ErrorCode(C.SQLITE_BUSY) SQLITE_LOCKED = ErrorCode(C.SQLITE_LOCKED) SQLITE_NOMEM = ErrorCode(C.SQLITE_NOMEM) SQLITE_READONLY = ErrorCode(C.SQLITE_READONLY) SQLITE_INTERRUPT = ErrorCode(C.SQLITE_INTERRUPT) SQLITE_IOERR = ErrorCode(C.SQLITE_IOERR) SQLITE_CORRUPT = ErrorCode(C.SQLITE_CORRUPT) SQLITE_NOTFOUND = ErrorCode(C.SQLITE_NOTFOUND) SQLITE_FULL = ErrorCode(C.SQLITE_FULL) SQLITE_CANTOPEN = ErrorCode(C.SQLITE_CANTOPEN) SQLITE_PROTOCOL = ErrorCode(C.SQLITE_PROTOCOL) SQLITE_EMPTY = ErrorCode(C.SQLITE_EMPTY) SQLITE_SCHEMA = ErrorCode(C.SQLITE_SCHEMA) SQLITE_TOOBIG = ErrorCode(C.SQLITE_TOOBIG) SQLITE_CONSTRAINT = ErrorCode(C.SQLITE_CONSTRAINT) SQLITE_MISMATCH = ErrorCode(C.SQLITE_MISMATCH) SQLITE_MISUSE = ErrorCode(C.SQLITE_MISUSE) SQLITE_NOLFS = ErrorCode(C.SQLITE_NOLFS) SQLITE_AUTH = ErrorCode(C.SQLITE_AUTH) SQLITE_FORMAT = ErrorCode(C.SQLITE_FORMAT) SQLITE_RANGE = ErrorCode(C.SQLITE_RANGE) SQLITE_NOTADB = ErrorCode(C.SQLITE_NOTADB) SQLITE_NOTICE = ErrorCode(C.SQLITE_NOTICE) SQLITE_WARNING = ErrorCode(C.SQLITE_WARNING) SQLITE_ROW = ErrorCode(C.SQLITE_ROW) // do not use in Error SQLITE_DONE = ErrorCode(C.SQLITE_DONE) // do not use in Error SQLITE_ERROR_MISSING_COLLSEQ = ErrorCode(C.SQLITE_ERROR_MISSING_COLLSEQ) SQLITE_ERROR_RETRY = ErrorCode(C.SQLITE_ERROR_RETRY) SQLITE_ERROR_SNAPSHOT = ErrorCode(C.SQLITE_ERROR_SNAPSHOT) SQLITE_IOERR_READ = ErrorCode(C.SQLITE_IOERR_READ) SQLITE_IOERR_SHORT_READ = ErrorCode(C.SQLITE_IOERR_SHORT_READ) SQLITE_IOERR_WRITE = ErrorCode(C.SQLITE_IOERR_WRITE) SQLITE_IOERR_FSYNC = ErrorCode(C.SQLITE_IOERR_FSYNC) SQLITE_IOERR_DIR_FSYNC = ErrorCode(C.SQLITE_IOERR_DIR_FSYNC) SQLITE_IOERR_TRUNCATE = ErrorCode(C.SQLITE_IOERR_TRUNCATE) SQLITE_IOERR_FSTAT = ErrorCode(C.SQLITE_IOERR_FSTAT) SQLITE_IOERR_UNLOCK = ErrorCode(C.SQLITE_IOERR_UNLOCK) SQLITE_IOERR_RDLOCK = ErrorCode(C.SQLITE_IOERR_RDLOCK) SQLITE_IOERR_DELETE = ErrorCode(C.SQLITE_IOERR_DELETE) SQLITE_IOERR_BLOCKED = ErrorCode(C.SQLITE_IOERR_BLOCKED) SQLITE_IOERR_NOMEM = ErrorCode(C.SQLITE_IOERR_NOMEM) SQLITE_IOERR_ACCESS = ErrorCode(C.SQLITE_IOERR_ACCESS) SQLITE_IOERR_CHECKRESERVEDLOCK = ErrorCode(C.SQLITE_IOERR_CHECKRESERVEDLOCK) SQLITE_IOERR_LOCK = ErrorCode(C.SQLITE_IOERR_LOCK) SQLITE_IOERR_CLOSE = ErrorCode(C.SQLITE_IOERR_CLOSE) SQLITE_IOERR_DIR_CLOSE = ErrorCode(C.SQLITE_IOERR_DIR_CLOSE) SQLITE_IOERR_SHMOPEN = ErrorCode(C.SQLITE_IOERR_SHMOPEN) SQLITE_IOERR_SHMSIZE = ErrorCode(C.SQLITE_IOERR_SHMSIZE) SQLITE_IOERR_SHMLOCK = ErrorCode(C.SQLITE_IOERR_SHMLOCK) SQLITE_IOERR_SHMMAP = ErrorCode(C.SQLITE_IOERR_SHMMAP) SQLITE_IOERR_SEEK = ErrorCode(C.SQLITE_IOERR_SEEK) SQLITE_IOERR_DELETE_NOENT = ErrorCode(C.SQLITE_IOERR_DELETE_NOENT) SQLITE_IOERR_MMAP = ErrorCode(C.SQLITE_IOERR_MMAP) SQLITE_IOERR_GETTEMPPATH = ErrorCode(C.SQLITE_IOERR_GETTEMPPATH) SQLITE_IOERR_CONVPATH = ErrorCode(C.SQLITE_IOERR_CONVPATH) SQLITE_IOERR_VNODE = ErrorCode(C.SQLITE_IOERR_VNODE) SQLITE_IOERR_AUTH = ErrorCode(C.SQLITE_IOERR_AUTH) SQLITE_IOERR_BEGIN_ATOMIC = ErrorCode(C.SQLITE_IOERR_BEGIN_ATOMIC) SQLITE_IOERR_COMMIT_ATOMIC = ErrorCode(C.SQLITE_IOERR_COMMIT_ATOMIC) SQLITE_IOERR_ROLLBACK_ATOMIC = ErrorCode(C.SQLITE_IOERR_ROLLBACK_ATOMIC) SQLITE_LOCKED_SHAREDCACHE = ErrorCode(C.SQLITE_LOCKED_SHAREDCACHE) SQLITE_BUSY_RECOVERY = ErrorCode(C.SQLITE_BUSY_RECOVERY) SQLITE_BUSY_SNAPSHOT = ErrorCode(C.SQLITE_BUSY_SNAPSHOT) SQLITE_CANTOPEN_NOTEMPDIR = ErrorCode(C.SQLITE_CANTOPEN_NOTEMPDIR) SQLITE_CANTOPEN_ISDIR = ErrorCode(C.SQLITE_CANTOPEN_ISDIR) SQLITE_CANTOPEN_FULLPATH = ErrorCode(C.SQLITE_CANTOPEN_FULLPATH) SQLITE_CANTOPEN_CONVPATH = ErrorCode(C.SQLITE_CANTOPEN_CONVPATH) SQLITE_CORRUPT_VTAB = ErrorCode(C.SQLITE_CORRUPT_VTAB) SQLITE_READONLY_RECOVERY = ErrorCode(C.SQLITE_READONLY_RECOVERY) SQLITE_READONLY_CANTLOCK = ErrorCode(C.SQLITE_READONLY_CANTLOCK) SQLITE_READONLY_ROLLBACK = ErrorCode(C.SQLITE_READONLY_ROLLBACK) SQLITE_READONLY_DBMOVED = ErrorCode(C.SQLITE_READONLY_DBMOVED) SQLITE_READONLY_CANTINIT = ErrorCode(C.SQLITE_READONLY_CANTINIT) SQLITE_READONLY_DIRECTORY = ErrorCode(C.SQLITE_READONLY_DIRECTORY) SQLITE_ABORT_ROLLBACK = ErrorCode(C.SQLITE_ABORT_ROLLBACK) SQLITE_CONSTRAINT_CHECK = ErrorCode(C.SQLITE_CONSTRAINT_CHECK) SQLITE_CONSTRAINT_COMMITHOOK = ErrorCode(C.SQLITE_CONSTRAINT_COMMITHOOK) SQLITE_CONSTRAINT_FOREIGNKEY = ErrorCode(C.SQLITE_CONSTRAINT_FOREIGNKEY) SQLITE_CONSTRAINT_FUNCTION = ErrorCode(C.SQLITE_CONSTRAINT_FUNCTION) SQLITE_CONSTRAINT_NOTNULL = ErrorCode(C.SQLITE_CONSTRAINT_NOTNULL) SQLITE_CONSTRAINT_PRIMARYKEY = ErrorCode(C.SQLITE_CONSTRAINT_PRIMARYKEY) SQLITE_CONSTRAINT_TRIGGER = ErrorCode(C.SQLITE_CONSTRAINT_TRIGGER) SQLITE_CONSTRAINT_UNIQUE = ErrorCode(C.SQLITE_CONSTRAINT_UNIQUE) SQLITE_CONSTRAINT_VTAB = ErrorCode(C.SQLITE_CONSTRAINT_VTAB) SQLITE_CONSTRAINT_ROWID = ErrorCode(C.SQLITE_CONSTRAINT_ROWID) SQLITE_NOTICE_RECOVER_WAL = ErrorCode(C.SQLITE_NOTICE_RECOVER_WAL) SQLITE_NOTICE_RECOVER_ROLLBACK = ErrorCode(C.SQLITE_NOTICE_RECOVER_ROLLBACK) SQLITE_WARNING_AUTOINDEX = ErrorCode(C.SQLITE_WARNING_AUTOINDEX) SQLITE_AUTH_USER = ErrorCode(C.SQLITE_AUTH_USER) )
const ( SQLITE_INSERT = OpType(C.SQLITE_INSERT) SQLITE_DELETE = OpType(C.SQLITE_DELETE) SQLITE_UPDATE = OpType(C.SQLITE_UPDATE) )
const ( SQLITE_CHANGESET_DATA = ConflictType(C.SQLITE_CHANGESET_DATA) SQLITE_CHANGESET_NOTFOUND = ConflictType(C.SQLITE_CHANGESET_NOTFOUND) SQLITE_CHANGESET_CONFLICT = ConflictType(C.SQLITE_CHANGESET_CONFLICT) SQLITE_CHANGESET_CONSTRAINT = ConflictType(C.SQLITE_CHANGESET_CONSTRAINT) SQLITE_CHANGESET_FOREIGN_KEY = ConflictType(C.SQLITE_CHANGESET_FOREIGN_KEY) )
const ( SQLITE_CHANGESET_OMIT = ConflictAction(C.SQLITE_CHANGESET_OMIT) SQLITE_CHANGESET_ABORT = ConflictAction(C.SQLITE_CHANGESET_ABORT) SQLITE_CHANGESET_REPLACE = ConflictAction(C.SQLITE_CHANGESET_REPLACE) )
const ( SQLITE_OPEN_READONLY = OpenFlags(C.SQLITE_OPEN_READONLY) SQLITE_OPEN_READWRITE = OpenFlags(C.SQLITE_OPEN_READWRITE) SQLITE_OPEN_CREATE = OpenFlags(C.SQLITE_OPEN_CREATE) SQLITE_OPEN_URI = OpenFlags(C.SQLITE_OPEN_URI) SQLITE_OPEN_MEMORY = OpenFlags(C.SQLITE_OPEN_MEMORY) SQLITE_OPEN_MAIN_DB = OpenFlags(C.SQLITE_OPEN_MAIN_DB) SQLITE_OPEN_TEMP_DB = OpenFlags(C.SQLITE_OPEN_TEMP_DB) SQLITE_OPEN_TRANSIENT_DB = OpenFlags(C.SQLITE_OPEN_TRANSIENT_DB) SQLITE_OPEN_MAIN_JOURNAL = OpenFlags(C.SQLITE_OPEN_MAIN_JOURNAL) SQLITE_OPEN_TEMP_JOURNAL = OpenFlags(C.SQLITE_OPEN_TEMP_JOURNAL) SQLITE_OPEN_SUBJOURNAL = OpenFlags(C.SQLITE_OPEN_SUBJOURNAL) SQLITE_OPEN_MASTER_JOURNAL = OpenFlags(C.SQLITE_OPEN_MASTER_JOURNAL) SQLITE_OPEN_NOMUTEX = OpenFlags(C.SQLITE_OPEN_NOMUTEX) SQLITE_OPEN_FULLMUTEX = OpenFlags(C.SQLITE_OPEN_FULLMUTEX) SQLITE_OPEN_SHAREDCACHE = OpenFlags(C.SQLITE_OPEN_SHAREDCACHE) SQLITE_OPEN_PRIVATECACHE = OpenFlags(C.SQLITE_OPEN_PRIVATECACHE) SQLITE_OPEN_WAL = OpenFlags(C.SQLITE_OPEN_WAL) )
const ( SQLITE_DBCONFIG_DQS_DML = C.int(C.SQLITE_DBCONFIG_DQS_DML) SQLITE_DBCONFIG_DQS_DDL = C.int(C.SQLITE_DBCONFIG_DQS_DDL) )
const ( SQLITE_INTEGER = ColumnType(C.SQLITE_INTEGER) SQLITE_FLOAT = ColumnType(C.SQLITE_FLOAT) SQLITE_TEXT = ColumnType(C.SQLITE3_TEXT) SQLITE_BLOB = ColumnType(C.SQLITE_BLOB) SQLITE_NULL = ColumnType(C.SQLITE_NULL) )
BindIndexStart is the index of the first parameter when using the Stmt.Bind* functions.
ColumnIndexStart is the index of the first column when using the Stmt.Column* functions.
Logger is written to by SQLite. The Logger must be set before any connection is opened. The msg slice is only valid for the duration of the call.
It is very noisy.
ChangesetConcat concatenates two changesets.
ChangesetInvert inverts a changeset.
A Backup copies data between two databases.
It is used to backup file based or in-memory databases.
Equivalent to the sqlite3_backup* C object.
Finish is called to clean up the resources allocated by BackupInit.
PageCount returns the total number of pages in the source database at the conclusion of the most recent b.Step().
Remaining returns the number of pages still to be backed up at the conclusion of the most recent b.Step().
Step is called one or more times to transfer nPage pages at a time between databases.
Use -1 to transfer the entire database at once.
type Blob struct { io.ReadWriteSeeker io.ReaderAt io.WriterAt io.Closer // contains filtered or unexported fields }
Blob provides streaming access to SQLite blobs.
Size returns the total size of a blob.
func NewChangegroup() (*Changegroup, error)
func (cg Changegroup) Add(r io.Reader) error
func (cg Changegroup) Delete()
Delete deletes a Changegroup.
ChangesetIter is an iterator over a changeset.
An iterator is used much like a Stmt over result rows. It is also used in the conflictFn provided to ChangesetApply. To process the changes in a changeset:
iter, err := ChangesetIterStart(r) if err != nil { // ... handle err } for { hasRow, err := iter.Next() if err != nil { // ... handle err } if !hasRow { break } // Use the Op, New, Old method to inspect the change. } if err := iter.Finalize(); err != nil { // ... handle err }
func ChangesetIterStart(r io.Reader) (ChangesetIter, error)
ChangesetIterStart creates an iterator over a changeset.
func (iter ChangesetIter) Conflict(col int) (v Value, err error)
Conflict obtains conflicting row values from an iterator. Only use this in an iterator passed to a ChangesetApply conflictFn.
func (iter ChangesetIter) FKConflicts() (int, error)
FKConflicts reports the number of foreign key constraint violations.
func (iter ChangesetIter) Finalize() error
Finalize deletes a changeset iterator. Do not use in iterators passed to a ChangesetApply conflictFn.
func (iter ChangesetIter) New(col int) (v Value, err error)
New obtains new row values from an iterator.
func (iter ChangesetIter) Next() (rowReturned bool, err error)
Next moves a changeset iterator forward. Do not use in iterators passed to a ChangesetApply conflictFn.
func (iter ChangesetIter) Old(col int) (v Value, err error)
Old obtains old row values from an iterator.
func (iter ChangesetIter) Op() (table string, numCols int, opType OpType, indirect bool, err error)
Op reports details about the current operation in the iterator.
func (iter ChangesetIter) PK() ([]bool, error)
PK reports the columns that make up the primary key.
ColumnType are codes for each of the SQLite fundamental datatypes:
64-bit signed integer 64-bit IEEE floating point number string BLOB NULL
func (t ColumnType) String() string
func (code ConflictAction) String() string
func (code ConflictType) String() string
Conn is an open connection to an SQLite3 database.
A Conn can only be used by goroutine at a time.
OpenConn opens a single SQLite database connection. A flags value of 0 defaults to:
SQLITE_OPEN_READWRITE SQLITE_OPEN_CREATE SQLITE_OPEN_WAL SQLITE_OPEN_URI SQLITE_OPEN_NOMUTEX
BackupInit initializes a new Backup object to copy from src to dst.
If srcDB or dstDB is "", then a default of "main" is used.
BackupToDB creates a complete backup of the srcDB on the src Conn to a new database Conn at dstPath. The resulting dst connection is returned. This will block until the entire backup is complete.
If srcDB is "", then a default of "main" is used.
This is very similar to the first example function implemented on the following page.
Changes reports the number of rows affected by the most recent statement.
func (conn *Conn) ChangesetApply(r io.Reader, filterFn func(tableName string) bool, conflictFn func(ConflictType, ChangesetIter) ConflictAction) error
ChangesetApply applies a changeset to the database.
If a changeset will not apply cleanly then conflictFn can be used to resolve the conflict. See the SQLite documentation for full details.
func (conn *Conn) ChangesetApplyInverse(r io.Reader, filterFn func(tableName string) bool, conflictFn func(ConflictType, ChangesetIter) ConflictAction) error
ChangesetApplyInverse applies the inverse of a changeset to the database.
If a changeset will not apply cleanly then conflictFn can be used to resolve the conflict. See the SQLite documentation for full details.
This is equivalent to inverting a changeset using ChangesetInvert before applying it. It is an error to use a patchset.
CheckReset reports whether any statement on this connection is in the process of returning results.
Close closes the database connection using sqlite3_close and finalizes persistent prepared statements.
func (conn *Conn) CreateFunction(name string, deterministic bool, numArgs int, xFunc, xStep func(Context, ...Value), xFinal func(Context)) error
CreateFunction registers a Go function with SQLite for use in SQL queries.
To define a scalar function, provide a value for xFunc and set xStep/xFinal to nil.
To define an aggregation set xFunc to nil and provide values for xStep and xFinal.
State can be stored across function calls by using the Context UserData/SetUserData methods.
CreateSession creates a new session object. If db is "", then a default of "main" is used.
EnableDoubleQuotedStringLiterals allows fine grained control over whether double quoted string literals are accepted in Data Manipulation Language or Data Definition Language queries.
By default DQS is disabled since double quotes should indicate an identifier.
EnableLoadExtension allows extensions to be loaded via LoadExtension(). The SQL interface is left disabled as recommended.
GetSnapshot attempts to make a new Snapshot that records the current state of the given schema in conn. If successful, a *Snapshot and a func() is returned, and the conn will have an open READ transaction which will continue to reflect the state of the Snapshot until the returned func() is called. No WRITE transaction may occur on conn until the returned func() is called.
The returned *Snapshot is threadsafe for creating additional read transactions that reflect its state with Conn.StartSnapshotRead.
In theory, so long as at least one read transaction is open on the Snapshot, then the WAL file will not be checkpointed past that point, and the Snapshot will continue to be available for creating additional read transactions. However, if no read transaction is open on the Snapshot, then it is possible for the WAL to be checkpointed past the point of the Snapshot. If this occurs then there is no way to start a read on the Snapshot. In order to ensure that a Snapshot remains readable, always maintain at least one open read transaction on the Snapshot.
In practice, this is generally reliable but sometimes the Snapshot can sometimes become unavailable for reads unless automatic checkpointing is entirely disabled from the start.
The returned *Snapshot has a finalizer that calls Free if it has not been called, so it is safe to allow a Snapshot to be garbage collected. However, if you are sure that a Snapshot will never be used again by any thread, you may call Free once to release the memory earlier. No reads will be possible on the Snapshot after Free is called on it, however any open read transactionss will not be interrupted.
See sqlitex.Pool.GetSnapshot for a helper function for automatically keeping an open read transaction on a set aside connection until a Snapshot is GC'd.
The following must be true for this function to succeed:
- The schema of conn must be a WAL mode database.
- There must not be any transaction open on schema of conn.
- At least one transaction must have been written to the current WAL file since it was created on disk (by any connection). You can run the following SQL to ensure that a WAL file has been created.
BEGIN IMMEDIATE; COMMIT;
LastInsertRowID reports the rowid of the most recently successful INSERT.
LoadExtension attempts to load a runtime-loadable extension.
OpenBlob opens a blob in a particular {database,table,column,row}.
Prep returns a persistent SQL statement.
Any error in preparation will panic.
Persistent prepared statements are cached by the query string in a Conn. If Finalize is not called, then subsequent calls to Prepare will return the same statement.
Prepare prepares a persistent SQL statement.
Persistent prepared statements are cached by the query string in a Conn. If Finalize is not called, then subsequent calls to Prepare will return the same statement.
If the query has any unprocessed trailing bytes, Prepare returns an error.
PrepareTransient prepares an SQL statement that is not cached by the Conn. Subsequent calls with the same query will create new Stmts. Finalize must be called by the caller once done with the Stmt.
The number of trailing bytes not consumed from query is returned.
To run a sequence of queries once as part of a script, the sqlitex package provides an ExecScript function built on this.
SetBusyTimeout sets a busy handler that sleeps for up to d to acquire a lock.
By default, a large busy timeout (10s) is set on the assumption that Go programs use a context object via SetInterrupt to control timeouts.
SetInterrupt assigns a channel to control connection execution lifetime.
When doneCh is closed, the connection uses sqlite3_interrupt to stop long-running queries and cancels any *Stmt.Step calls that are blocked waiting for the database write lock.
Subsequent uses of the connection will return SQLITE_INTERRUPT errors until doneCh is reset with a subsequent call to SetInterrupt.
Typically, doneCh is provided by the Done method on a context.Context. For example, a timeout can be associated with a connection session:
ctx := context.WithTimeout(context.Background(), 100*time.Millisecond) conn.SetInterrupt(ctx.Done())
Any busy statements at the time SetInterrupt is called will be reset.
SetInterrupt returns the old doneCh assigned to the connection.
StartSnapshotRead starts a new read transaction on conn such that the read transaction refers to historical Snapshot s, rather than the most recent change to the database.
There must be no open transaction on conn. Free must not have been called on s prior to or during this function call.
If err is nil, then endRead is a function that will end the read transaction and return conn to its original state. Until endRead is called, no writes may occur on conn, and all reads on conn will refer to the Snapshot.
Context is an *sqlite3_context. It is used by custom functions to return result values. An SQLite context is in no way related to a Go context.Context.
type Error struct { Code ErrorCode // SQLite extended error code (SQLITE_OK is an invalid value) Loc string // method name that generated the error Query string // original SQL query text Msg string // value of sqlite3_errmsg, set sqlite.ErrMsg = true }
Error is an error produced by SQLite.
ErrorCode is an SQLite extended error code.
The three SQLite result codes (SQLITE_OK, SQLITE_ROW, and SQLITE_DONE), are not errors so they should not be used in an Error.
ErrCode extracts the SQLite error code from err. If err is not a sqlite Error, SQLITE_ERROR is returned. If err is nil, SQLITE_OK is returned.
This function supports wrapped errors that implement
interface { Cause() error }
for errors from packages like
Incrementor is a closure around a value that returns and increments the value on each call. For example, the boolean statments in the following code snippet would all be true.
i := NewIncrementor(3) i() == 3 i() == 4 i() == 5
This is provided as syntactic sugar for dealing with bind param and column indexes. See BindIncrementor and ColumnIncrementor for small examples.
func BindIncrementor() Incrementor
BindIncrementor returns an Incrementor that starts on 1, the first index used in Stmt.Bind* functions. This is provided as syntactic sugar for binding parameter values to a Stmt. It allows for easily changing query parameters without manually fixing up the bind indexes, which can be error prone. For example,
stmt := conn.Prep(`INSERT INTO test (a, b, c) VALUES (?, ?, ?);`) i := BindIncrementor() stmt.BindInt64(i(), a) // i() == 1 if b > 0 { stmt.BindInt64(i(), b) // i() == 2 } else { // Remember to increment the index even if a param is NULL stmt.BindNull(i()) // i() == 2 } stmt.BindText(i(), c) // i() == 3
func ColumnIncrementor() Incrementor
ColumnIncrementor returns an Incrementor that starts on 0, the first index used in Stmt.Column* functions. This is provided as syntactic sugar for parsing column values from a Stmt. It allows for easily changing queried columns without manually fixing up the column indexes, which can be error prone. For example,
stmt := conn.Prep(`SELECT a, b, c FROM test;`) stmt.Step() i := ColumnIncrementor() a := stmt.ColumnInt64(i()) // i() == 1 b := stmt.ColumnInt64(i()) // i() == 2 c := stmt.ColumnText(i()) // i() == 3
func NewIncrementor(start int) Incrementor
NewIncrementor returns an Incrementor that starts on start.
OpenFlags are flags used when opening a Conn.
A Session tracks database changes made by a Conn.
It is used to build changesets.
Equivalent to the sqlite3_session* C object.
Attach attaches a table to the session object. Changes made to the table will be tracked by the session.
An empty tableName attaches all the tables in the database.
Changeset generates a changeset from a session.
Delete deletes a Session object.
Diff appends the difference between two tables (srcDB and the session DB) to the session. The two tables must have the same name and schema.
Disable disables recording of changes by a Session.
Enable enables recording of changes by a Session. New Sessions start enabled.
Patchset generates a patchset from a session.
A Snapshot records the state of a WAL mode database for some specific point in history.
Equivalent to the sqlite3_snapshot* C object.
CompareAges returns whether s1 is older, newer or the same age as s2. Age refers to writes on the database, not time since creation.
If s is older than s2, a negative number is returned. If s and s2 are the same age, zero is returned. If s is newer than s2, a positive number is returned.
The result is valid only if both of the following are true:
- The two snapshot handles are associated with the same database file.
- Both of the Snapshots were obtained since the last time the wal file was deleted.
Free destroys a Snapshot. Free is not threadsafe but may be called more than once. However, it is not necessary to call Free on a Snapshot returned by conn.GetSnapshot or pool.GetSnapshot as these set a finalizer that calls free which will be run automatically by the GC in a finalizer. However if it is guaranteed that a Snapshot will never be used again, calling Free will allow memory to be freed earlier.
A Snapshot may become unavailable for reads before Free is called if the WAL is checkpointed into the DB past the point of the Snapshot.
Stmt is an SQLite3 prepared statement.
A Stmt is attached to a particular Conn (and that Conn can only be used by a single goroutine).
When a Stmt is no longer needed it should be cleaned up by calling the Finalize method.
BindBool binds value (as an integer 0 or 1) to a numbered stmt parameter.
Parameter indices start at 1.
BindBytes binds value to a numbered stmt parameter.
In-memory copies of value are made using this interface. For large blobs, consider using the streaming Blob object.
Parameter indices start at 1.
BindFloat binds value to a numbered stmt parameter.
Parameter indices start at 1.
BindInt64 binds value to a numbered stmt parameter.
Parameter indices start at 1.
BindNull binds an SQL NULL value to a numbered stmt parameter.
Parameter indices start at 1.
BindParamCount reports the number of parameters in stmt.
BindText binds value to a numbered stmt parameter.
Parameter indices start at 1.
BindNull binds a blob of zeros of length len to a numbered stmt parameter.
Parameter indices start at 1.
ClearBindings clears all bound parameter values on a statement.
ColumnBytes reads a query result into buf. It reports the number of bytes read.
Column indices start at 0.
ColumnCount returns the number of columns in the result set returned by the prepared statement.
ColumnFloat returns a query result as a float64.
Column indices start at 0.
ColumnIndex returns the index of the column with the given name.
If there is no column with the given name ColumnIndex returns -1.
ColumnInt returns a query result value as an int.
Note: this method calls sqlite3_column_int64 and then converts the resulting 64-bits to an int.
Column indices start at 0.
ColumnInt32 returns a query result value as an int32.
Column indices start at 0.
ColumnInt64 returns a query result value as an int64.
Column indices start at 0.
ColumnLen returns the number of bytes in a query result.
Column indices start at 0.
ColumnName returns the name assigned to a particular column in the result set of a SELECT statement.
ColumnReader creates a byte reader for a query result column.
The reader directly references C-managed memory that stops being valid as soon as the statement row resets.
ColumnText returns a query result as a string.
Column indices start at 0.
func (stmt *Stmt) ColumnType(col int) ColumnType
ColumnType returns the datatype code for the initial data type of the result column. The returned value is one of:
SQLITE_INTEGER SQLITE_FLOAT SQLITE_TEXT SQLITE_BLOB SQLITE_NULL
Column indices start at 0.
DataCount returns the number of columns in the current row of the result set of prepared statement.
Finalize deletes a prepared statement.
Be sure to always call Finalize when done with a statement created using PrepareTransient.
Do not call Finalize on a prepared statement that you intend to prepare again in the future.
GetBytes reads a query result for colName into buf. It reports the number of bytes read.
GetFloat returns a query result value for colName as a float64.
GetInt64 returns a query result value for colName as an int64.
GetLen returns the number of bytes in a query result for colName.
GetReader creates a byte reader for colName.
The reader directly references C-managed memory that stops being valid as soon as the statement row resets.
GetText returns a query result value for colName as a string.
Reset resets a prepared statement so it can be executed again.
Note that any parameter values bound to the statement are retained. To clear bound values, call ClearBindings.
SetBool binds a value (as a 0 or 1) to a parameter using a column name.
SetBytes binds bytes to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetFloat binds a float64 to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetInt64 binds an int64 to a parameter using a column name.
SetNull binds a null to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetText binds text to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
SetZeroBlob binds a zero blob of length len to a parameter using a column name. An invalid parameter name will cause the call to Step to return an error.
Step moves through the statement cursor using sqlite3_step.
If a row of data is available, rowReturned is reported as true. If the statement has reached the end of the available data then rowReturned is false. Thus the status codes SQLITE_ROW and SQLITE_DONE are reported by the rowReturned bool, and all other non-OK status codes are reported as an error.
If an error value is returned, then the statement has been reset.
As the sqlite package enables shared cache mode by default and multiple writers are common in multi-threaded programs, this Step method uses sqlite3_unlock_notify to handle any SQLITE_LOCKED errors.
Without the shared cache, SQLite will block for several seconds while trying to acquire the write lock. With the shared cache, it returns SQLITE_LOCKED immediately if the write lock is held by another connection in this process. Dealing with this correctly makes for an unpleasant programming experience, so this package does it automatically by blocking Step until the write lock is relinquished.
This means Step can block for a very long time. Use SetInterrupt to control how long Step will block.
For far more details, see:
type Tracer interface { NewTask(name string) TracerTask Push(name string) Pop() }
func (v Value) Type() ColumnType
Package sqlite imports 7 packages (graph) and is imported by 50 packages. Updated 2020-03-11. Refresh now. Tools for package owners. | https://godoc.org/crawshaw.io/sqlite | CC-MAIN-2020-16 | refinedweb | 4,553 | 53.58 |
Menu Shortcuts
Menu Shortcuts
The MenuItem class also provides a feature of menu shortcuts
and speed keys. You must be familiar with the menu shortcuts such as Ctrl-P
which is used to give
Menu Bar in Java
Menu Bar in Java
... at the JMenuBar.We
define and add three dropdown menu in the menubar i.e
File... that we have used in this menu Bar
are-
File
Menu Bar in Java
Menu Bar in Java
... three dropdown menu in the menubar i.e
File
Edit... in this menu Bar
are-
File
Edit
View
menu with scrollbar
java.awt.*;
public class menu extends JFrame
{
MenuBar bar;
Menu file,setting,rate,help...=new MenuBar();
file=new Menu("File");
setting=new Menu("Setting");
rate=new Menu("Rate");
help=new Menu("Help");
setMenuBar(bar);
bar.add(file);
bar.add(rate
Swings Menu Bar - Java Beginners
Swings Menu Bar Hello,
I created a menu bar using Java Swings...
n New Record, Edit Record etc are the menu items.
Now, I want to display the appropriate fields according the menu Item selected..below it..
i.e.
If
creaing a menu - Java Beginners
creating a menu in JavaScript How we can create a 3-level menu in java script
menu drive programm in java
menu drive programm in java calculate area of circle, square,rectangele in menu driven programme in java
menu driven programme
menu driven programme calculate the area of circle, square, rectangle in menu driven programme in java
Java AWT Package Example
Java AWT Package Example
In this section you will learn about the AWT package of the Java. Many
running examples are provided that will help you master AWT package. Example
Add menu - IDE Questions
Add menu sir,i m student and learning netbean in which i want to develop web application but i cant find how to add menu item(not the case for java application where menu can be added from palette)so that when i run
menu Hi,i want write the menu that repeat several time
Creating Menu - MobileApplications
Creating Menu Hi all,
I am developing an application for nokia mobiles and other java enabled phones.
I have downloaded the NetBeans IDE... to create our own menu system?
How to navigate between the screens( eg from 1st
fn for footbar menu - Java Beginners
fn for footbar menu function get_footer_menu() {
$output = '';
$count = 0;
$pages = get_pages();
foreach ($pages as $page) {
if($count > 0) { $output .= ' | '; }
$output .= ''.$page->post_title.'';
$count
Menu Pl make a program which generates the following output.
"Select from this menu"
1. Array
2. Stack
3. Queue.
4. Exit.
If users press 1, then it will go in 1. Array.
Welcome in Array.
press 1. for Insertion
press 2
Create Menu Bar in SWT
Create Menu Bar in SWT
This section illustrates you how to create a menu bar.
In SWT, the classes Menu and MenuItem of package org.eclipse.swt.widgets is
allowed
Creating Menu using GWT
Creating Menu using GWT
This example describes the Basics for building the Menu
using GWT....
Menu Bar File = new MenuBar(true)
Creating a Menu bar named File. A Menu bar can
Menus
= new JMenu("File");// declare and create new menu
menubar.add...
Java: Menus
Types of menus
Think of a menu as a way to arrange buttons... to the menubar.
JMenu
is a vertical list of menu items
menu problem
menu problem Hi to all ,
I am inserting a menu in my jsp page.i have downloaded(free) a menu .In which 4 files are as below.
1... the index.html file according to my need
then i convert it into index.jsp ,I deployed
DropDown Menu
is the problem..
for example;
if i select page 1 in my dropdown menu, and click... menu that has the list of Page 1- Page 5.
if i select page 1 it will display only
Crate a Popup Menu in Java
Create a Popup Menu in Java
Here, you will learn how to create a Popup menu in
Java. Popup menu is the list of menu which is displayed at that
point on the frame where
HTML - menu tag example
HTML - menu tag example
Description :
The <menu> is a tag of HTML. Which is used to create a menu list in our web
page.
Code :
<!DOCTYPE...>HTML -- menu tag.</h1>
<p>Creates a menu list
console application - text-based menu - Java Beginners
console application - text-based menu Im doin a text-based menu... productmain will call.
out.println("Enter menu choice:")
ans = indata.next...);
int menu = 0;
System.out.println("Test");
System.out.println
java keyboard shortcuts - Struts
java keyboard shortcuts hi,
i hav an application already developed using struts framework.now i would like to add some keboard shortcuts to my application, so that the user need not to go for the usage of the mouse every time
Adding pop up menu on headlines.
() to generate mm_menu.jsp file that generates the popup menu.
<?php ...Adding pop up menu on Headlines
Here you will learn sticking a pop up menu... on to pop up menu, it opens the list of the menu automatically. For generating
AWT code for popUpmenu - Swing AWT
for more information.... file from that menu and the 'path of that file' should automatically loaded...AWT code for popUpmenu Respected Sir/Madam,
I am writing a program 1 textfield.when this text field get focus. then a popup menu will appear with 5
creating pop up menu
creating pop up menu how to create a pop up menu when a link in html page is clicked using jquery? and the link should be a text file
Please visit the following links:
swings - Swing AWT
item example"); JMenuBar menu = new JMenuBar(); JMenu filemenu = new JMenu...:// What is Java Swing Technologies? Hi friend,import
Dojo Menu and Menu Item
Dojo Menu and Menu Item
... the menu and how
to create it in dojo.
Menu : This is the widget models a context menu,
otherwise known as a right-click or popup menu, and it appears
how to return to main menu after adding all the info. - Java Beginners
how to return to main menu after adding all the info. import...(String[]args){
Scanner scan = new Scanner(System.in);
int menu = 0;
System.out.println("School Registration System Main Menu
Menus
to create a Menu using Menu m = new Menu("File");.
3. ...
C:\newprgrm>java MainWindow
Download this example.
... also develop an application with a Menu.
As a name indicates a Menu consists
Create a Popup Menus with Nested Menus in Java
a nested popup
menu in Java Swing. When you click the right mouse button on
the frame then you get a popup menu. Here, you will show multiple menu
items like: line...
Create a Popup Menus with Nested Menus in Java
menu driven
menu driven menu driven programm in
decimal to binary
binary to decimal
Menu Control in Flex4
:Application>
In this example you can see how we can use a Menu...Menu Control in Flex4:
The Menu contol is a pop-up control. It containes a
submenu. You can select a indivisual item from menu control. You use only
Simple Editor
Java: Example - Simple Editor
This simple text editor uses a JTextArea with Actions to
implement the menu items. Actions are a good way... buttons and menu items, text fields, etc.
They can be shared so that all
Dynamic Dropdown Menu
Dynamic Dropdown Menu
Put records from your database in a drop down menu/list box. You can apply... and automatically increases by 1 with each
record.
Example:
<HTML>
<
How to Creating Pop Up Menu in PHP?
How to Creating Pop Up Menu in PHP? Hi,
I am a beginner in the PHP Application. Can somebody give an example of how to creating pop up Menu in PHP. Please feel free to suggestion........
Thanks
keyboard shortcuts - Swing AWT
jQuery Drop Down Menu
jQuery Drop Down Menu
In this JQuery tutorial we will develop a
program to make Drop Down menu
Steps to develop the Drop Down menu .
Step 1:
Create
What is AWT in java
What is AWT in java
.../api/java/awt/package-summary.html... available with JDK. AWT stands for Abstract
Windowing Toolkit. It contains all classes
How to backup a selected file from the dropDown Menu in JSP?
How to backup a selected file from the dropDown Menu in JSP? I am trying to create a dropdown menu list for some type of files contained ina... file into the backup directory.
I need the jsp code that can generate
Java Dialogs - Swing AWT
/springlayout.html... visit the following links: Dialogs a) I wish to design a frame whose layout mimics
awt
Java AWT Applet example how to display data using JDBC in awt/applet
want to insert values in drop down menu in struts1.3
want to insert values in drop down menu in struts1.3 I am using DynaValidatorForm.please help me with inserting values in color drop down menu. I...-config file.
- definitions-config: (optional)
Specify configuration
Menu s - JSP-Servlet
Menu s How to create menubar & menus & submenus in jsp
drop down menu
drop down menu drop down menu using html
Menu Bar prob.
Menu Bar prob. I want a menu to be displayed in each page of my swing appl. how to go abt
java swing - Swing AWT
:
Thanks...java swing how to add image in JPanel in Swing? Hi Friend...(new File("C:/rose.jpg"));
} catch (IOException ex
highlight menu item html
highlight menu item html highlight menu item in html or CSS
<body id="index">
<div id="menu">
<ul>
<li class...;
</ul>
</div> <!-- menu -->
</body>
Java Eclipse Shortcuts
tree menu delete node
tree menu delete node I have crated a tree menu having various sub nodes now i want to delete parent node and also want to delete sub nodes when parent node is deleted........i am using servlet and giving nodeid to servlet
Dojo Menu and Menu Item
Dojo Menu and Menu Item
In this section, you will learn about the menu and how
to create it in dojo.
Try Online: Menu
and Menu Item
Menu : This is the widget models
Creating Menu Using Canvas Class
Creating Menu Using Canvas Class
This example shows how to create the menu and call...;). In this example we have used the
following method:
setColor()
fillRect()
getWidth
Drop down menu
Java-awt - Java Beginners
java-awt how to include picture stored on my machine to a java frame?
when i tried to include the path of the file it is showing error.
i have... information,
Thanks
jar File creation - Swing AWT
in java but we can create executable file in java through .jar file..
how can i convert my java file to jar file?
Help me...jar File creation I am creating my swing applications.
How can i
Tree and a desktoppane - Swing AWT
Tree and a desktoppane Hi ,
Iam kind of new to Java... on top, a tree (separate java class outside using JTree and the corresponding... the main program. iam able to do the same this from Menu--> New
having problem with menu ans sub menu items css ...plz help!!
having problem with menu ans sub menu items css ...plz help!! PLZ help ...this is my html menu
> <div id="content"> <div
>...
> <li><a href="#">mother</a><
Text File I/O - DVD.java - Java Beginners
and populate the File menu
JMenu mnuFile =new JMenu("File", true...(this);
//construct and populate the Edit menu
JMenu mnuEdit = new...Text File I/O - DVD.java NEED HELP PLEASE.
The application should
Java Code - Swing AWT
Java Code Write a Program using Swings to Display JFileChooser that Display the Naem of Selected File and Also opens that File
java swing in netbeans
java swing in netbeans how can create sub menu in java swing using... fileMenu = new JMenu("File");
menuBar.add(fileMenu);
JMenu newmenu...[]) {
JFrame frame = new JFrame("MenuSample Example
Java - Swing AWT
Java Hi friend,read for more information,
html menu button drop down
html menu button drop down How to create a menu button in HTML?
<select id="category">
<option value="1">One</option>
<option value="2">Two</option>
<select id
Java AWT Package Example
Building a J2ME sliding menu with text and images(part-2)
Building a J2ME sliding menu with text and images(part-2)
In the given J2ME Menus example, we... these sliding menu in J2ME.
Code to create back and forward button images
query - Swing AWT
java swing awt thread query Hi, I am just looking for a simple example of Java Swing
developed a project in J2SE.I am creating a JAR file of it but I am facing a problem in doing that. My Problem is The Manifest File I have created is not loading by the JVM. I have created the Manifest file which contains the following code
java Hello Sir/Mam,
I am doing my java mini... function & display that selected file name in textfield,but pbm is not display... for upload image in JAVA SWING.... Hi Friend,
Try the following code
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/47890 | CC-MAIN-2013-20 | refinedweb | 2,211 | 73.07 |
AWS Developer Tools Blog Project
To get started writing Lambda functions in Visual Studio, you first need to create an AWS Lambda project. You can do this by using the Visual Studio 2015 New Project wizard. Under the Visual C# templates, there is a new category called AWS Lambda. You can choose between two types of project, AWS Lambda Project and AWS Serverless Application, and you also have the option to add a test project. In this post, we’ll focus on the AWS Lambda project and save AWS Serverless Application for a separate post. To begin, choose AWS Lambda Project with Tests (.NET Core), name the project ImageRekognition, and then choose OK.
On the next page, you choose the blueprint you want to get started with. Blueprints provide starting code to help you write your Lambda functions. For this example, choose the Detect Image Labels blueprint. This blueprint provides code for listening to Amazon S3 events and uses the newly released Amazon Rekognition service to detect labels and then add them to the S3 object as tags.
When the project is complete, you will have a solution with two projects, as shown: the source project that contains your Lambda function code that will be deployed to AWS Lambda, and a test project using xUnit for testing your function locally.
You might notice when you first create your projects that Visual Studio does not find all the NuGet references. This happens because these blueprints require dependencies that must be retrieved from NuGet. When new projects are created, Visual Studio only pulls in local references and not remote references from NuGet. You can fix this easily by right-clicking your references and choosing Restore Packages.
Lambda Function Source
Now let’s open the Function.cs file and look at the code that came with the blueprint. The first bit of code is the assembly attribute that is added to the top of the file.
// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
By default, Lambda accepts only input parameters and return types of type System.IO.Stream. To use typed classes for input parameters and return types, we have to register a serializer. This assembly attribute is registering the Lambda JSON serializer, which uses Newtonsoft.Json to convert the streams to typed classes. The serializer can be set at the assembly or method level.
The class has two constructors. The first is a default constructor that is used when Lambda invokes your function. This constructor creates the S3 and Rekognition service clients, and will get the AWS credentials for these clients from the IAM role we’ll assign to the function when we deploy it. The AWS Region for the clients will be set to the region your Lambda function is running in. In this blueprint, we only want to add tags to our S3 object if the Rekognition service has a minimum level of confidence about the label. This constructor will check the environment variable MinConfidence to determine the acceptable confidence level. We can set this environment variable when we deploy the Lambda function.
public Function() { this.S3Client = new AmazonS3Client(); this.RekognitionClient = new AmazonRekognitionClient(); var environmentMinConfidence = System.Environment.GetEnvironmentVariable(MIN_CONFIDENCE_ENVIRONMENT_VARIABLE_NAME); if(!string.IsNullOrWhiteSpace(environmentMinConfidence)) { float value; if(float.TryParse(environmentMinConfidence, out value)) { this.MinConfidence = value; Console.WriteLine($"Setting minimum confidence to {this.MinConfidence}"); } else { Console.WriteLine($"Failed to parse value {environmentMinConfidence} for minimum confidence. Reverting back to default of {this.MinConfidence}"); } } else { Console.WriteLine($"Using default minimum confidence of {this.MinConfidence}"); } }
We can use the second constructor for testing. Our test project configures its own S3 and Rekognition clients and passes them in.
public Function(IAmazonS3 s3Client, IAmazonRekognition rekognitionClient, float minConfidence) { this.S3Client = s3Client; this.RekognitionClient = rekognitionClient; this.MinConfidence = minConfidence; }
FunctionHandler is the method Lambda will call after it constructs the instance. Notice that the input parameter is of type S3Event and not a Stream. We can do this because of our registered serializer. The S3Event contains all the information about the event triggered in S3. The function loops through all the S3 objects that were part of the event and tells Rekognition to detect labels. After the labels are detected, they are added as tags to the S3 object.
public async Task FunctionHandler(S3Event input, ILambdaContext context) { foreach(var record in input.Records) { if(!SupportedImageTypes.Contains(Path.GetExtension(record.S3.Object.Key))) { Console.WriteLine($"Object {record.S3.Bucket.Name}:{record.S3.Object.Key} is not a supported image type"); continue; } Console.WriteLine($"Looking for labels in image {record.S3.Bucket.Name}:{record.S3.Object.Key}"); var detectResponses = await this.RekognitionClient.DetectLabelsAsync(new DetectLabelsRequest { MinConfidence = MinConfidence, Image = new Image { S3Object = new Amazon.Rekognition.Model.S3Object { Bucket = record.S3.Bucket.Name, Name = record.S3.Object.Key } } }); var tags = new List(); foreach(var label in detectResponses.Labels) { if(tags.Count < 10) { Console.WriteLine($"\tFound Label {label.Name} with confidence {label.Confidence}"); tags.Add(new Tag { Key = label.Name, Value = label.Confidence.ToString() }); } else { Console.WriteLine($"\tSkipped label {label.Name} with confidence {label.Confidence} because maximum number of tags reached"); } } await this.S3Client.PutObjectTaggingAsync(new PutObjectTaggingRequest { BucketName = record.S3.Bucket.Name, Key = record.S3.Object.Key, Tagging = new Tagging { TagSet = tags } }); } return; }
Notice that the code contains calls to Console.WriteLine(). When the function is being run in AWS Lambda, all calls to Console.WriteLine() will redirect to Amazon CloudWatch Logs.
Default Settings File
Another file that was created with the blueprint is aws-lambda-tools-defaults.json. This file contains default values that the blueprint has set to help prepopulate some of the fields in the deployment wizard. It is also helpful in setting command line options with our integration with the new .NET Core CLI. We’ll dive deeper into the CLI integration in a later post, but to get started using it, navigate to the function’s project directory and type dotnet lambda help.
{ ":"", "region" : "", "configuration" : "Release", "framework" : "netcoreapp1.0", "function-runtime":"dotnetcore1.0", "function-memory-size" : 256, "function-timeout" : 30, "function-handler" : "ImageRekognition::ImageRekognition.Function::FunctionHandler" }
An important field to understand is the function-handler. This indicates to Lambda the method to call in our code in response to our function being invoked. The format of this field is <assembly-name>::<full-type-name>::<method-name>. Be sure to include the namespace with the type name.
Deploying the Function
To get started deploying the function, right-click the Lambda project and then choose
Publish to AWS Lambda. This starts the deployment wizard. Notice that many of the fields are already set. These values came from the aws-lambda-tools-defaults.json file described earlier. We do need to enter a function name. For this example, let’s name it ImageRekognition, and then choose Next.
On the next page, we need to select an IAM role that gives permission for our code to access S3 and Rekognition. To keep this post short, let’s select the Power User managed policy; the tools create a role for us based on this policy. Note that the Power User managed policy was added to use to create a role in version 1.11.1.0 of the toolkit.
Finally, we set the environment variable MinConfidence to 60, and then choose Publish.
This launches the deployment process, which builds and packages the Lambda project and then creates the Lambda function. Once publishing is complete, the Function view in the AWS Explorer window is displayed. From here, we can invoke a test function, view CloudWatch Logs for the function, and configure event sources.
With our function deployed, we need to configure S3 to send its events to our new function. We do this by going to the event source tab and choosing Add. Then, we choose Amazon S3 and choose the bucket we want to connect to our Lambda function. The bucket must be in the same region as the region where the Lambda function was deployed.
Testing the Function
Now that the function is deployed and an S3 bucket is configured as an event source for it, open the S3 bucket browser from the AWS Explorer for the bucket we selected and upload some images.
When the upload is complete, we can confirm that our function ran by looking at the logs from our function view. Or, we can right-click the images in the bucket browser and select Properties. In the Properties dialog box on the Tags tab, we can view the tags that were applied to our object.
Conclusion
We hope this post gives you a good understanding of how our tooling inside Visual Studio works for developing and creating Lambda functions. We’ll be adding more blueprints over time to help you get started using other AWS services with Lambda. The blueprints are hosted in our new Lambda .NET GitHub repository. If you have any suggestions for new blueprints, open an issue and let us know. | https://aws.amazon.com/blogs/developer/using-the-aws-lambda-project-in-visual-studio/ | CC-MAIN-2022-21 | refinedweb | 1,492 | 51.24 |
Tell us what you think of the site.
I wish somebody could help me out on the very basics of compiling a mudbox plugin.
I’ve been struggling with it for almost a week, non-stop, and I’ve reached a limbo-guessing state…
All reference and documentations for doing this are so minimal when it comes to preparing a compilation
environment,
and make so many assumptions of knowledge by the reader, it’s really frustrating…
What I’m trying to do, is to compile a Qt-ui-based plugin for Mudbox 2011 x64,
using the provided SDK, visual-studio 2008 (pro-tial) and Qt 4.5.3,
which I downloaded in source and compiled via nmake in vs2008 command line,
especially for this task, as there is no viable explanation anywhere on the web on how to compile a qt-plugin for mudbox, I took this suggestion from a maya post I found somewhere…
There is so much I don’t know and don’t understand in the way Qt dlls should be built via vs2008, so much implicitness, that i’m pretty lost by now…
Please help, in any way you can…
I’m new to C++, Qt and VS in general, but I know the concepts, I just lack experience.
I know all about the compile/link/build process, and the moc process that has to occur before that, but
I don’t know how to configure stuff in VS.
1. What files should the builder use, the Qt ones, the SDK ones, or a certain combination?
2. If it’s a combination, how do I tell the builder what to take from where?
3. I managed to get the examples of the SDK to compile/build, but they don’t include Qt.
4. When I try to add a Qt class via the Qt-add-in, it sais that it cannot add it for projects that where not created by the Qt-VS-add-in.
5. When I start a fresh Qt-dll-project, then I have to configure myself all the build configurations to
customarily tailor specifically for making a .mb mudbox-plugin file, and I have no idea how to do that, and could not find anything on the web.
6. What is the makefile file that is in all the examples? It is calling buildgonfig file on the example’s root folder, right? So I need to do something for that to happen?
7. How come that I manage to build the examples in x64 target-platform, but when I try to build the Qt fresh project, for x64 platform, it sais it can’t do that?
8. I managed to compile the source of the TurntableSDKexample that I found somewhere on the web… It does use Qt, and I did manage to get it compiled for x64 platform, it even loaded correctly in mudbox.
It seems to not require/use a makefile somehow…
The problem is I cannot seem to be able to modify/expand this project-file with it’s configuration without breaking it. If I try to redactor the name of the classes from turntable into something else, or change the file-names, the auto-generated moc_*.* files error out somehow saying they can’t find the class/file turntable*.* stuff…
Also, this build configuration does not use the .ui files method of building an interface, so it makes it harder to change the user-interface of this plugin.
9. I need to make my plugin use xml-rpc to interact with an external data-server. I managed to find a Qt-based project that does that, and even download the source, modify for my needs, compile and build it so it interacts with the data-server. But only within it’s own project’s configuration, using the Qt-VS-Add-In kind of project (.pro file), within it’s own environment, which calls for all the Qt-4.5.3 source, not the mudbox-sdk-qt-source…
Now I want to integrate the 2 projects, and I have 3 options to do that, as I don’t want to get into all the project-configuration stuff as it’s way too complicated…
Option 1: Using the turnTable project that compiles the plugin that loads correctly into mudbox, and
extending it to include the source of the QtXmlrpc project, by either copying the files of the QtXmlrpc stuff into the mudbox/sdk/qt-folders and re-orienting the include-directives to using the mudbox/sdk/qt-source-files, or copying the QtXmlrpc sources into the turnTable’s source files into the local folder of the turnTable project. I have tried both, and in both cases - even after dealing with all the pre-processor include directives to satisfy the pre-processor completely, the Linker complains about many “unresolved external symbol"s (LNK2001 - __cdecl stuff...).
Option 2: Using the QtXmlrpc project, and copying all the turnTable project files into it, and re configure the project configuration to satisfy both projects, applying onto the QtXmlrpc project, all the mudbox-plugin-compilation-relevant configurations from the turnTable project. I have no idea where to begin going about doing that…
Option 3: Using a dll-compiled version of the QtXmlrpc project, within the turnTable project. I don’t know how to do that either…
10. I want to change the plugin-window-type of the turnTable for a QDialog to the mudbox-sdk’s WindowPlugin interface, so it would appear in mudbox as a window that could be opened as docked-within the layers/windows panel… That means sub-classing an “interface” and registering a custom window class or whatever as I understand from the sdk’s documentation… I have no idea how to go about doing that....
Any help is appreciated…
Ok....
I see that my suspicions where right…
I’m probably one of the few people who try to write a UI-based plugin for mudbox…
Soooo, I managed to solve some issues, but not all, and I’d like to share my conclusions so others can benefit, and maybe someone could help me out understanding the remaining issues…
Here goes:
As it turns out, them problem of integrating the QtXMLRPC project, was due to the fact that it was compiles for a 32bit platform, as well as the entire Qt source I compiled, while the Mudbox 2011 64bit version of the SDK, is using Qt compiled code/libraries that where created and compiled as a 64bit target…
I had to re-compile the entire Qt source again, using the 64bit version of the visual studio command prompt, and then do the same for the QtXMLRPC project.
I then re-oriented the mudbox-plugin project (turnTable) into sourcing the complete Qt source and not the one that comes with the SDK…
That was according to a suggestion I got from Wayne, as he said there are some nasty bugs in the SDK version.
I did that as follows:
I re-downloaded the source-code of the 4.5.3 Qt, and extracted it to:
c:\qt\4.5.3-x64
I then changed the system-environment variables:
PATH: &#xQT;DIR%\bin;
QTDIR: changed from “c:\qt\4.5.3-vc” to c:\qt\4.5.3-x64
Then, I ran the shortcut in the start menu programs:
“Microsoft Visual Studio 2008 >
Visual Studio Tools >
Visual Studio 2008 x64 Win64 Command Prompt”
That brings up a command prompt that would compile via visual studio 2008 while targeting for a 64bit platform.
From “c:\qt\4.5.3-x64” I ran:
“configure -platform win32-msvc2005”
A couple of minutes later, when the configuring process was complete, I ran:
“nmake”
That was yesterday…
Today, when the compilation of Qt has finished, I did similar process to compile the QtXMLRPC project.
Then, I just copied the “qtxmlrpc.lib” file that was generated, as well as the mudbox-specific libraries from the “Mudbox2011\SDK\lib” folder ("MudboxFramework.lib" and “cg.lib") into “c:\qt\4.5.3-x64\lib”.
I also copied the entire “Mudbox2011\SDK\include\Mudbox” folder, as is, into “c:\qt\4.5.3-x64\include”
Then, in Visual Studio 2008, I opened the solution file of the turnTable project, and in the main menu I went to the Qt Add-In stuff:
“Qt > Qt Options”
And in the dialog, I added the new x64 compiled version of Qt:
In the “Qt Version” tab, I pressed the “Add” button, and in “Version name:” I wrote “4.5.3-x64”, then I pressed the “...” button next to “path” and sourced the “c:\qt\4.5.3-x64” folder, then pressed “ok”. still in the “Qt Versions” tab of the “Qt Options” dialog, down in the “Default Qt\Win Version:” drop-down list, I choose the new “4.5.3-x64” version.
Then, I changed the project properties by going in the main menu to “Project > turnTable Properties”, and in the dialog, changed the following:
In: Configuration Properties > C/C++ > General : Additional Include Directories:
I replaced “..\..\include” with “$(QTDIR)\include”
In: Configuration Properties >Linker > General : Additional Library Directories:
I replaced “..\..\lib” with “$(QTDIR)\lib”
In: Configuration Properties >Linker > Input: Additional Dependencies:
I added “qtxmlrpc.lib”
Then i pressed “ok”
Then.....
Everything is Ok, and the project is building successfully, while instantiating an object from the “xmlrpc::Client” class that comes from the “qtxmlrpc.lib” file!
Hurray!!!
(… champagne, confetti, etc....)
BUT:
If I tried to move the project folder somewhere else, it stops compiling, saying that it cant find “../../../moc.exe”....
It’s like whatever I do, the “moc_*.*” files that get’s auto-generated, still look for stuff in the relative way in which it only works if the project is in “...Mudbox\SDK\Examples\"…
I went through the entire project properties screen to try to find where It’s defined, and I found squat…
I really don’t understand how/where in visual studio this info about the moc build-process is being defined…
It’s the reason that in this project I can’t use the Qt-Designer…
If anything happens to the currently existing “moc_*.*” files, that get’s re-generated anyways, either by moving the entire project folder somewhere else, or by modifying any moc-related code (signals/slots/Q_OBJECT stuff, or trying to use the Qt-Designer, or changing the file-names from which they are supposedly being generated - the source-files of the turnTable project .cpp/.h files), then all hell brakes loos and the project becomes uncompillable…
Please, someone help me with this, it’s ridiculousness the state I’m in, locked into using a project called “turnTable” with files called “turnTable.cpp/.h” and “turnTableDialogue.cpp/.h”, that all has to still be left in the “...\Mudbox2011\SDK\examples” folder (even though it doesn’t source anything relative to that location anymore, AFAIKT...), and not being able to use the Qt-Designer, when if I start a new Qt project, everything works fine…
It’s like I just need the specific mudbox-plugin-related definitions of compilation, that are in the turnTable project, and re-use them in a new Qt project, and I’ll be set…
Also, I really want to understand what’s going on with this whole moccing process…
Any ideas?
OooooooK, I again see no replies, but increasing views of this topic…
That suggests that there are people trying to do this like me, but are as unsuccessful as me…
This means that there really is an issue in the way the documentation of how to go about doing that is highly incomplete…
Is autodesk aware of this reality?
Anyways, here’s an update:
I successfully managed to compile a Qt-Project into a working mudbox-plugin (!)
Woohoo!
I can now graphically-construct UIs for a mudbox plugin that actually loads and works in mudbox!
What I did, is simply meticulously copying-over, page-by-page and line-by-line, all of the project-properties of the turnTable project, while maintaining every seemingly relevant properties regarding the Qt-compilation process…
I did that for both the “Debug” as well as the “Release” configurations that target the “x64” platform…
I still have no idea what the great majority of these properties even mean, that was a blind copying, but it worked…
Now:
The still-standing problem I have, is my inability to construct a working “WindowPlugin”.
That is to say, I want my UI to be embedded/docked into the native UI of mudbox.
This should be accomplished by making the plugin as a window, which is invokable from either the window-menu, of the tab-contextual-popup-menu of either the main-pane, or the properties-pane (like “Layers”, “Community-help”, etc...).
Now, according to the SDK’s documentation, the way to do that is by implementing an “interface” (abstract-class) called “Mudbox::WindowPlugin”, in the same way the “Community-help” window is built, as it is a sub-class-instance of the “Browser-Plugin” which is an implementation of this “Mudbox::WindowPlugin” interface.
Now, according to the SDK’s documentation, the way to do that is NOT by using the general “MB_PLUGIN” macro decleration, which thereupon requires the use of a call to the “Kernel()->AddCallbackMenuItem” function in order to register it to mudbox’s plugin-repositary and make it available in the plugins menu, but rather the very act of implementing the “Mudbox::WindowPlugin”, would automatically make it available in the “window” menu, as well as the new-tab pop-up menus of the panes.
However, it is seemingly too vague in the documentation on how to go about doing that, and in what way should this new WindowPlugin class-implementation be declared/associated into mudbox…
I found in the SDK’s documentation the way in which you supposedly can sub-class/instanciate the WindowPlugin-derived WebBrowserPlugin, by using the “Kernel()->RegisterWindowPlugin” function.
Needless to say I was unsuccessful at doing neither…
Where do I specify the name and title of my WindowPlugin?
Do I do it in the class-deceleration?
In the Start() function implementation?
Do I make a constructor/destructor which for the class, with the constructor having the insertion of name/title as parameters, as it is in the WebBrowserPlugin implementation?
How do I associate my WindowPlugin with a name and a title, and how do I register it to the mudbox’s windows pool?
Do I really need to use the “Kernel()->RegisterWindowPlugin” function?
It expects a WindowPlugin, which I assume by that meaning a class-implementation, but I implement it using “class <className> : public Mudbox::WindowPlugin”, which means it’s a sub-class of it, and I get an error when I try to pass it into the RegisterWindowPlugin function, saying that a “type-casting exists but is inaccessible"…
Do I need to pass a class-pointer, a class, an instance of the class or an instance-pointer?
Where do I need to place that call?
This is all very ambiguous and confusing…
Finally, got it figured out (!)
:buttrock:
No thanks to any of you guys!
:hmm:
I’m talking to myself here....
Is that healthy?
:argh:
Anyways, as it turns out, you DO still have to use the MB_PLUGIN macro…
Just give it a function that contains the registration call:
“Kernel()->RegisterWindowPlugin()”.
Then, all you have to do, is re-implement every virtual-function, most of the time doing nothing in there…
I also included in the implemented-class a constructor which takes a name and title (as QString) and puts them in it’s protected variables, and an empty destructor.
Word of caution:
In the IsEnabled() implementation, I tried to query my widget-object for IT’s IsEnabled() function, but it caused Mudbox to crash on that line, saying something about illegal-access or something…
So I just returned “true” there..
Also, in the Stop() implementation, I tried to do a Close() or a “delete” on the widget-object, and in both-cases Mudbox crashed with an error upon exiting…
So I left it empty also…
Here is my code:
PipelineToolBox.h:
#if defined(JAMBUILD)
#include <Mudbox/mudbox.h>
#else
#if defined __APPLE__
#include "Mudbox/mudbox.h"
#else
#include <Mudbox/mudbox.h>
#endif
#endif
#include "PipelineToolBoxDialog.h"
PipelineToolBox.cpp:
#include "PipelineToolBox.h"
#include "PipelineToolBoxDialog.h"
All the UI stuff is being done in the “PipelineToolBoxDialog” files, which include the whole QT moc_ stuff the and .ui file, which enables the usage of Qt-Designer and Qt-Resources.
Here is how the test-UI looks in Qt-Designer:
Here is how it looks in Mudbox in the properties pane:
And, yes - It does add it automatically both to all the right-click context-popup-menus, as well as to the Windows menu, so you can also place it in the main mudbox pane:
So, I hope this helps anyone else, it sure as hell would have helped me if I had read such kind of a post a week ago…
nJoy!
Ah, and F&@# AUTODESK!
Mudbox has one of the lousiest SDK-documentation I’ve seen yet!
Just had to put that one outta the way…
Well done man, its nice to know SOMEONE is playing with the SDK. I wish I knew anything about programming.
Well done, I have no even looked at the SDK for Mud so thanks for talking to your self inthis thread. Im sure that it will inspire others to dig in.
Paul Neale
PEN Productions Inc.
penproductions.ca / paulneale.com
Master Classes for Max, Mudbox and Composite
DotNet Tutorials
MX Driver Car and Trailer Rig On Sale!
What do you have planned up your sleeve ? :)
I’m still learning programming syntax, not even close to touching a SDK, yet.
Donno how much I can say, but we’re expanding out pipeline’ed version-control system to include support for other software then just the main 3d-package. | http://area.autodesk.com/forum/autodesk-mudbox/community-help/compiling-qt-plugins-using-the-sdk/ | crawl-003 | refinedweb | 2,986 | 55.37 |
.
A summary of what's on the table:
Time Scheduled Functions/Callbacks [Done] Functions will be called at specific times (periodical calling like ... "every 60 minutes starting at 08:30" may be considered) Timezone considerations)
The best possible place to add the scheduling seems to be in the strategy's
__init__ method and as such should be a method of
Strategy
This doesn't preclude having a global scheduling which could be entered in
cerebro to
1.9.75.123:
I have just read this tread and you statement
@vladisld said in Backtrader's Future:
Since no one I guess has a full understanding of the platform yet, we probably should limit ourselves to bug fixing only, at least initially - and see how it goes.
This is probably spot on and something I have failed to realize. Since your fork doesn't yet contain any changes, I am adding a few bug fixes to the main repository and release a new version with them.
@xyshell said in cerebro.resample() introduces data in the future:
Note: 6219.15 is the close price of the 1h data at 2020-03-20 01:00:00, while 6219.15 is also the open price of the 1min data
No. There is no such thing as the closing price of the
1-hour data, because that data doesn't exist. Naming things properly does help.
Additionally the information you provide is wrong, which is confirmed by looking at your data.
Note for the other readers: for whatever the reason, this data defies all established standards and has the following format:
CHLOVTimestamp
Your data indicates that
6219.15is the closing price of the
1-minbar at
00:59:00, hence the last bar to go into the resampling for
1-hourbetween
00:00:00and
00:59:00(60 bars -if all are present-) which is first delivered to you as a resampled bar at
01:00:00
6219.15is the opening price of the
1-minbar at
01:00:00
At
01:00:00 you have two bits of information available:
1-minbar for
01:00:00
1-hourresampled data for the period
00:00:00to
00:59:00
As expected.
is needed.
Note: notice the usage of
elif
@Kevin-Fu said in Building Sentiment Indicator class: TypeError: must be real number, not LineBuffer:
def next(self): self.date = self.data.datetime date = bt.num2date(self.date[0]).date() prev_sentiment = self.sentiment if date in date_sentiment: self.sentiment = date_sentiment[date] self.lines.sentiment[0] = self.sentiment
In any case that's probably where the error happens. There is no definition of
self.sentiment and the indicator understands you are looking for the line named
sentiment. The problems
@Kevin-Fu said in Building Sentiment Indicator class: TypeError: must be real number, not LineBuffer:
Any input would be greatly appreciated
But you provide no input with regards to the error. Only
TypeError: must be real number, not LineBuffer
Python exceptions provided a stacktrace which points to the different line numbers in the stack ... allowing to trace the error to the origin point (and showing any potential intermediate conflict)
The error is only telling us that you are passing an object (a
LineBuffer) there where a
float should have been passed by you. But not where you are doing it ... which is in the stacktrace ...
@Kevin-Galkov said in DataCls does autoregister:
If I run the mementum/backtrader/samples/oandatest/oandatest.py , it complains:
In any, it does complay because you don't have the proper package installed, you have something for
v20.
You have to use this:
The built-in module is only compatible with the old
Oanda API.
And
oandatest.py works with the old built-in module.
Timers go to
notify_timer which receives the
timer,
when it is happening and any extra
*args and
**kwargs you may created the timer with. You can use any of the latter to differentiate timers.
Or you can simply use the
timer id to associate different logic to each timer. | https://community.backtrader.com/user/backtrader/ | CC-MAIN-2021-10 | refinedweb | 670 | 62.68 |
pivot_root - change the root file system
Synopsis
Description
Notes
Errors
Bugs
History
#include <linux/unistd.h>
_syscall2(int,pivot_root,const char *,new_root,const char *,put_old)
int pivot_root(const char *new_root, const char *put_old);
pivot_root moves the root file system of the current process to the directory put_old and makes new_root the new root file system of the current process.:See also pivot_root(8) for additional usage examples. (/).
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
pivot_root may return (in errno) any of the errors returned by stat(2). Additionally, it may return:
pivot_root should not have to change root and cwd of all other processes in the system.
Some of the more obscure uses of pivot_root may quickly lead to insanity.
pivot_root is Linux-specific and hence is not portable.
pivot_root was introduced in Linux 2.3.41.
chdir(2), chroot(2), initrd(4), pivot_root(8), stat(2) | http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man2/pivot_root.2 | crawl-003 | refinedweb | 155 | 50.33 |
AWS Compute Blog
Configuring private integrations with Amazon API Gateway HTTP APIs
This post was written by Michael Hume – AWS Solutions Architect Public Sector UKIR.
Customers often want to use Amazon API Gateway REST APIs to send requests to private resources. This feature is useful for building secure architectures using Amazon EC2 instances or container-based services on Amazon ECS or Amazon EKS, which reside within a VPC.
Private integration is possible for REST APIs by using Network Load Balancers (NLB). However, there may be a requirement for private integration with an Application Load Balancer (ALB) or AWS Cloud Map. This capability is built into Amazon API Gateway HTTP APIs, providing customers with three target options and greater flexibility.
You can configure HTTP APIs with a private integration as the front door or entry point to an application. This enables HTTPS resources within an Amazon VPC to be accessed by clients outside of the VPC. This architecture also provides an application with additional HTTP API features such as throttling, cross-origin resource sharing (CORS), and authorization. These features are then managed by the service instead of an application.
HTTP APIs and Application Load Balancers
In the following architecture, an HTTP APIs endpoint is deployed between the client and private backend resources.
A VPC link encapsulates connections between API Gateway and targeted VPC resources. HTTP APIs private integration methods only allow access via a VPC link to private subnets. When a VPC link is created, API Gateway creates and manages the elastic network interfaces in a user account. VPC links are shared across different routes and APIs.
Application Load Balancers can support containerized applications. This allows ECS to select an unused port when scheduling a task and then registers that task with a target group and port. For private integrations, an internal load balancer routes a request to targets using private IP addresses to resources that reside within private subnets. As the Application Load Balancer receives a request from an HTTP APIs endpoint, it looks up the listener rule to identify a protocol and port. A target group then forwards requests to an Amazon ECS cluster, with resources on underlying EC2 instances. Targets are added and removed automatically as traffic to an application changes over time. This increases the availability of an application and provides efficient use of an ECS cluster.
Configuration with an ALB
To configure a private integration with an Application Load Balancer.
- Create an HTTP APIs endpoint, choose a route and method, and attach an integration to a route using a private resource.
- Provide a target service to send the request to an ALB/NLB.
- Add both the load balancer and listener’s Amazon Resource Names (ARNs), together with a VPC link.
HTTP APIs and AWS Cloud Map
Modern applications connect to a broader range of resources. This can become complex to manage as network locations dynamically change based on automatic scaling, versioning, and service disruptions. Its challenging, as each service must quickly find the infrastructure location of the resources it needs. Efficient service discovery of any dynamically changing resources is important for application availability.
If an application scales to hundreds or even thousands of services, then a load balancer may not be appropriate. In this case, HTTP APIs private integration with AWS Cloud Map maybe a better choice. AWS Cloud Map is a resource discovery service that provides a dynamic map of the cloud. It does this by registering application resources such as databases, queues, microservices, and other resources with custom names.
For server-side service discovery, if an application uses a load balancer, it must know the load balancer’s endpoint. This endpoint is used as a proxy, which adds additional latency. As AWS Cloud Map provides client-side service discovery, you can replace the load balancer with a service registry. Now, connections are routed directly to backend resources, instead of being proxied. This involves fewer components, making deployments safer and with less management, and reducing complexity.
Configuration with AWS Cloud Map
In this architecture, the Amazon ECS service has been configured to use Amazon ECS Service Discovery. Service discovery uses the AWS Cloud Map API and Amazon Route 53 to create a namespace. This is a logical name for a group of services. It also creates a service, which is a logical group of resources or instances. In this example, it’s a group of ECS clusters. This allows the service to be discoverable via DNS. These resources work together, to provide a service.
To configure a private integration with AWS Cloud Map:
- Create an HTTP API, choose a route and method, and attach an integration to a route using a private resource. This is as shown previously for an Application Load Balancer.
- Provide a target service to send requests to resources registered with AWS Cloud Map.
- Add both the namespace, service and VPC link.
Deployment
To build the solution in this blog, see the AWS CloudFormation templates in the GitHub repository and, the instructions in the README.md file.
Conclusion
This post discusses the benefits of using API Gateway’s HTTP APIs to access private resources that reside within a VPC, and how HTTP APIs provides three different private integration targets for different use cases.
If a load balancer is required, the application operates at layer 7 (HTTP, HTTPS), requires flexible application management and registering of AWS Lambda functions as targets, then use an Application Load Balancer. However, if the application operates at layer 4 (TCP, UDP, TLS), uses non-HTTP protocols, requires extreme performance and a static IP, then use a Network Load Balancer.
As HTTP APIs private integration methods to both an ALB and NLB only allow access via a VPC link. This enhances security, as resources are isolated within private subnets with no direct access from the internet.
If a service does not need a load balancer, then HTTP APIs provide further private integration flexibility with AWS Cloud Map, which automatically registers resources in a service registry. AWS Cloud Map enables filtering by providing attributes when service discovery is enabled. These can then be used as HTTP APIs integration settings to specify query parameters and filter specific resources.
For more information, watch Happy Little APIs (S2E1): Private integrations with HTTP API. | https://aws.amazon.com/blogs/compute/configuring-private-integrations-with-amazon-api-gateway-http-apis/ | CC-MAIN-2022-21 | refinedweb | 1,040 | 54.02 |
Vuex basics
Vuex is a state management and library for vue.js applications. It serves as a centralized store for all the components in your application.
For example, We have two components and we need to share data both components then we need vuex in vue.js.
Install the Vuex using npm
npm install — save vuex
How to use vuex in our application
Create a folder inside the folder create file store.js
store/store.js
import Vue from ‘vue’; //Beacuse here we are using plugin vuex
import Vuex from ‘vuex; //import vuex
Vue.use(Vuex); //Vue.use, now we can used vuex beacuse we added the plugin in vue
export const store = new Vue.Store({
state: {
/*State is object that hold’s your application data*/
},
getters: {
/*Getter is function that return back data contained in the state*/
},
mutations: {
/*Mutation is function that directly mutate the state as the state is immutable object*/
},
actions: {
/*Action is function that call mutations on the state. They can call multiple mutations can call other actions, and they support asynchrous operations.In actions we can fetch data from the api here.*/
}
});
Integrate store into main.js. Now we can use store global in our application for that we need import store in main.js
import store from ‘./store/store’;
new Vue({
el: ‘#app’,
store: store,
render: h => h(App)
}) | https://medium.com/@jagsingh.96/vuex-basis-51c0f2086c0b | CC-MAIN-2018-51 | refinedweb | 225 | 63.49 |
I saw that there was an intention to 'use named arguments for current and all subsequent arguments'
is there any way to have the plugin automatically generate these for a new method?
e.g.
I have this function:
def foo(a:Int, b: String, c: Long) {???}
when I type:
fo<ctrl+space>
I'd end up with
foo($cursor$)
at this point it would be nice to have an intention "add all named arguments"
which if selected would produce this:
foo(a=,b=,c=)
and then cursor would jump from a to b to c just like in a live template so that they can be quickly filled in
I saw that there was an intention to 'use named arguments for current and all subsequent arguments'
This really improves readability of the code. Feature requested! 👍
Hi, Benjamin,
There is no such action. Do you often use named arguments? If so, can you describe your usage scenarios?
If you just want to check signature of a method, you can use "Parameter Info" feature (Ctrl + P). It shows names, types and default values of parameters. It is also handy if there are overloaded alternatives.
Attachment(s):
parameter info.jpg
Yeah, I use named parameters quite frequently, basically whenever the order of arguments to a method don't have a natural, well known order, and the types are primitives.
In this case named parameters makes the code more readable at the call site, and guarantees that if someone changes order at the declaration site, the call site isn't affected.
So my workflow ends up writing the method name, using that to jump to the declaration, selecting the parameters, copy them, going back to the call site, pasting them, erasing the types, adding a bunch of '=' and then filling the method while the IDE blast a bunch of red-squiggly lines at me.
edit: Yep thanks for the suggesting of ctrl+p, but unfortunately, when you have a method that has several parameters, helpful as ctrl+p is to put them in the right order, it's a lot of rote typing to get the names all filled in.
edit2: perhaps a better workflow would be to use ctrl+p to just put them in the right order initially then use the intention to add the names after I do that.
+1, would be a huge time-saver!
+1 yes, please add this feature.
There is an intention called "Add names to call arguments", but for some reason it is not always available.
edit: The intention is not always available because "named arguments are not allowed for non-Kotlin functions." | https://intellij-support.jetbrains.com/hc/en-us/community/posts/205994889-Is-there-a-feature-to-automatically-add-in-named-parameters-for-new-functions-?sort_by=votes | CC-MAIN-2019-39 | refinedweb | 437 | 66.17 |
npm install storybook-addon-fetch-mock
storybook-addon-fetch-mock
This Storybook.js addon adds
fetch() mocking using
fetch-mock.
Why use storybook-addon-fetch-mock?
If you are already using Storybook.js, you may have components that call API endpoints. And to ensure your component Storybook documentation isn’t dependent on those API endpoints being available, you’ll want to mock any calls to those API endpoints. This is doubly true if any of your components alter data on the endpoint.
Fortunately, the Storybook ecosystem has many addons to make mocking Fetch APIs easier. Any of these addons will allow you to intercept the real API calls in your components and return any mocked data response you’d like.
1 If you use XMLHttpRequest (XHR), the Mock API Request addon will serve you well, but we don’t recommend it for Fetch API mocking. Its capabilities are very basic and some Fetch API requests cannot be mocked with this addon.
2 If you are wanting to mock a Fetch API that you are writing, writing mock resolver functions with the Mock Service Worker addon might be the easiest method.
3
If you are wanting to mock a Fetch API that you aren’t writing, writing simple JavaScript objects might be the easiest method of mocking. This project, storybook-addon-fetch-mock, is a light wrapper around the
fetch-mock library, a well-maintained, highly-configurable mocking library available since 2015. It allows you to write mocks as simple JavaScript objects, as resolver functions, or a combination of the two.
A quick example
Imagine a
UnicornSearch component that uses
fetch() to call an endpoint to search for a list of unicorns. You can use the
storybook-addon-fetch-mock to bypass the actual API endpoint and return a mocked response. After following the “Installation” instructions below, you could configure
UnicornSearch.stories.js like this:
import UnicornSearch from './UnicornSearch'; export default { title: 'Unicorn Search', component: UnicornSearch, }; // We define the story here using CSF 3.0. export const ShowMeTheUnicorns = { args: { search: '', }, parameters: { fetchMock: { // "fetchMock.mocks" is a list of mocked // API endpoints. mocks: [ { // The "matcher" determines if this // mock should respond to the current // call to fetch(). matcher: { name: 'searchSuccess', url: 'path:/unicorn/list', query: { search: 'Charlie', }, }, // If the "matcher" matches the current // fetch() call, the fetch response is // built using this "response". reponse: { status: 200, body: { count: 1, unicorns: [ { name: 'Charlie', location: 'magical Candy Mountain', }, ], }, }, }, { matcher: { name: 'searchFail', url: 'path:/unicorn/list', }, reponse: { status: 200, body: { count: 0, unicorns: [], }, }, }, ], }, }, };
If we open the “Show Me The Unicorns” story in Storybook, we can fill out the “search” field with “Charlie” and, assuming
UnicornSearch calls
fetch() to, our Storybook addon will compare each mock in
parameters.fetchMock.mocks until it finds a match and will return the first mock’s response.
If we fill out the “search” field with a different value, our Storybook addon will return the second mock’s response.
Installation
Install the addon as a dev dependency:
npm i -D storybook-addon-fetch-mock
Register the Storybook addon by adding its name to the addons array in
.storybook/main.js:
module.exports = { addons: ['storybook-addon-fetch-mock'], };
Optionally, configure the addon by adding a
fetchMockentry to the
parametersobject in
.storybook/preview.js. See the “Configure global parameters for all stories” section below for details.
Add mock data to your stories. See the “Configure mock data” section below for details.
Configure mock data
To intercept the
fetch calls to your API endpoints, add a
parameters.fetchMock.mocks array containing one or more endpoint mocks.
Where do the parameters go?
If you place the
parameters.fetchMock.mocks array inside a single story’s export, the mocks will apply to just that story:
export const MyStory = { parameters: { fetchMock: { mocks: [ // ...mocks go here ], }, }, };
If you place the
parameters.fetchMock.mocks array inside a Storybook file’s
default export, the mocks will apply to all stories in that file. But, if you need to, you can still override the mocks per story.
export default { title: 'Components/Unicorn Search', component: UnicornSearch, parameters: { fetchMock: { mocks: [ // ...mocks go here ], }, }, };
You can also place the
parameters.fetchMock.mocks array inside Storybook’s
preview.js configuration file, but that isn’t recommended. For better alternatives, see the “Configure global parameters for all stories” section below.
The
parameters.fetchMock.mocks array
When a call to
fetch() is made, each mock in the
parameters.fetchMock.mocks array is compared to the
fetch() request until a match is found.
Each mock should be an object containing the following possible keys:
matcher(required): Each mock’s
matcherobject has one or more criteria that is used to match. If multiple criteria are included in the
matcherall of the criteria must match in order for the mock to be used.
response(optional): Once the match is made, the matched mock’s
responseis used to configure the
fetch()response.
- If the mock does not specify a
response, the
fetch()response will use an HTTP 200 status with no body data.
- If the
responseis an object, those values are used to create the
fetch()response.
- If the
responseis a function, the function should return an object whose values are used to create the
fetch()response.
options(optional): Further options for configuring mocking behaviour.
Here’s the full list of possible keys for
matcher,
response, and
options:
const exampleMock = { // Criteria for deciding which requests should match this // mock. If multiple criteria are included, all of the // criteria must match in order for the mock to be used. matcher: { // Match only requests where the endpoint "url" is matched // using any one of these formats: // - "url" - Match an exact url. // e.g. "" // - "*" - Match any url // - "begin:..." - Match a url beginning with a string, // e.g. "begin:" // - "end:..." - Match a url ending with a string // e.g. "end:.jpg" // - "path:..." - Match a url which has a given path // e.g. "path:/posts/2018/7/3" // - "glob:..." - Match a url using a glob pattern // e.g. "glob:http://*.*" // - "express:..." - Match a url that satisfies an express // style path. e.g. "express:/user/:user" // - RegExp - Match a url that satisfies a regular // expression. e.g. /(article|post)\/\d+/ url: '', // If you have multiple mocks that use the same "url", // a unique "name" is required. name: 'searchSuccess', // Match only requests using this HTTP method. Not // case-sensitive. method: 'POST', // Match only requests that have these headers set. headers: { Authorization: 'Basic 123', }, // Match only requests that send a JSON body with the // exact structure and properties as the one provided. // See matcher.matchPartialBody below to override this. body: { unicornName: 'Charlie', }, // Match calls that only partially match the specified // matcher.body JSON. matchPartialBody: true, // Match only requests that have these query parameters // set (in any order). query: { q: 'cute+kittenz', }, // When the express: keyword is used in the "url" // matcher, match only requests with these express // parameters. params: { user: 'charlie', }, // Match if the function returns something truthy. The // function will be passed the url and options fetch was // called with. If fetch was called with a Request // instance, it will be passed url and options inferred // from the Request instance, with the original Request // will be passed as a third argument. functionMatcher: (url, options, request) => { return !!options.headers.Authorization; }, // Limits the number of times the mock can be matched. // If the mock has already been used "repeat" times, // the call to fetch() will fall through to be handled // by any other mocks. repeat: 1, }, // Configures the HTTP response returned by the mock. reponse: { // The mock response’s "statusText" is automatically set // based on this "status" number. Defaults to 200. status: 200, // By default, the optional "body" object will be converted // into a JSON string. See options.sendAsJson to override. body: { unicorns: true, }, // Set the mock response’s headers. headers: { 'Content-Type': 'text/html', }, // The url from which the mocked response should claim // to originate from (to imitate followed directs). // Will also set `redirected: true` on the response. redirectUrl: '', // Force fetch to return a Promise rejected with the // value of "throws". throws: new TypeError('Failed to fetch'), }, // Alternatively, the `response` can be a function that // returns an object with any of the keys above. The // function will be passed the url and options fetch was // called with. If fetch was called with a Request // instance, it will be passed url and options inferred // from the Request instance, with the original Request // will be passed as a third argument. reponse: (url, options, request) => { return { status: options.headers.Authorization ? 200 : 403, }; }, // An object containing further options for configuring // mocking behaviour. options: { // If set, the mocked response is delayed for the // specified number of milliseconds. delay: 500, // By default, the "body" object is converted to a JSON // string and the "Content-Type: application/json" // header will be set on the mock response. If this // option is set to false, the "body" object can be any // of the other types that fetch() supports, e.g. Blob, // ArrayBuffer, TypedArray, DataView, FormData, // URLSearchParams, string or ReadableStream. sendAsJson: false, // By default, a Content-Length header is set on each // mock response. This can be disabled when this option // is set to false. includeContentLength: false, }, };
Configure global parameters for all stories
The following options are designed to be used in Storybook’s
preview.js config file.
// .storybook/preview.js export const parameters = { fetchMock: { // When the story is reloaded (or you navigate to a new // story, this addon will be reset and a list of // previous mock matches will be sent to the browser’s // console if "debug" is true. debug: true, // Do any additional configuration of fetch-mock, e.g. // setting fetchMock.config or calling other fetch-mock // API methods. This function is given the fetchMock // instance as its only parameter and is called after // mocks are added but before catchAllMocks are added. useFetchMock: (fetchMock) => { fetchMock.config.overwriteRoutes = false; }, // After each story’s `mocks` are added, these catch-all // mocks are added. catchAllMocks: [ { matcher: { url: 'path:/endpoint1' }, response: 200 }, { matcher: { url: 'path:/endpoint2' }, response: 200 }, ], // A simple list of URLs to ensure that calls to // `fetch( [url] )` don’t go to the network. The mocked // fetch response will use HTTP status 404 to make it // easy to determine one of the catchAllURLs was matched. // These mocks are added after any catchAllMocks. catchAllURLs: [ // This is equivalent to the mock object: // { // matcher: { url: 'begin:' }, // response: { status: 404 }, // } '', ], }, }; | https://storybook.js.org/addons/storybook-addon-fetch-mock/ | CC-MAIN-2022-33 | refinedweb | 1,719 | 56.76 |
Details
Description.
Issue Links
- is a clone of
DIRMINA-824 AbstractIoBuffer.getObject cannot handle non-serializable class
- Closed
- is cloned as
DIRMINA-622 Initialise return ByteBuffer from PoolByteBufferAllokator with 0
- Closed
Activity
I am afraid, that is not enough. However, it is much better than getting an NPE in resolveClass.
My first example was a bit too simple. In our scenario we have our own remote method invocation mechanism. This allows us to very easily change the used transport and various parameters (TCP, SSL, XML-encoded, HTTP or even using RMI itself internally). We have a class similar to this:
public class RemoteMethodInv
{
static interface NonSerialisable
{ }
String methodName;
Class[] paramTypes;
Object[] paramValues;
}
Now if we want to call a method remotely which has a parameter of type NonSerialisabe, the invocation will always fail with Mina 2. But such a remote call is completely legal, as long as the corresponding parameter value is serialisable. Even using null as value fails, and null is very well serialisable.
I am not sure why Mina 2 uses an own (anonymous) ObjectInputStream at all, while Mina 1.1.7 does not. Maybe one can adapt the anonymous input stream to use more of its superclass. The default java.io.ObjectInputStream does not have this problem, which is no surprise since it may call lookup(Class, boolean).
Mina 1.1.7 uses the very same construction :
i'm still trying to get a clue about this part of the code and the rational behind it...
You are right, Mina 1.1.7 uses the same construction. I have to confess that we patched this class quite some time ago by using ObjectInputStream and ObjectOutputStream directly. For us this works so good, that we forgot about the patch. I will try this with Mina 2 now and report any problem. However, this might not be the final solution.
Any patch will be very welcomed. We can even build a version you can test if you like.
A rather simple patch but it works at least for our scenario.
I agree with Ulrich's patch. There's no explanation of why OOS/OIS are being overridden the way they are (original SVN change is 597545:) and from all I can see they're just creating issues.
If backwards compatibility of the protocol is desired, it can be changed to always write "0" and then leave the reading portion as-is.
Seems that this is not the first time this issue has been encountered. Probably time to fix it now...
Ok, someone already proposed a patch 3 years ago, but it was for MINA 1.1.7, which is considered as dead wood, thus it was never fixed in MINA 2 too.
I have applied the patch provided in
DIRMINA-627 :
I'll build some jars and put them on my home page for those who want to test it.
Available here for testing :
Please tell me if it's ok, so that I can close the issue and start a vote for a new release asap.
Thanks !
I did not try your new patched version yet. But as stated above, we use the very same patch in Mina 1.1.7 in our product. We applied the patch in november 2008 and did not have any problems with it yet. Sorry for the late reporting.
Patch applied ? | https://issues.apache.org/jira/browse/DIRMINA-822?focusedCommentId=13006422&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-11 | refinedweb | 559 | 74.19 |
I was able to repro the problem without the patchset and could not
reproduce with the patchset. But just had a quick question while
reviewing the patches.
Oleg Nesterov [oleg@tv-sign.ru] wrote:
| We don't change pid_ns->child_reaper when the main thread of the
| subnamespace init exits. As Robert Rex
| pointed out this is wrong.
|
| Yes, the re-parenting itself works correctly, but if the reparented
| task exits it needs ->parent->nsproxy->pid_ns in do_notify_parent(),
If the task was reparented, then with patch 1/4 its parent would be
the global init (init_pid_ns.child_reaper) right ?
| and if the main thread is zombie its ->nsproxy was already cleared
| by exit_task_namespaces().
If above is true, then even if the main thread's nsproxy is NULL, does
it affect the reparented task ?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
Please read the FAQ at | http://fixunix.com/kernel/525390-%5Bpatch-2-4%5D-pid_ns-bug-11391-change-child_reaper-when-init-group_leader-exits.html | CC-MAIN-2016-30 | refinedweb | 162 | 65.01 |
One thing that came up in the retrospectives as a good thing a few times in my previous team was working in war teams. So what is a war team? To us it meant that whenever we ran into some kind of problem, a difficult bug, a big design challenge or similar we format war teams. Sometimes the war team was two people and sometimes it was the whole team. The war team focused on solving that single problem and nothing else. So what we said in the retrospects was that working in these teams was better than when everybody was working on their own tasks. I guess the explanation was that it was a great satisfaction in solving a difficult problem and also it is great working together with others. And this was in an environment where the team was sitting together in an open landscape and not in separate rooms.
What that taught us was to do as much pairing as possible when completing tasks. To be honest, it is hard to teach old dogs to sit and I guess that is why it came back in the retrospects from time to time. So what made me think about this today was that I stumbled over this old article about war rooms. It compares teams where each team member sits in their own cubicle compared to teams sitting together. I think that is the basic start to make any team more productive (more suggestions here). But once you have your team in the same room you want the team members to scramble around a few tasks rather than working on different things. As the article about war rooms point out; the team might say they don't like it but once they try it, they will probably love it...
When I wrote this the other day it made me think of another thing involving the memcmp function and the VC compiler. In the code I've seen over the years where memcmp was used it was always to find out if an area was identical or not. So the code typically looked something like this:
1: if (memcmp(a, b) == 0)
2: doSomething(a);
3: else
4: doSomething(b);
However another very common pattern is to write code like this:
1: if (!memcmp(a, b))
2: doSomething(b);
3: else
4: doSomething(a);
Turns out the latter results in faster code when compiled (at least with the VC compiler)! How is that? Turns out the compiler recognizes that you're not really interested in if memory block a is lower or greater than block b and the key to do this is the not operator. So by using the not operator the compare operation does not need to find the first difference and calculate which block is less than the other. The compiler instead generates code that just checks if the memory blocks are equal or not, which is much faster.
Ten years ago I worked on a project where we did a lot of fancy things with a number of databases. I learned a lot during that project but I also heard a funny story a coworker told me:...
The basic problem was that the developers never tested the database nor the accounting system with enough data to simulate several years of accounting information. You don't want to do that. There are also a number of other things you probably don't want to do. This article (register for a free account to read) covers a few good things to look out for. For example you there is usually a bad idea to do "IF (SELECT COUNT(*) FROM ... WHERE ...) > 0" when "IF EXISTS (SELECT 1 FROM ... WHERE ...)" works just as fine without the potential problem actually counting the number of rows.
I was working with some code a couple of weeks ago and I stumbled over this "interesting" "pattern"...
1: public class SomeApiWrapper
2: {
3: private string _method = "";
4: List<string> _arguments = new List<string>();
5: private string _instance = "";
6: private string _result = "";
7:
8: public string Method
9: {
10: set
11: {
12: _method = value;
13: ExecuteIfReady();
14: }
15: }
16:
17: public string Instance
18: {
19: set
20: {
21: _instance = value;
22: ExecuteIfReady();
23: }
24: }
25:
26: public string Argument
27: {
28: set
29: {
30: _arguments.Add(value);
31: }
32: }
33:
34: public string Result
35: {
36: get
37: {
38: if (_result.Length > 0)
39: return _result;
40: throw new InternalErrorException("Method not executed.");
41: }
42: }
43:
44: private void ExecuteIfReady()
45: {
46: if (_instance.Length > 0 && _method.Length > 0)
47: _result = SomeApi.Execute(_instance, _method, _arguments);
48: }
49: }
So what is happening here? Well, first of all you need to set the arguments, the method and the instance properties and then you just get the result. Setting the properties magically executes the method and stores the result as soon as you've provided enough information for the method call to be completed. An interesting twist however is that you cannot set the arguments last since the method is executed when you have set method and instance properties. the ordering between these two are however irrelevant...
Please don't do this. Ever. Properties should not have side effects in general and especially not side effects that are dependent of the order in which they are called. That is only confusing at best. If you read what I wrote the other day you might be confused because a DSL created using properties will have side effects. Well, no rule without an exception! Also consider properties that are lazy loaded (i.e. they do a costly retrieval of values only when asked but then remembers that value). That is also going to be OK. But the pattern you see above is not.
Preemptive comment: No this pattern was not seen in any code created by a Microsoft employee...?
So I've mentioned this topic before; choosing the right words to send the message you want. But this time I just want to mention a good read on the topic of using words. The short version is:
That's all for now. | http://blogs.msdn.com/b/cellfish/archive/2010/02.aspx?PostSortBy=MostRecent&PageIndex=1 | CC-MAIN-2014-42 | refinedweb | 1,023 | 70.63 |
Ranter
- djsumdog3161189dYou can write good things in PHP, yes. But it does have a LOT of language problems.
Until PHP7, there was no AST. There wasn't a spec for the language until Facebook wrote one. For the longest time there were no namespaces and all functions were in the default name space. There was register_globals. There is still the problem of automatic type conversion during comparison. There is no === for >= and <=. mysql_real_escape_string is a thing that exists.
Hiphop/Hack are Facebook written subsets of PHP to try to tame some of the language problems and/or to convert them to other compiled languages.
Even though PHP7 has fixed a lot of issues, and even though you can write good PHP code, it's still not a well designed language from a language perspective and you will hit a lot of these issues with the more advanced stuff you try to do with PHP.
If someone points out problems in your language, don't take it personally. You are not your code or your language preference.
- fahad32671133189d@djsumdog
PHP was never meant to be a full fledged language, necessity made it to become one. it has very humble background and yes it needs a lot of improvement but that does not make it any less than others.
It has powered a great part of the web we see today and if you are to blame it then go and check out other languages and technologies used in building web.
None of them are related but even then they are mixed up to create websites.
- ganjaman43875189dDoes devrant run on php? Yes.
Does devrant run well? Eeh lets say.
Does that make php good? Fuck no.
- inaba4512189dIs this some kind of elaborate satire and/or sarcasm or something? Or did you legit think you sounded cool when you wrote it?
Like even "it powers 90% of the web" holds more water than what you wrote
- erroronline11737189di don't get it. it is a shitty language because of issues in past versions? like firefox is shit because of netscape didn't support ajax?
because of an other syntax that your favourite language? like french is better because of these neat dashes over characters as opposed to english?
it's totally fine to prefer one language over another, but i think its childish to call these things shit in the first place.
- JoshBent27092189dI wonder when people get tired to bite the bait or do bad baits in hope for ++
- c3ypt1c10365189dI can do whatever the fuck I want.
- d4rkmindz368189dThe Problem of PHP is not the language itself. Its the shitty community. Thabks to Wordpress (and other cms) there are a lot of stupid script kiddies that think they're the guy. The post their crap code on SO and other portals. Other people really trying to learn PHP are copying those pieces of crap and getting hacked because they didn't understand the language. This is why so many people think PHP is shit.
And btw, don't use PHP 5.X for comparisons of the language. PHP 7 has improved so many things...
I also don't complain about Java 5 (^^) anymore.
- d4rkmindz368189d@d4rkmindz and other languages have their problems too so please stop bashing languages. They all deserve to be here (except this strange emoji-language)
- kurast223189dPHP is a shitty language.
- Midnigh-shcode1222189d"AS IF it were a shitty language"
... dude...
... it IS. no conditional.
and that's precisely why it's kind of easy and quick to build stuff with (up to a certain point) - similar principle to why it's easy to build a mud hut, because you're just slapping some dirt and twigs together. but try and build actual serious house with it, and you'll quickly realize/remember why "dirt" is basically a derogatory word.
- Root41391148dPHP7 is better. But it still has so very many issues.
PHP is decent, but it is objectively worse than other languages, and that isn't opinion.
I'm so glad I'm not the only one accurately describing PHP to its fanboys anymore.
- inaba4512148d@Root tbh I think if zend were to implement this...
it would be a huuuuge improvement. Having every file being an endpoint is definitely a cause for a lot of confusion and spaghetti for beginners when you start to scale your app.
A proper build in view engine would also be a big step since the ability to mix your serverside code and your clientside code the way PHP does it.
An a thing that would be an indirect improvement would be if teachers actually started teaching about clean code and proper god damn program structure
Your Job Suck?
Take a quick quiz from Triplebyte to skip the job search hassles and jump to final interviews at hot tech firms
Get a Better Job
Related Rants
- Company
- About
- News
- Swag Store
- Free Stickers
- devDucks
I'm sick of all the people who are complaining about PHP as if it were a shitty language.
Watch your mouth next time you abuse PHP or at least try to do so after reading 'about' section of devRant.
rant
phplove
php7
php | https://devrant.com/rants/1627907/im-sick-of-all-the-people-who-are-complaining-about-php-as-if-it-were-a-shitty-l | CC-MAIN-2019-09 | refinedweb | 860 | 73.58 |
You can found in this library some utilities and tasks that can be shared between multiple gulp's build processes.
npm install gulp-common-build-tasks
var common = require('gulp-common-build-tasks');
This is a wrapper over gulp that gives more functionnalities to it.
var common = require('gulp-common-build-tasks');var tasks = common.tasks();module.exports = tasks;
Namespaced tasks groups will prefix every subtasks under this namespace.
Example: A task named
.test under the namespace
application will create a
gulp task named
namespace.test.
var common = require('gulp-common-build-tasks');var tasks = common.tasks('namespace');module.exports = tasks;
tasks.setNamespace('namespace');
tasks.create(taskName, dependencies?, gulpFunction?)
tasks.create('.aTask', ['.anotherTask'], function(gulp, config) {[...]});
If a task name is prefixed with a dot
., like
.aTask, it will create a relative task name
based on the namespace.
If the dot
. is not present, it will be considered as a root task name.
This is also real when working with dependencies.
tasks;
If you wrap the dependency name or the
require() with a function you can create tasks based on
the result of this function.
#### Dependencies
{return {return configisSomeFeatureEnabled ? taskName : false;};}tasks;
{return {return configisSomeFeatureEnabled ? importedTasks : false;};}tasks;
A tasks group that provide two gulp task:
So you just need to import it in your tasks group with
tasks.import(common.scripts).
An utility that fills a config with default values like:
.jshintrcfile.
.jscsrcfile.
Jimmy Fortin | https://www.npmjs.com/package/gulp-common-build-tasks | CC-MAIN-2017-43 | refinedweb | 236 | 52.46 |
I been trying to learn Java and I been using a book call Java Concepts by Cay Horstmann and i'm having a bit of trouble. I'm doing the programming projects and it askes to construct a gregorianCalender object from a year,mnth, and day of the month. So far I got
import java.util.Calendar;
import java.util.GregorianCalendar;
public class PP21
{
public static void main (String[] args)
{
GregorianCalendar cal = new GregorianCalendar();
GregorianCalendar eckertsBirthday = new GregorianCalendar();
int dayOfMonth = cal.get(Calendar.DAY_OF_MONTH);
int month = cal.get(Calendar.MONTH);
int year = cal.get(Calendar.YEAR);
int weekday = cal.get(Calendar.DAY_OF_WEEK);
System.out.println("The month is: " + month);
System.out.println("The day is: " + weekday);
System.out.println("The year is: " + year);
}
}
Now the problem I am having is that I'm suppose to do the date and weekday that is 100 days from today. I believe it is something like cal.add or some sort, but I been trying for the last hour with no lucks. Any help would be great.
Thanks in advance
If you look at the API for the GregorianCalendar class, you'll see that the call should be:
cal.add( Calendar.DATE, 100 );
You then can query the instance for its day of the week and the date.
You can access the API at
even better, you can download the API and have a shortcut on your desktop so it is always available at the (double) click of your mouse
Last edited by nspils; 09-21-2006 at 01:59 AM.
int dayOfMonth = cal.get(Calendar.DAY_OF_MONTH);
int month = cal.get(Calendar.MONTH);
int year = cal.get(Calendar.YEAR);
int weekday = cal.get(Calendar.DAY_OF_WEEK);
So I would have to replace the cal.get with cal.get( Calendar.DATE, 100 ); ?
I'm still kinda confused. A little more help would be great.
If you call cal.add( Calendar.DATE, 100 );
the current date of cal will be 100 days after the date which is cal's "current" date. Now that you have that current date, you get day of the month, month, year, and day_of_week for that new date.
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?156034-Help-with-GregorianCalendar | CC-MAIN-2017-13 | refinedweb | 373 | 60.31 |
WSDL SOAP AnalyzerEdit online
WSDL SOAP Analyzer is a tool that helps you test if the messages defined in a Web Service Descriptor (WSDL) are accepted by a Web Services server.
After you edit and validate your Web service descriptor against a mix of the XML Schemas for WSDL and SOAP, it is easy to check if the defined SOAP messages are accepted by the remote Web Services server by using the integrated WSDL SOAP Analyzer tool (available from the toolbar or Tools menu).
- Services - The list of services defined by the WSDL file.
- Ports - The ports for the selected service.
- Operations - The list of available operations for the selected service.
- Action URL - The script that serves the operation.
- SOAP Action - Identifies the action performed by the script.
- Version - Choose between 1.1 and 1.2. The SOAP version is selected automatically depending on the selected port.
- Request Editor - It allows you to compose the web service request. When an action is selected, Oxygen XML Editor tries to generate as much content as possible for the SOAP request. The envelope of the SOAP request has the correct namespace for the selected SOAP version, that is for SOAP 1.1 or for SOAP 1.2. Usually you just have to change a few values for the request to be valid. The Content Completion Assistant is available for this editor and is driven by the schema that defines the type of the current message. While selecting various operations, Oxygen XML Editor remembers the modified request for each one. You can press the Regenerate button to overwrite your modifications for the current request with the initial generated content.
- Attachments List - You can define a list of file URLs to be attached to the request.
- Response Area - Initially it displays an auto generated server sample response so you can have an idea about how the response looks like. After pressing the Send button, it presents the message received from the server in response to the Web Service request. It may show also error messages. If the response message contains attachments, Oxygen XML Editor prompts you to save them, then tries to open them with the associated system application.
- Errors List - There may be situations where the WSDL file is respecting the WSDL XML Schema, but it fails to be valid (for example, in the case of a message that is defined by means of an element that is not found in the types section of the WSDL). In such a case, the errors are listed here. This list is presented only when there are errors.
- Send Button - Executes the request. A status dialog box is displayed when Oxygen XML Editor is connecting to the server.
Once defined, a request derived from a Web Service descriptor can be saved with the Save button to a Web Service SOAP Call (WSSC) file for later reuse. In this way, you save time in configuring the URLs and parameters.
You can open the result of a Web Service call in an editor panel using the Open button. | https://www.oxygenxml.com/doc/versions/21.1/ug-editor/topics/composing-web-service-calls.html | CC-MAIN-2020-05 | refinedweb | 509 | 63.39 |
Large Projects using Turbo Vision
Whilst using Turbo Vision I have encountered some problems - maybe someone out there knows the answers though the Borland Support Line was unable to offer assistance.
CompuServe offers a number of useful files in the Borland Forum for using far virtual tables (farvt.zip) and overlaying Turbo Vision (Borland Technical Document TI1006.zip). Unfortunately the two are not compatible since overlayed TV functions require a local (near) pointer to virtual tables.
Using the IDE to create an overlayed project, some difficulties were encountered using multiple sub-directories for the project files. With a set-up as follows the compiler had certain problems:-
C:\Project.dir\Sub_dir.1
\Sub_dir.2
\Sub_dir.3
\etc.
Using the flag option Options/Compiler/Compile via Assembler as given in the tech.doc.(TI1006) the assembler first ran out of memory whilst compiling, which was resolved by replacing the Transfer option TASM to the extended Assembler TASMX. Although this cured the 'out of memory' problems further strange things started happening - namely the linker was unable to find the Object files. A little searching provided the answer, that the assembler was being called not through the transfer mechanism but directly from the IDE compiler although the correct (TASM/TASMX) assembler was being called as set-up in the Transfer list.
This had the result that the transfer option for the OBJ destination directories (Subdir.1 etc.) was being ignored all the OBJ files were being dumped in the top (Project.dir) directory. The linker subsequently looked for the files in their respective sub-directories and not finding them produced an error.
We eventually came up with two solutions to this problem, firstly, placing all the project files in the project directory though since many files had the same name but were in different sub-directories this involved considerable renaming of source files. A possible second solution was proposed in calling a small 'c' routine named TASM that resolved the sub-directory calls (possibly a batch file would be sufficient).
So having renamed the files two further (minor) problems developed. The code size, data size and line count on the project file lines were not available and secondly, when compiling with debug information enabled the compiler
produced the error message 'Cannot read diskette in drive B(or A)' with the options 'Retry' and 'Cancel'. Selecting 'Retry' the compiler would continue though this error was repeated several times before compilation was finally completed (the computer was only equipped with one floppy).
If you have any solutions to the above problems or similar 'interesting' experiences it would be nice to hear from you.
Finally one tip in using TV, if you are not using stream resource files it certainly pays to extract all the streamable functions from the TV library files (::build, ::write, ::read and overloaded << and >> functions) and re-Making the TV.LIB. The resulting library is about half the size of the original and for overlayed applications allows the use of an overlay buffer of 64K (instead of 128-192K required for the complete library), a definite advantage when memory is tight !
Martin Anderson
Dear Mike,
The first two copies of Overload are very good. As a novice in C++,
I really learnt a lot. Well done. I have a few questions, and I wonder
if you can find the answers for me.
- I noticed a lot of C++ books and some C++ compilers are labelled with different AT&T versions. I heard AT&T version 3 supports templates, but on the other hand some people are saying the template standard is still under development. Hence I am confused. Can you tell me what each AT&T version means.
I know you are going to publish a review on Borland C++ for OS/2, and I am looking forward to reading it. How likely for you to publish reviews on or comparison among other C++ compilers, especially Watcom C/C++32 9.5, Zortech C++ 3.1, High C++ and GNU C++? I happen to have access to Salford C/386 (including C++ extension) on an evaluation license (expiring the end of July). I think I would like to submit a review of this, but I am not sure about the license agreement. If you like the idea, I think it would be better for you to speak to them on the magazines behalf.
I think I read somewhere in the second issue of Overload that a destructor can be called directly. Can you tell me how to do it and under what situation this is needed.
I declared a class in a header file which is included in a few CPP files. One of the constructors has a constant array as a default argument. It looks like I have to define the constant array before the class definition. Therefore I end up with multiple copies of the same array in the same
executable file. I think only one copy is enough but do not know how to do it. May I have some advice.
In the book review section in Overload, I think if the publication date is printed next to the ISBN would be very helpful. Can this be done?
As you mentioned you were going to do a review on "World of ObjectWindows C++" video, I think it is the right time to ask this question. In the accompanying workbook, the printed code more or less in the middle of page 39 reads something like:
class TFrameWin : public TWindow
{
public:
virtual void CMFileNew(RTMessage)
= [CM_FIRST + CM_FILENEW];
:
};
I do not understand the syntax for this virtual function declaration (the use of []). Would you explain what this system is please?
Any congratulations belong to all the contributors. I will answer your questions as best as I can.
1) A detailed description of the AT&T standards is of little use to us. The AT&T versions represent steps in the evolution of the C+ + language. The latest AT&T version is 3.0. The evolution and standardisation of the language has now been taken on by the ANSI/ISO committees. I don't think any of the current compilers fully supports the AT&T 3 standard (though I feel I will soon be corrected). To the best of my knowledge none of the current crop of compilers support exceptions (the AT&T way). As to templates, the AT&T 3 templates as defined by Bjarne Stroustrup in the ARM (Annotated Reference Manual) 2nd edition is the latest version. I believe that the ANSI/ISO committee have not yet finalised all the details. .
2) We will be looking at many C+ +
compilers in the coming editions and trying to evaluate the strengths
and weaknesses of each. In order to fully evaluate a compiler and the
components that go into making up the complete environment is a long
task and cannot be accomplished in a few weeks.
IDE's can appear glossy on the surface, its only through use that their usefulness as professional tools can be evaluated.
3) You read correctly, a destructor can be called explicitly by means of -> or . operators. In the ARM, Bjarne Stroustrup says "Explicit calls of destructors are rarely needed. One use of such calls is for objects placed at specific addresses using a new operator. Such use of explicit placement and destruction of objects can be necessary to cope with dedicated hardware resources and for writing memory management facilities".
4) If I understand your problem correctly the following method should ensure that there is only one instance of the constant array in your executable, and that all object modules use the same one.
Header file:
#ifndef ThisHeaderFile_H
#define ThisHeaderFiIe_H
#if defined(ThisIsMain)
const char MyArray[] = {"This is Overloaded"};
#else
extern const char MyArray[];
#endif
#endif
The main program should then contain the statements:
#define ThisIsMain
#include "header.h"
#undef ThisIsMain
The Other modules including this header file must not define ThisIsMain. The extern keyword indicates to the compiler that the variables concerned belong to another module and the address is then fixed up by the linker.
5) I see no reason not to. I shall try to ensure all book reviewers supply me with the date of publication.
6) Borland have extended the C+ + compiler to handle this syntax. It is not standard C++. It does, in my opinion, one of the main reasons that I prefer using OWL rather than MFC. This extension allows the programmer to trap messages by a very simple process. OWL programs do not suffer from the "switch from hell" statements that C/Windows API programs used to suffer from. Object Windows allows you to associate the response to a message with the window by means of defining a member function of the class to handle that particular message.
These member functions are
called message response functions and they contain the dispatch
index which identifies the member function that will be called. All the
message response functions have a similar syntax as follows:
virtual void MemberFunction(RTMessage) = [dispatchIndex] ;
This often looks worse than it really is because of the use of the constants and offsets, in your example:
virtual void CMFileNew(RTMessage) = [CM_FIRST + CM_FILENEW];
The CM_FIRST is the offset. The offsets can take one of the following values depending on the source of the message that you wish to trap (table is on page 99 of my OWL manual):
WM_FIRST Windows Messages
WM_USER Programmer defined window messages
WM_INTERNAL Reserved for internal use
ID_FIRST Programmer defined Child ID messages
ID_INTERNAL Reserved for internal use
NF_FIRST Programmer defined notification messages
NF_INTERNAL Reserved for internal use
CM_FIRST Programmer defined command messages
CM_INTERNAL Reserved for internal use
The internal ones can be ignored, of the others, any of the standard windows messages (WM_PAINT etc) can be trapped by using [WM_FIRST + WM_PAINT]. If you have defined a radio button on a dialog box and you want to respond to it (to gray items?) everytime it is pressed then you will trap this message by use of the [ID_FIRST + ID_MyRADBUT] dispatch index. The command messages are used primarily to pick up on the messages from the menu system. In the case of your example we will pick up one of the predefined message numbers. If you had allocated an id of 112 to a menu item in the resource editor, you then create a message response function to trap it as follows:
virtual void MenuItem(RTMessage) = [CM_FIRST + 112];
I hope that has helped to clear up
some of your problems. If this is not clear enough, let me know and
I'll put an article on OWL Basics in the next edition.
Mike,
Just a quick note to say how good Overload is and keep it up. I find the 'tutorial1 parts are pitched just right for me. I quite liked all the different fonts in issue 1, but then I'm a font junky so the people who complained there where too many where perhaps correct!
If you can't fit everything on the issue disk what about gzip? I would think in the context of overload it would be free and an advert for the FSF. I might not agree with their philosophy, but some of the results are excellent!
Keep up the good work,
Rick Stones
Mike,
"On another topic, there is a matter I should like the group liaising with Borland to raise with the company. There is a systematic error in the version 3.0 ObjectWindows for C++ User's guide, which makes the tutorial section of the guide unnecessarily confusing. In the reference section none of the class private members is documented, except for class TStreamable. As far as I can make out, this error is not corrected either in the documentation files for 3.0 or in the documentation in the 3.1 upgrade."
"As to the matter of the ObjectWindows documentation, on pp. 225, under the section 'Sample Class Header File' it states that:
All functions with protected member access are labelled to the right of the program text (declarations). All functions with public member access are not labelled. Private members are not listed.
I suppose this is done because the private members play no part in the ObjectWindows class hierarchy (i.e. private member functions in a base class cannot be accessed by classes derived from that base class). On the other hand, the private pure virtual member streamableName is mentioned as it must be replaced in all classes derived from TStreamable. This in no way affects the protection scheme as a friend of a base class can have virtual access to the derived class members.
This is the only time that I can see a need for a user of ObjectWindows to know what private members are present. In our case the only place where streamableName is used is in opstream, which is a friend of TStreamable. Opstream uses the unique name returned from calling the derived classes streamableName member function.
Normally, as this is a different chapter, I would not expect the constraint imposed on chapter 16 to apply to chapter 17. However, they have stated that the sample class entry format on p225 applies to this chapter also.
I will also forward your comments to Borland UK if you wish" - Mike
"If you don't pick up the point that private members are not documented in the reference section, you are liable to experience disorientation when you go from the tutorial to the reference section and discover that the child list, a data member referred to throughout the tutorial, is not documented. Doubts may arise whether the tutorial and the documentation fully reflect the current behaviour of the OWL software.
Borland would, I think comment along the following lines. Their policy of not documenting private members is clearly stated on p.225 of the OWUG. Documenting private members would be irrelevant, and indeed counter, to the objective of helping the programmer to use the OWL application framework successfully. Such use involves deriving classes from a set of fundamental classes whose behaviour has been adequately defined by Borland.
With most of this I am in agreement. I believe, however, the manual has the potential to mislead readers who don't read manuals consecutively and completely. To enhance understanding of key concepts such a reader may turn from the tutorial to the reference section. He may find that reading only the first two pages of the introduction to the reference section is needed to use the reference section. The false statement on the first page (p. 223) that all class members are listed and the exposition in the following paragraph could together lead the reader to believe that all class members are documented. As the windows object, introduced early in the tutorial, is a key concept in OWL, it is likely that the TWindowsObject documentation will be one of the first parts of the reference section to be consulted by the tutorial reader. He will quickly discover that the child list data member is frequently referred to in the TWindowsObject documentation and is itself undocumented. This unexpected discovery is liable to leave the reader feeling confused.
The reader described above is not entirely hypothetical; I am one such reader. What changes do I suggest to the OWL manual? First, the false statement on p. 223 should be corrected. Second, the two mentions of the private data member CreateOrder in the documentation of TWindowsObject read and write member functions are inconsistent with the implementation of the Borland policy on private member documentation; they should be removed. Third, there should be a prominent statement of the Borland policy on private members where it will not be missed by the tutorial readers who don't read manuals consecutively and completely - possibly the majority. The paragraph on the OWL Reference in the introduction (p. 2) would be a suitable place."
Okay, well we haven't heard a
complaint like this one before, but I've forwarded these comments to
the US. My feeling is that private members are, as the word says,
private, and not supposed to be used by the developer. I think you've
answered entirely reasonably. If he wants to give me a call about it,
then give him my number, and get him to give me a call. - Guy Martin -
Borland UK Ltd
Dear Mike,
The magazine is excellent. I am glad that it will also include Microsoft C++ (and I hope users of other C++ systems). I like the idea of the EMail interviews. (I.e. the Barnje Stroustrup interview) It is an easy way of contacting the great and the good with out having to send a reporter to the other side of the world. I hope that there will be more interviews with prominent persons, e.g. Messrs King (MS), Martin (Borland), Jenson (JPI) etc possibly even Gates and Khan?
Wonderful idea, would you like to do it? I am currently interviewing Tom Cargill and I have several more lined up, but any assistance would be gratefully received.
Can any one help with a problem I have? I am using radio buttons with Borland's Turbo Vision screen graphics library. I am trying to switch 48 items as one unit. The most sensible layout appears to be 4 columns of 12. As far as I can see I must do this as 4 separate sets of buttons.
Does any one know of a simple way of setting up the 4 columns of 12 buttons as one set of radio buttons? I would ask Borland but they charge for tech support these days.
Turbo Vision is limited to 16 radio buttons in a single block. This is co-incidently the number of bits in an integer, which is used to record the current button pressed by setting the appropriate bit. Borland assure me, that this can be altered by modifying it to a long (I think you may need to check anything using that field and function return types as well). This will extend your button range to 32.
IMHO, I think you have a serious design problem. Anything requiring 48 radio buttons sounds like a Visual Basic programmer struggling to get out somewhere. I suggest that you try using a drop-down combo-list, so that only one of the list can be selected at any one time. This has the advantage of only taking up a small area of screen and the list can be sorted, making the items easy to find.
Something that occurred to me whilst reading Adrian Fagg's item on multiple compiler coding in CVu V5i6. Can we bring pressure to bear on the major C++ compiler manufacturers re standardisation?
I am not asking for much. Just the same functions (using the same names) in the same header files. They should put extensions and differences into one or two separate .h/.hpp files. (There is a starting point, header extensions) The C++ market is still young enough not to be set in it's ways, I hope.
It should not be to difficult to do for future compiler releases but it does depend on co-operation between companies. This will only happen if it is seen as something the market wants. Do we want it?
When C started it was hailed as portable, C++ has inherited the
mantle. If we end up with code that is specific to a machine &
compiler what happens when "The next PC" arrives? Rewrite all the code?
We have the next platforms on the horizon (Alpha, Power PC etc). Now is
the time to press for a core standardisation.
Regards - Chris Hills | https://accu.org/journals/overload/1/3/toms_1392/ | CC-MAIN-2021-10 | refinedweb | 3,277 | 61.56 |
04 September 2009 17:57 [Source: ICIS news]
By Nigel Davis
?xml:namespace>
It predicts overall chemical industry capacity utilisation in 2010 to be 79%, up from 75% in 2009 but below the trough of 81% in 2001 and 83% in 1993.
This is hardly good news. The shape of the recovery is beginning to be called; and there is not a great deal of confidence in chemicals.
One might have expected more broadly based and cyclically protected companies to fare better as demand begins to improve but that does not look as though it will be the case. The trouble with recession this time round is that it is deep rooted and widespread.
Companies of all sorts, in myriad industries remain under pressure and consumer confidence is low. The macro-economic data look positive, and the world may be pulling out of recession faster than some expected, but all is relative.
As industry economists are wont to point out: it can take years for a sector recovery to fully take hold following a damaging recession.
Financial and commodity markets have already discounted a recovery but it is the sustainability of the recovery that is open to question.
A reflection of this is being seen across many chemicals markets. Upstream, prices have moved higher: olefins and aromatics have gained ground. But price increases have been hard won or just stalled down many product chains as companies have met customer resistance.
End-use demand is not increasing, yet. Chemicals production, prices and sales all remain depressed, although improved from the start of the year.
Chemicals production in
The disquiet over continued growth in
At a more mundane level it is right to ask just what will happen when recession-driven tax incentives are withdrawn and ‘cash for clunkers’ plans end.
The economic outlook improved last month for the first time since the third quarter of 2007 but it is far too early to take the rising arc of the upswing for granted.
The crude market is certainly overblown. Petrochemical and plastics prices have been driven higher on the back of rising oil and, thankfully, thus far sustained
But the world’s big chemicals markets in
Chemical companies are likely to be forced to face higher feedstock and energy costs in the second half, particularly if they have bought forward on rising prices.
Nomura expects to see this negative feedstock and energy cost impact on BASF margins in the first half of 2010.
If capacity utilisation remains as low as the bank suggests then times will indeed be hard for the chemicals giant and for much of the industry running into next year.
In
To discuss issues facing the chemical industry go to ICIS connect | http://www.icis.com/Articles/2009/09/04/9245373/insight-prepared-for-disappointment-in-2010.html | CC-MAIN-2015-22 | refinedweb | 453 | 59.33 |
#include <stdio.h> #include <conio.h> struct node { int data; struct node* next; }; int main() { struct node* head = NULL; struct node* second = NULL; struct node* third = NULL; head = (struct node*)malloc(sizeof(struct node)); second = (struct node*)malloc(sizeof(struct node)); third = (struct node*)malloc(sizeof(struct node)); head->data = 1; head->next = second; second->data = 2; second->next = third; third->data = 3; third->next = NULL; struct node* new1; struct node* temp1; temp1 = head; while(temp1->next != NULL) { temp1 = temp1->next; } new1 = (struct node*)malloc(sizeof(struct node)); temp1->next = new1; new1->data = 5; new1->next = NULL; while(temp1 != NULL) { printf("%d ",temp1->data); temp1 = temp1->next; } return 0; }
This is the program for inserting a node at the end of linked list. The expected output is = 1 2 3 5, 5 is the new node's value here. But the current output is = 3 5. I don't know where am I wrong. Any answer will be appreciated.
while(temp1->next != NULL) { temp1 = temp1->next; }
After this loop your
temp1 is at the end of the list and you are adding a node at the end of the list.
Now you are trying to print from
temp1 obviously you will get only 2 nodes new one and the one before it. If you want the whole list print from
head. Just before printing after adding your new node. Point temp1 to head.
temp1 = head; while(temp1 != NULL) { printf("%d ",temp1->data); temp1 = temp1->next; } | http://databasefaq.com/index.php/answer/379447/c-data-structures-linked-list-insertion-inserting-a-node-at-the-end-of-a-linked-list | CC-MAIN-2018-26 | refinedweb | 245 | 74.49 |
Redefine CDAccount from Display 10.1 so that it is a class rather than a structure. Use the same member variables as in Display 10.1 but make them private. Include member functions for each of the following: one to return the initial balance, one to return the balance at maturity, on e.
10.1 is as follows
#include <iostream>
using namespace std;
struct CDAccount
{
double balance;
double interest_rate;
int term; //months until maturity
};
void get_data(CDAccount& the_account);
//postcondition: the_account.balance and
the_account.interest_rate
//have been given values that the user entered at the key
board.
int main()
{
CDAccount account;
get_data(account);
double rate_fraction, interest;
rate_fraction = account.interest_rate/100.0;
interest = account.balance*rate_fraction*(account.term/12.0);
account.balance = account.balance + interest;
cout.setf(ios::fixed);
cout.setf(ios::showpoint);
cout.precision(2);
cout << "When your CD matures in "
<< account.term << " months,\n"
<< "it will have a balance of $"
<< account.balance << endl;
return 0;
}
//uses iostream:
void get_data(CDAccount& the_account)
{
cout << "Enter account balance: $";
cin >> the_account.balance;
cout << "Enter account interest rate: ";
cin >> the_account.interest_rate;
cout << "Enter the number of months until maturity\n"
<< "(must be 12 or fewer months): ";
cin >> the_account.term;
}
SAMPLE DIALOGUE
Enter account balance: $100.00
Enter account interest rate: 10.0
Enter the number of months until maturity
(must be 12 or fewer months): 6
When your CD matures in 6 months,
it will have a balance of $105.00 | http://www.chegg.com/homework-help/questions-and-answers/redefine-cdaccount-display-101-class-rather-structure-use-member-variables-display-101-mak-q1275798 | CC-MAIN-2016-36 | refinedweb | 236 | 52.36 |
If you have Windows Azure Table Storage and you want to access that from your phone then the best one to me to use the proxy or OData. Because then you will be able to control the number of rows as Phone has limited capacity and Azure Table Storage is massive. However, you can directly access the Table Storage from your phone application but in that case you need to hardcode your 512 bit secret key which is the golden pass to your Azure Table Storage account and you will not be doing it for sure. In a separate post I will demonstrate the capability of exposing your Windows Azure Table data as OData. Here I will show how you can add record from Windows Phone to your Windows Azure Table Storage.
Now to do that I need to create Windows Phone Application and add one small component from NuGet.
After it opens then run this command
Install-Package Phone.Storage
Once the assemblies are added to the project, let us do few cleanup job. Under the folder called “App_Start” there will be a C# code file called “StorageInitializer.cs”. Delete it as we will be doing it in the same page.
In that file it basically initializes the connection to Windows Azure Storage where we need to pass the account name and secret key with the URLs. Also we need two main namespaces to be added
using Microsoft.WindowsAzure.Samples.Phone.Storage;
using System.Data.Services.Client;
After that initialize the connection,
var resolver = new CloudStorageClientResolverAccountAndKey(
new StorageCredentialsAccountAndKey("storageacc", "XYZKEYYYYYY"),
new Uri(""),
new Uri(""),
new Uri(""),
Deployment.Current.Dispatcher);
CloudStorageContext.Current.Resolver = resolver;
After that the Entity structure will have to be created
public class Employee : TableServiceEntity
{
public string EmpName { get; set; }
}
Now assume in a button’s click you are saving the data.
string tName = "Employee";
private void btnSave_Click(object sender, RoutedEventArgs e)
{
var tableClient = CloudStorageContext.Current.Resolver.CreateCloudTableClient();
tableClient.CreateTableIfNotExist(tName,
p =>
{
var contextTable = CloudStorageContext.Current.Resolver.CreateTableServiceContext();
});
var empData = new Employee()
{
PartitionKey = "Dev",
RowKey = Guid.NewGuid().ToString(),
Timestamp = DateTime.Now,
EmpName = txtVal.Text
};
var ctx = tableClient.GetDataServiceContext();
ctx.AddObject(tName, empData);
ctx.BeginSaveChanges(asyncData => { var sRes = ctx.EndSaveChanges(asyncData); }, null);
MessageBox.Show("Saved..");
}
That’s it!!! Isn’t it so cool?
Tips:
For more in details discussion please refer to Windows Azure Toolkit for Phone at
Namoskar!!!
Wah, kya baat hai?
Great sample! Very clean and usefull! Thanks a lot!
Glad you liked it.
n what is the way if i want to read the entities from my windows phone ?? You have only told about storing not about accessing
But how to get data from storage?? | http://blogs.msdn.com/b/wriju/archive/2012/02/08/windows-phone-7-5-working-with-azure-storage-table.aspx | CC-MAIN-2014-15 | refinedweb | 440 | 50.33 |
I'm not a fan of rounded corners in the UI for most programs. I'm wondering if anyone knows of some sort of extension, stylish script, or theme that sharpens the corners of the buttons in my Firefox toolbar to the old 3.6 style.
On top of that, the new Firefox 8 update changed the way the icon for a new tab looks. Now it's this funky looking dotted square:
I preferred the old icon with the paper with a folded corner. Any way to bring it back?
Edit the userChrome.css file to get the old "new tab" icon back:
/*
* Do not remove the @namespace line -- it's required for correct functioning
*/
@namespace url(""); /* set default namespace to XUL */
#page-proxy-favicon:not(src), .tab-icon-image:not(src),
#personal-bookmarks .bookmark-item .toolbarbutton-icon:not(src) {
list-style-image: url("chrome://global/skin/icons/folder-item.png")!important;
-moz-image-region: rect(0px, 16px, 16px, 0px)!important;
}
}
Sign up for our newsletter and get our top new questions delivered to your inbox (see an example).
I found a stylish theme that changes the mentioned icon to a Firefox logo, if you just do a little editing on the script you could make it any logo you wanted.
Branding Logo as Favicon on New/Blank Tabs
asked
4 years ago
viewed
1161 times
active | http://superuser.com/questions/356057/sharpening-corners-in-firefox-8-bringing-back-old-new-tab-icon | CC-MAIN-2016-30 | refinedweb | 228 | 63.49 |
11 November 2012 18:54 [Source: ICIS news]
TORONTO (ICIS)--The chemical explosion at a pharmaceuticals plant east of ?xml:namespace>
The worker, who had suffered burns covering up to 90% of his body in Thursday’s explosion at Neptune Technologies & Bioressources’ plant in
Meanwhile, Quebec’s labour safety agency CSST suspended recovery and clean-up work at the site because of explosion risks from the remaining acetone, spokeswoman Julie Fournier said in a media briefing.
Fournier said that the exact amount of acetone remaining on the site was not known. Officials estimated that about 27,000 litres remained. Initially, they had talked about 15,000 litres. | http://www.icis.com/Articles/2012/11/11/9612969/canada-chemical-explosion-claims-third-victim.html | CC-MAIN-2014-42 | refinedweb | 107 | 59.84 |
#include "mbl_read_multi_props.h"
#include <vsl/vsl_indent.h>
#include <vcl_sstream.h>
#include <vcl_iostream.h>
#include <vcl_string.h>
#include <vcl_cctype.h>
#include <vcl_utility.h>
#include <vcl_iterator.h>
#include <mbl/mbl_parse_block.h>
#include <mbl/mbl_exception.h>
Go to the source code of this file.
Definition in file mbl_read_multi_props.cxx.
Throw error if there are any keys in props that aren't in ignore.
Definition at line 234 of file mbl_read_multi_props.cxx.
merge two property sets.
Definition at line 200 of file mbl_read_multi_props.cxx.
Print a list of properties for debugging purposes.
Definition at line 17 of file mbl_read_multi_props.cxx.
Read properties from a text stream.
Read properties with repeated labels from a text stream.
The function will terminate on an eof. If one of the opening lines contains an opening brace '{', then the function will also stop reading the stream after finding a line containing a closing brace '}'
Every property label ends in ":", and should not contain any whitespace. Differs from mbl_read_multi_props(afs) in that all whitespace is treated as a separator. If there is a brace after the first string following the label, the following text up to matching braces is included in the property value. Each property label should not contain any whitespace.
Definition at line 52 of file mbl_read_multi_props.cxx. | http://public.kitware.com/vxl/doc/release/contrib/mul/mbl/html/mbl__read__multi__props_8cxx.html | crawl-003 | refinedweb | 210 | 55.3 |
send a string to phpteeronline May 4, 2013 2:26 AM
Basically this again another new research for myself.
What I'm trying to do: send out a var from as2 as a string, to a php file that would write that string down to a txt file (test.txt)
I don't know how to do it, but I have this sample on hand ( which upload a file from local computer to server).
What I'm trying to learn from this sample is how php grab data sending over from as2, so if anyone can point out which part in the php file that holds the name of the uploading object.
very grateful,
as2:
import flash.net.FileReference;
var fileRef:FileReference = new FileReference();
var oListener:Object = new Object();
fileRef.addListener(oListener);
oListener.onSelect = function(fileRef:FileReference):Void {
tData.text = "File Name: " + fileRef.name;
}
tBrowse.onPress = function() { << tBrowse is a button
browseFiles();
}
function browseFiles(): Void {
fileRef.browse();
}
tUpload.onPress = function() {
uploadFile();
}
function uploadFile():Void {
fileRef.upload("simplefileupload.php");
}
php code "simplefileupload.php"
<?php
move_uploaded_file($_FILES['Filedata']['tmp_name'], './'.$_FILES['Filedata']['name']);
?>
1. Re: send a string to phpteeronline May 4, 2013 2:34 AM (in response to teeronline)
I have a simple fwrite php file, I just need to know what goes to $somecontent so the file name will be written on test.txt
for example, $somecontent = $_FILES['Filedata']['tmp_name'], './'.$_FILES['Filedata']['name'] << will this do it????
<?php
$filename = 'test.txt';
$somecontent = "something";
}
?>
2. Re: send a string to phpteeronline May 5, 2013 8:00 PM (in response to teeronline)
Hi, I have no idea what to google in this case, I tried a few, none comes simple explanation, that or doing something complicated. Can anyone point me to the right direction please
3. Re: send a string to phpteeronline May 6, 2013 11:34 AM (in response to teeronline)
Still looking for way to transfer a text from Flash to Php and have Php write that text down to test.txt.
Any help will be very grateful.
Thank you for reading
4. Re: send a string to phpteeronline May 6, 2013 11:39 PM (in response to teeronline)
Hi, I found a online tut, I have these codes on my swf, I don't why it isn't working, do I need to import some lib?
var urlReq:URLRequest = new URLRequest ("getVars.php");
submitT.addEventListener(MouseEvent.CLICK, send2php); <<submitT is a button
function send2php(evt:MouseEvent):void {
// Set the method to POST
urlReq.method = URLRequestMethod.POST;
// Define the variables to post
var urlVars:URLVariables = new URLVariables();
urlVars.userName = 'myUsername';
urlVars.userPass = 'myPassword';
// Add the variables to the URLRequest
urlReq.data = urlVars;
// Add the URLRequest data to a new Loader
var loader:URLLoader = new URLLoader (urlReq);
// Set a listener function to run when completed
//loader.addEventListener(Event.COMPLETE, onLoginComplete);
// Set the loader format to variables and post to the PHP
loader.dataFormat = URLLoaderDataFormat.VARIABLES;
loader.load(urlReq);
}
And these on getVars.php,
<?php
//Grab username and password variables
$username = $_POST['userName'];
$password = $_POST['userPass'];
// for additional variables use the &
// success=true&username=$username&password=$password
echo $username;
echo $password;
echo "success=true";
?> | https://forums.adobe.com/thread/1204970 | CC-MAIN-2018-39 | refinedweb | 521 | 56.86 |
Self vs @: its a decision, not a rule.
As a new programming student, I grasp for black and white rules to follow when learning a new concept, and the case of
self vs.
@ was no different. It was not clear when I should use
self and when I should use
@ to access instance variables. I researched in books and online, looking for a rule that would eliminate my confusion. Instead of finding one answer, I found a whole variety. Unfortunately, for the beginner in me at this stage, it seems there is no straight answer to this question.
What I realized, however, is
self vs.
@ is a question and a decision I will face every time I write a custom class and/or instance method in object oriented programming. This fact points to a subtler side of problem solving that web development requires and illuminates a next level of sorts in my learning: designing code. If level one is you can find a solution to the problem, this next level is you can find multiple solutions from which you’ll pick the best option. Designing code is understanding the decisions you face as you program and knowing the advantages and disadvantages of your options. This is what is required when faced with the situation of
self vs.
@.
So, let’s take a closer look…
The decision: use
self.var or
@var
First, understand the fundamental difference: the former calls a method that returns the instance variable, and the latter does not — it is the instance variable.
What can be confusing is that
self.var appears to be calling
self.instance_var because it happens to share the same name as the
instance_var. In actuality, however,
self.var calls
self.a_method. In the scenario of accessing an instance variable,
a_method is the getter or setter method. In order to use
self, these getter and/or setter methods need to exist either explicitly with a method you write or implicitly with
attr_reader,
attr_writer, or
attr_accessor.
Its easy to skip over the word ‘self’ altogether and just know that it is necessary in the same way ‘@’ is necessary to indicate an instance variable.
self, however, is actually doing a little more.
self references the object.
For example…
class Dog
attr_reader :breed
def initialize(breed)
@breed = breed
end
def display_breed
puts self.breed
end
end
fido = Dog.new('beagle')
fido.display_breed
In the code above, we have a class of
Dog, and we are choosing to do two things: 1.) make a new instance of
Dog,
fido = Dog.new('beagle') and 2.) call the
display_breed method on
fido. When we do #1,
fido (as well as any instance of Dog) is initialized with an instance variable
@breed whose value is passed in as an argument. When we do #2, the
display_breed method prints
self.breed. What is
self.breed though? Here
self refers to the object
fido, which calls the implicit breed getter method. This getter method holds the value of
@breed: ‘beagle’. So
display_breed prints
beagle.
Secondly, consider the trade-offs of each option:
A. Private vs. Public
self.var is public, and therefore can be accessed outside the class. Depending on the circumstance, this public state could cause problems because the data is no longer protected. Note: one exception is if the getter and setter methods are explicitly made private with the ‘private’ keyword. In this case, these methods cannot be accessed outside the class.
@var is private, and therefore cannot be accessed outside the class. Obviously, this private state puts limitations on the code, which is a key concept in object oriented programming.
B. Maintenance/Encapsulation/Custom logic
self can actually be used on any instance method not just the accessor methods. One advantage to using
self is any custom logic pertaining to an instance variable is easier to maintain as the code is written in one place.
Writing this custom logic with
@var would require you to make changes in many places in your code when a revision is required at a later time.
C. Cleanliness/Readability
Code littered with
self can be harder to read due to the added text.
Note 1:
self is not required for getter methods. In the example of
fido, the
display_breed method could have been written as:
def display_breed
puts breed
end
Note 2:
self is required for calling setter methods. Without self, the syntax of assigning a value to the instance variable via the setter method is the same as the syntax of assigning a value to a local variable
method_or_var = value, and Ruby would interpret the value as a local variable instead.
D. Performance
@var is faster.
Lastly, decide what is the most important factor to accommodate in your code. Over time, a developer might consistently choose one factor as more important than others, and this becomes his/her preference.
Which factors do you deem as most important to consider when deciding to use
self or
@? What is your preference? | https://medium.com/@jocie.moore/self-vs-its-a-decision-not-a-rule-5beff8062a35 | CC-MAIN-2018-51 | refinedweb | 827 | 65.42 |
fabs (3p)
PRO>
double fabs(double x); float fabsf(float x); long double fabslUpon successful completion, these functions shall return the absolute value of x. If x is NaN, a NaN shall be returned. If x is ±0, +0 shall be returned. If x is ±Inf, +Inf shall be returned.
ERRORSNo errors are defined. The following sections are informative.
EXAMPLES
Computing the 1-Norm of a Floating-Point VectorThis example shows the use of fabs() to compute the 1-norm of a vector defined as follows:
norm1(v) = |v[0]| + |v[1]| + ... + |v[n−1]|
#include <math.h>
double norm1(const double v[], const int n) { int i; double n1_v; /* 1-norm of v */
n1_v = 0; for (i=0; i<n; i++) { n1_v += fabs (v[i]); }
return n1_v; } | https://readtheman.io/pages/3p/fabs | CC-MAIN-2019-09 | refinedweb | 128 | 67.35 |
- Introduction
-
3.3. Accessing Individual String Characters
You want to process individual characters within a string.
Technique
Use the index operator ([]) by specifying the zero-based index of the character within the string that you want to extract. Furthermore, you can also use the foreach enumerator on the string using a char structure as the enumeration data type.
The string class is really a collection of objects. These objects are individual characters. You can access each character using the same methods you would use to access an object in most other collections (which is covered in the next chapter).
You use an indexer to specify which object in a collection you want to retrieve. In C#, the first object begins at the 0 index of the string. The objects are individual characters whose data type is System.Char, which is aliased with the char keyword. The indexer for the string class, however, can only access a character and cannot set the value of a character at that position. Because a string is immutable, you cannot change the internal array of characters unless you create and return a new string. If you need the ability to index a string to set individual characters, use a StringBuilder object.
Listing 3.4 shows how to access the characters in a string. One thing to point out is that because the string also implements the IEnumerable interface, you can use the foreach control structure to enumerate through the string.
Listing 3.4 Accessing Characters Using Indexers and Enumeration
using System; using System.Text; namespace _3_Characters { class Class1 { [STAThread] static void Main(string[] args) { string str = "abcdefghijklmnopqrstuvwxyz"; str = ReverseString( str ); Console.WriteLine( str ); str = ReverseStringEnum( str ); Console.WriteLine( str ); } static string ReverseString( string strIn ) { StringBuilder sb = new StringBuilder(strIn.Length); for( int i = 0; i < strIn.Length; ++i ) { sb.Append( strIn[(strIn.Length-1)-i] ); } return sb.ToString(); } static string ReverseStringEnum( string strIn ) { StringBuilder sb = new StringBuilder( strIn.Length ); foreach( char ch in strIn ) { sb.Insert( 0, ch ); } return sb.ToString(); } } } | http://www.informit.com/articles/article.aspx?p=102193&seqNum=4 | CC-MAIN-2019-04 | refinedweb | 336 | 50.23 |
"CAMBODIA signs Ottawa treaty: no more landmines", proclaims one of many
banners strung around Phnom Penh in honor of the landmine ban treaty signed in Canada
early this month. If only it were that simple.
The reality, experts say, is that the euphoria over a Nobel Peace Prize and an international
mine ban needs to be tempered by a strong dose of realism.
"You can't be anti-ban - undoubtedly the ban is a good thing. But someone experienced
in the field has to put an experienced field view forward. The ban will not be as
effective as anyone hopes or dreams it will be," said Leonard Kaminski, project
coordinator of the demining agency The HALO Trust.
The ban requires signatory nations to stop production, use and export of anti-personnel
mines; pass domestic anti-mine laws; destroy all stockpiles within four years; and
facilitate demining and help for mine victims. Mine injuries kill or maim about 200
people a month in Cambodia.
In response, the Cambodian government has drafted a mine law. Approved by the Council
of Ministers Nov 28, the law prohibits "use, production, holding, business,
import and export" of anti-personnel mines except for use in training.
But Kaminski doubted the effectiveness of such laws. He pointed to Russia, Vietnam
and China, which between them produce over 95% of the mines found by HALO in Cambodia,
but have not signed the treaty.
He emphasized that mines are very cheap, effective weapons. In conflict situations
such as Cambodia's, he said that "there is a market [for mines], and someone
will produce things for that market".
In addition, experts worry that non-signing states, especially those near conflict
zones, may become booming mine production centers.
"Mines may even become more readily available" as an unintended result
of the ban, said Ian Brown, program director of Mines Advisory Group (MAG).
Yet Emma Leslie, press officer of the Cambodia Campaign to Ban Landmines, maintained
that even without the non-signers - including the United States - the ban serves
a useful purpose.
"Getting countries to destroy their stockpiles... dramatically reduces the amount
of mines available in the world. And [the ban] stigmatizes the whole issue [of using
mines]," she said.
That stigma has certainly not trickled down to the resistance fighters in Cambodia,
who are still laying homemade devices, demining agencies confirm. Four days after
the Nobel prize was announced, Khmer Rouge radio claimed its forces had the right
to use "any type of weapon" to defend Cambodia's sovereignty.
"You can take any mortar and the detonator from a grenade and you have a fragmentation
mine," noted Kaminski. "Making explosive devices cannot be controlled."
General Ko Chean, commander of Military Region Five, said that the Khmer Rouge resistance
was using homemade mines, often made of leftover shells stuffed with TNT. "These
homemade mines are as powerful as the real mines," he said.
The resistance, for its part, asserted that the government side is laying mines,
despite repeated government claims to the contrary.
"They are using the old mines they have possessed against us in O'Smach. They
use the Chinese and Russian made mines to support their operations," resistance
spokesman Puth Chadarith said by phone Dec16.
But independent sources said they could not confirm use of commercial mines since
the current civil fighting broke out in July. A September HALO report on the mines
situation on Rte 68 near the resistance base of O'Smach reads: "What is important
is that neither the Funcinpec/Khmer Rouge alliance, nor the CPP forces appear to
have stocks of manufactured mines available."
While government forces around O'Smach may not be laying mines - the weapons are
largely defensive, rather than offensive, experts note - the issue of Cambodia's
mine stockpiles remains murky.
Gen Neang Phat, Defense Ministry chief of military information, told the Post: "The
ministry does not provide the soldiers at the front lines with mines, and even in
the Ministry stock there are no mines at all."
Officials from the quasi-government Cambodian Mine Action Center (CMAC) also said
they had no information about government stockpiles, yet NGOs in the field avow that
large stores do exist.
"They certainly have stockpiles, no question," said Kaminski. "UNTAC
saw them, every second person in demining saw them... I am unaware of any destruction
or deployment or sale of these stockpiles."
"They definitely have mines. How many, that is the question," agreed Serge
Dumortier, coordinator of Handicap International's mine department. He added that
before the government signed a peace deal with Pailin, mines were regularly trucked
to the area from Phnom Penh.
The stockpile situation will not be assessed until after the National Assembly passes
the mine law, according to Niem Chouleng, CMAC's assistant director. He said that
a committee would be formed to count the stocks within 90 days of the bill's passage.
But he estimated the count would take up to 18 months to do.
In line with the Ottawa treaty, the draft law requires CMAC to destroy all Cambodia's
mine stocks within a year of the law's enactment. But the law also reflects a loophole
in the treaty which allows countries to retain "a number" of mines for
training purposes. "The amount of such mines shall not exceed the minimum number
absolutely necessary," reads article 3(1).
MAG's Brown noted that this article was a significant "get-out clause"
that could allow governments to keep as many mines as they wish.
"Of course we support the ban - the only way to stop landmines is to stop making
them and destroy what is above the ground," he added. "[But] the ban is
going to have zero effect [on Cambodia] for the next however many years simply because
of the number of mines still in the ground."
Brown also said that while the ban bandwagon has generated a lot of money for the
issue, he is concerned about where it will all end up.
"I hope money doesn't go into newfangled, sexy [mine clearance] research projects
which don't actually do a great deal," he said, observing that many such projects
have close links to the companies that make mines in the first place, and also to
government defense departments.
HALO officials are concerned that too much emphasis on the "feel-good"
ban will result in less emphasis, financial and otherwise, on the other aspects of
dealing with landmines.
"The mine problem is multifaceted: the ban; awareness; clearance; emergency
services; self-respect and orthopedics; and skills training," said Paul Heslop,
HALO's program manager. "Nobody should focus on any one of them... ban the landmines
is a bit of a fad, when essentially the problem will continue for a lot longer."
Those who work with victims agree. "Many of us are concerned that not enough
attention is being paid to the already huge numbers of amputees - we have to make
a lifetime commitment to their care," said Carson Harte, principal of the Cambodian
School of Prosthetics and Orthotics, who attended the Ottawa conference.
In response, Leslie noted that the language of the ban itself promotes demining and
victim assistance. "In fact [the drive for the ban] has actually promoted the
whole process ... without question it's a promotion of those issues, not taking away
from them." She added that during the Ottawa conference, governments pledged
between $200 and $250 million for demining, including $87 million from the US.
But that money will be a long time in reaching Cambodia, according to Dumortier.
"I went to USAID, and they said we'll never see the color of this money,"
he reported. He was told the process of getting the funds through US Pentagon bureaucracy
could take four or five years.
In the end, experts agree, the main issue for Cambodia is the human cost of mines
already - and still being - laid. Asked Heslop: "In the past five years the
ban has been talked about, how many mines has it talked out of the ground, and how
many have been taken out one leg at a time?" | http://www.phnompenhpost.com/national/after-ban-and-nobel-what-about-mines | CC-MAIN-2017-34 | refinedweb | 1,348 | 59.64 |
So I'm trying to make my junit test into an executable so I can execute it with a .bat file. Does anybody know how to do this? The way I've been trying to use is turning it into an executable .jar file. If there is a better way let me know. How do I generate a main method for this script so I can export it as an executable .jar? import com.thoughtworks.selenium.*; import org.junit.After; import org.junit.Before; import org.junit.Test; import static org.junit.Assert.*; import java.util.regex.Pattern; public class Hello extends SeleneseTestCase { private Selenium selenium; @Before public void setUp() throws Exception { selenium = new DefaultSelenium("localhost", 4444, "*firefox C:\\Program Files (x86)\\Mozilla Firefox\\firefox.exe", ---------------------------- Please feel free to add me on skyep - ScytheMarketing I'm willing to make a donation to anybody willing to help me. | https://www.blackhatworld.com/seo/need-somebody-savy-with-selenium-java.736501/ | CC-MAIN-2017-51 | refinedweb | 148 | 50.12 |
Graph Metrics Manually on a CloudWatch Dashboard
If a metric has not published data in the past 14 days, you cannot find it when searching for metrics to add to a graph on a CloudWatch dashboard. Use the following steps to add any metric manually to an existing graph.
To add a metric that you cannot find in search to a graph
Open the CloudWatch console at.
In the navigation pane, choose Dashboards and select a dashboard.
The dashboard must already contain a graph where you want to add the metric. If it does not already, create the graph and add any metric to it. For more information, see Add or Remove a Graph from a CloudWatch Dashboard.
Choose Actions, View/edit source.
A JSON block appears. The block specifies the widgets on the dashboard and their contents. The following is an example of one part of this block, which defines one graph.
{ "type": "metric", "x": 0, "y": 0, "width": 6, "height": 3, "properties": { "view": "singleValue", "metrics": [ [ "AWS/EBS", "VolumeReadOps", "VolumeId", "vol-1234567890abcdef0" ] ], "region": "us-west-1" } },
In this example, the following section defines the metric shown on this graph.
[ "AWS/EBS", "VolumeReadOps", "VolumeId", "vol-1234567890abcdef0" ]
Add a comma after the end bracket if there is not already one, and then add a similar bracketed section after the comma. In this new section, specify the namespace, metric name, and any necessary dimensions of the metric you are adding to the graph. The following is an example:
[ "AWS/EBS", "VolumeReadOps", "VolumeId", "vol-1234567890abcdef0" ], [ "
MyNamespace", "
MyMetricName", "
DimensionName", "
DimensionValue" ]
For more information about the formatting of metrics in JSON, see Properties of a Metric Widget Object.
Choose Update. | https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/add_old_metrics_to_graph.html | CC-MAIN-2019-13 | refinedweb | 275 | 61.97 |
Ant using Antlr4.
What’s Autocomplete?
Assume the following grammar:
// Parser query: Count Fruits EOF; // Lexer Count: [Cc] [Oo] [Uu] [Nn] [Tt]; Fruits: "[0-9a-zA-Z]"; Ws: [\s] -> skip;
This grammar would produce a parser that would match the word “count” followed by any alphanumeric string in quotes, both strings are case insensitive, and any number of spaces between the two words are acceptable. For examples: count “apples”
Now, assume we want to present our users with a GUI input field, and then do two things. First, we want to make sure that whatever they type in is a valid input according to the grammar defined above. And second, as the user is typing we want to show suggestions about what can be typed next.
Let me explain the second point a bit more with a table of some permutations:
From the table above, we can see that autocomplete tries to suggest possible input based on the current parser and lexer token (more about this a bit later). Note, the list of fruits for the Fruits token would probably come from some sort of a storage, in-memory, backend, whatever it might be. While the Count token can be suggested as is.
Below we will focus on Fruits Rule and how to create an autocomplete for it.
Autocomplete & Errors – Two Good Friends
An important point to understand is that autocompletion assumes an invalid user input according to the defined grammar. That is, if we try to validate a partial (incomplete) user’s input, the parser will throw an error. If an error was thrown, we should be able to figure out where parser stopped, what rule or token was expected when the execution stopped and what text the parser was unable to match according our grammar.
So, errors you say, right? How about we hook into these errors, take a look around and see if we can create autocomplete functionality based on the errors the parser produces.
A side note. Code samples below are written in Javascript. Antlr4 has a Javascript target that will convert your grammar into Javascript code. The language choice is also dictated by the task – autocompletion is primarily a frontend job, and Javascript is a frontend language. Besides, it’s really cool to be able to validate grammar right in the browser, avoiding the server side altogether.
Another Side note. The examples below should easily translate to the language of your choice if there is a need.
Let’s begin by creating an error listener:
var ErrorListener = require('antlr4/error/ErrorListener').ErrorListener; function TestGrammarErrorListener() { ErrorListener.call(this); this.partialFruit = null; this.errors = []; return this; } TestGrammarErrorListener.prototype = Object.create(ErrorListener.prototype); TestGrammarErrorListener.prototype.constructor = TestGrammarErrorListener;
Here, we import the standard Antlr4 Error listener, and subclass it with our own
TestGrammarErrorListener. Important points to notice:
partialFruit property – will hold the string that the parser failed on
errors array – will hold a list of errors that the
ErrorListener caught.
I will not explain how to attach the error listener to your parser or tree walker. There are plenty of examples of that.
Now, that we have the error listener created and attached to our parser, we need to do one important change – override the actual error handling method.
TestGrammarErrorListener.prototype.syntaxError = function(recognizer, offendingSymbol, line, column, msg, e) { this.errors.push(arguments); this.partialFruit = null; var typeAssistTokens = ["Fruits"]; var parser = recognizer._ctx.parser, tokens = parser.getTokenStream().tokens; // last token is always "fake" EOF token if (tokens.length > 1) { var lastToken = tokens[tokens.length - 2], tokenType = parser.symbolicNames[lastToken.type]; this.tokenType = tokenType; if (typeAssistTokens.indexOf(tokenType) >= 0) { this.partialFruit = lastToken.text; } } };
Here we override
syntaxError method, that is called every time (should be at most once per parse job) the parser encounters a mismatch in input according to the grammar. Let me explain the code a bit more.
typeAssistTokens is a list of tokens which will activate our type assist. See the grammar (tokens are the left side of the rule definitions – everything before a colon).
Then, we ask the parser to give us a list of tokens that were parsed. We take the last token – that’s the token where the parser failed and stopped. Then we need to check the type of the last token. If it’s of type we’re interested in (Fruits in this case), then we extract the text from that token and that’s the text we can use for type assist. That text is assigned to
partialFruit property.
So, if
partialFruit is not null, then we need to active type assist. Fire off some ajax requests, do some wildcard queries to the database, or something of this nature. Additionally, you can use
tokenType property to differentiate between different type assist sources, when your grammar gets more complex.
Help! Nothing works!
There are a few interesting quirks I encountered along the way. Check them out before giving up on parsers and taking out your regular expression hack tool.
My
ErrorListener Doesn’t Consistently Trigger
I am glad you asked. Take a look at the grammar above once more. See the EOF token? That’s a special magical token to tell the lexer to parse everything, and I mean this time for real, everything, no cheating. Lexer’s behaviour is somewhat odd. It tries so hard to find a token in user’s input that sometimes it goes too far and leaves out some trailing input as long as everything before it matched. I didn’t dig any deeper for a better explanation. If you have it, please let me know. However, adding
EOF to the end of your grammar should do the trick, and probably break some other things along the way 🙂
I Get a Wrong Token in Error Listener
So, your syntaxError method is triggered, but the token you get there is not the token you’re expecting. Say, you expected the Fruits token, but you got the Count token instead.
The problem lies in the Lexer again. The Lexer will try to match any of the rules against the input it has, without any knowledge of precedence of these rules – that’s Parser’s job. So, if you have a very liberal token,
say: SomeRule: (\w+); the Lexer will match some of your input according to that rule, but the parser will fail at that, because that’s not the rule it expected.
How do you solve the problem? Well, avoid very liberal rules if you can. If you cannot, as it’s probably the case, then hide liberal rules in lexing modes.
Lexing modes, essentially split your lexing rules into isolated namespaces or sublexers. Anything inside a sublexer is not visible to the main lexer. You can activate sublexers based on some trigger conditions. For instance, in our Fruits rule above, we can active the sublexer once an opening quote is encountered, and exit the sublexer, once the closing quote is encountered. And then we can move out the Fruits regular expression into the sublexer, thus avoiding a liberal lexing rule in the global lexing space.
This trick will ensure that you get the right tokens in your
syntaxError method.
Conclusion
Autocomplete functionality with Antlr4 turned out to be quite easy to implement and is very scalable in terms of differentiating between different autocomplete sources based directly on grammar rules. The entire autocomplete logic resides right in the browser which provides you with flexibility to trigger autocomplete as you see fit. On the other hand, Antlr4 ships other language targets, meaning, that if there is ever a need, autocomplete functionality can easily be moved to the server side. This can prove to be useful, should your grammar grow in size and complexity and the frontend is not performant enough to cater for those needs, or if you are willing to keep your parser completely closed source. | https://www.rapid7.com/blog/post/2015/06/29/how-to-implement-antlr4-autocomplete/ | CC-MAIN-2021-21 | refinedweb | 1,308 | 63.7 |
Artifact ae38050a58331844d7a619450604efeb9742ee4b:
- File doc/parts/dsl_officer: 6858)
[list_begin definitions] [comment {- - -- --- ----- -------- -------------}] [call [cmd alias] [arg name] [const =] [arg name']...] [call [cmd alias] [arg name]] This is a structuring command, for the command hierarchy. Its main uses are the creation of alternate command names, and of shortcuts through the command hierarchy. [para] For example, [syscmd stackato]'s command specification for alias management is written using deep nesting and uses aliases to provide the look of a flat namespace to application users. [para] In the first form the [arg name] is given the explicit path to the actor the name is an alias for. In the second form the alias implicitly refers to the immediately preceding [term officer] or [term private]. Note that "immediately" is interpreted at the current level. The system is [emph not] looking into a nested specification for its last command. [list_begin arguments] [arg_def string name] The name of the alias. [arg_def string name'...] The path to the actor, as list of names. [list_end] [comment {- - -- --- ----- -------- -------------}] [call [cmd common] [arg name] [option -extend] [option --] [arg text]] This is another structuring command, for structuring the specification itself instead of the command tree it declares. [para] It creates named values, usually code blocks, which can be shared between specifications. Note that while each block is visible in the current [term officer] and its subordinates, parents and siblings have no access. [para] An example of such a block would be [example { common *all* { option debug { Activate client internal tracing. } { undocumented list when-complete [lambda {p tags} { foreach t $tags { debug on $t } }] } } }] This example defines an option to access the subsystem for debug narative (See package [package Tcllib]). The example is actually special, as the block named [const *all*] is reserved by the framework. This block, if defined, is automatically included at the front of all [term private] specifications, i.e. shared across all the privates specified underneath this [term officer]. A very important trait for the [term option] in the example, as it makes the debug setup available to all privates without having to explicitly include the block, and possibly forgetting such. [para] Generally speaking, the framework reserves all blocks whose name begins with a star, i.e [const *], for its own use. [para] Using option [option -extend] will change the behaviour to extend inherited content instead of writing over it. [para] Using option [option --] will prevent misinterpretation of the following argument as option, even if it begins with a dash. [list_begin arguments] [arg_def string name] The name of the common block. [arg_def string text] The text of the block. [list_end] [comment {- - -- --- ----- -------- -------------}] [call [cmd default]] This command sets up a special kind of alias. The last [term private] or [term officer] is set as the default command to use at runtime. This means that if during "Dispatch" phase the currently processed word does not match any of the commands known to this [term officer] this default is used. If no default is specified an error will be thrown instead. [comment {- - -- --- ----- -------- -------------}] [call [cmd description] [arg text]] This command declares the help text of the [term officer]. [comment {- - -- --- ----- -------- -------------}] [call [cmd intercept] [arg cmdprefix]] [call [cmd ehandler] [arg cmdprefix]] [emph Note:] While the form [cmd ehandler] is still usable, it is deprecated and will be removed in a future release. This is an advanced command which should normally only be specified at the top of the whole hierarchy (from which its value will automatically propagate to all subordinates). [para] At runtime the framework will call the specified command prefix with a single argument, a script whose execution is equivalent to the phases [term Parsing], [term Completion], and [term Execution] of the framework, as described in [term [vset TITLE_FLOW]]. The handler [emph must] call this script, and can perform any application-specific actions before and after. [para] This handler's main uses are two-fold: [list_begin enumerated] [enum] Capture and hande application-specific errors which should not abort the application, nor shown as Tcl stacktrace. [enum] Cleanup of application-specific transient state the [term parameter] callbacks (See [term [vset TITLE_DSL_PARAMETER]]) and/or actions may have set during their execution. This is especially important if the interactive command line shells of the framework are enabled. Without such a handler and its bespoke cleanup code transient state [emph will] leak between multiple commands run from such a shell, something which is definitely not wanted. [list_end] [comment {- - -- --- ----- -------- -------------}] [call [cmd custom-setup] [arg cmdprefix]] This is an advanced command which should normally only be specified at the top of the whole hierarchy (from which its value will automatically propagate to all subordinates). [para] When called multiple times, the specified commands accumulate. This makes it easy to specify several indepedent customizations. [para] At runtime the framework will invoke all the specified commands with a single argument, the command of the actor to initialize. The command prefix is then allowed to modify that actor as it sees fit. The common use case will be the extension of the object with additional subordinates. An example of this is the package [package cmdr::history] which provides a command [cmd cmdr::history::attach] to add the history management commands to the actor in question. [comment {- - -- --- ----- -------- -------------}] [call [cmd officer] [arg name] [arg script]] This command creates a named subordinate [term officer] with its specification [arg script] of officer commands as described here. [comment {- - -- --- ----- -------- -------------}] [call [cmd private] [arg name] [arg script] [arg cmdprefix]] This command creates a named subordinate [term private] with its specification [arg script] of private commands (See [term [vset TITLE_DSL_PRIVATE]]), and a command prefix to invoke when it is chosen. [para] This command prefix is called with a single argument, the [package cmdr::config] instance holding the [term parameter]s of the private. [para] For an example see section [term {Simple backend}] of [term [vset TITLE_DSL]]. [comment {- - -- --- ----- -------- -------------}] [call [cmd undocumented]] This command excludes the [term officer] (and its subordinates) from the generated help. Note that subordinates reachable through aliases may be included, under the alias name, if they are not explicitly excluded themselves. [list_end] | https://core.tcl-lang.org/akupries/cmdr/artifact/ae38050a58331844 | CC-MAIN-2019-26 | refinedweb | 999 | 52.29 |
by Michael S. Kaplan, published on 2010/09/10 07:01 -04:00, original URI:
The question might seem familiar to some:
Our app could be deployed on Windows XP/2003 which doesn?
Seem familiar?
Well, as I suggested in this blog, this other blog, and thisthird blog, there is really no choice in the matter -- if you want to use a culture to a machine that does not have it, you have no choice but to install a custom culture on the box (ideally it is mostly or completely based on the culture that one is trying to synthetically emulate).
\A follow-up question someone else asked me, offline, was whether they really had to base the custom culture on the actual one, since that could potentially be more difficult to do.
Now in practice for most people (including the one asking the question here), the requirement is just fro resource loading.
This can make it tempting to not work very hard to fill in the complete culture accurately.
Though since any other applications running on the machine will also find this culture, it really makes the most sense to do as much as you can to do a quality job.
You're not just working for yourself here; you're working for the whole machine and any managed code running on it (in the case of Vista and later these cultures will be available as locales in native code as well). Doing the job halfway definitely isn't good enough....
ErikF on 13 Sep 2010 6:18 AM:
Fortunately I've never had to do this for any of my programs, but wouldn't this fit under the category of a supplemental custom culture? A site that came up when I searched about this (en.csharp-online.net/Using_Custom_Cultures%E2%80%94Public_Custom_Cultures_and_Naming_Conventions) suggests using either GUIDs after the language name or the private "x-" namespace. From what I can tell in the e-mail that you received, this would work fine because the culture is being set explicitly (I'm not sure how well it would work if the culture was being set automagically!)
Michael S. Kaplan on 13 Sep 2010 12:51 PM:
If you load up a built-in culture on a machine as your source then it is a Windows-only culture there, but when you install it on a machine where it is not loaded it essentially becomes a regular custom culture....
No x- names needed to handle the general case here, which is just to support a particular culture.
go to newer or older post, or back to index or month or day | http://archives.miloush.net/michkap/archive/2010/09/10/10060138.html | CC-MAIN-2017-17 | refinedweb | 440 | 62.01 |
Deep Learning AMI with Source Code (CUDA 8, Ubuntu)Amazon Web Services | 2.5_Jan2018
Linux/Unix, Ubuntu 16.04 - 64-bit Amazon Machine Image (AMI)
Keras, tensorflow etc. not installed
Like others have noted, Keras, Tensorflow are missing and Amazon support wants me to pay to get it sorted out. No thanks!
- Mark review as helpful
Great AMI with Python3
As others pointed out, use python3.
Most of the major frameworks are installed with GPU enabled.
Of course we can instantly use the Gluon.
Great AMI
Saves lots of time in searching for CUDA drivers, cuDNN, and building DL frameworks. New Keras support for MxNet is great too!
Deep Learning AMI is just an Ubuntu image with nothing installed :)
Two weeks ago I got excited at a presentation from two AWS architects in Cambridge telling us how great it is to use Amazon cloud services because they come with everything pre-installed and you could focus on doing your research.
I did choose the p2.xlarge AMI in Ireland to do deep learning on image data. I tried following several tutorials I found on Amazon websites and smart IT blogs to be successful. The disappointment started with Jupyter not being installed. Then trying to do simple Python import commands resulted in Tensorflow not found, MXnet not found and including other core packages required.
After installing Jupyter and lots of other libraries needed to get tensorflow-gpu support to work it turned out that the Cuda library is not properly installed. See also other reviews.
I have spent now a full day installing stuff from scratch which cost me a lot of time and 10$ in cloud services without much progress. I am not going to give up and will soon be a deep learning professional IT architect. Thank you Amazon for your enticing marketing and despicable software services! Lesson learned :)
Getting started REALLY required
I need to get the following:
- CUDA (8.0)
- cuDNN (6)
- Tensorflow (latest)
It is extremely unclear how to get started activating just that.
Also - I may be mistaken, but only tensorflow was installed, not tensorflow-gpu, so I really don't see the point in the pre-installed environments here...
Deep Learning AMI Ubuntu: Free tier or not Free tier
Tried to launch the instance in subj, which is marked as 'free tier eligible' and got the message - '...not eligible for free usage tier'. Bug? User error? else?
Got Tensorflow working via Tensorflow3
Per the other reviews, the default tensorflow in ~/src/tensorflow was problematic.
However, we've had no trouble using the Python3+ compilation, even with Python2.7:
~/src/tensorflow3/
It's a reasonable work-around until an updated AMI is released.
Tensorflow does not work with Jupyter
import tensorflow fails from a jupyter notebook in both python2 and python3. It was successful on the python command line, however.
Cannot load tensorflow
Got it up and running and ran "import tensorflow" -- no luck though.
Traceback (most recent call last):
File "/home/ubuntu/deployments/data-pipelines/scratch/function_test.py", line 1, in
import tensorflow as tf
File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 24, in
from tensorflow.python import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 49, in
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Process finished with exit code 1
Tensorflow doesn't work.
I am using this AMI with a p2 xlarge instance. When I try to test tensorflow, I got the following message:
ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime. | https://aws.amazon.com/marketplace/reviews/reviews-list/B06VSPXKDX?sort=NEWEST&filter=ALL&page=1 | CC-MAIN-2021-31 | refinedweb | 728 | 50.23 |
Download ASCII_Art.zip (16 KB)
I’ve always wanted to write an ASCII Art program, if only for the fun of it. This post takes you through the different steps you need to write your own, or you can pull apart my source code. Full code listings are provided. I’ve tried to make this project as extensible as possible, so creating your own custom ASCII brush should be trivial.
Our first step is to read in an image and get the colour for each pixel
Bitmap bmp = (Bitmap)Bitmap.FromFile(inputPath); for (int y = 0; y < bmp.Height; y++) { for (int x = 0; x < bmp.Width; x++) { Color color = bmp.GetPixel(x, y); // todo... } }
Next, we need a method to convert our pixel’s colour into an ASCII character. We should wrap this logic up into an object, lets call this our “TextBrush”. Each text brush needs a method that takes a colour and returns a character. We can declare this using an interface:
public interface ITextBrush { char GetCharacter(Color color); }
We can now build our brush class, implementing the interface we just created:
public class DemoBrush : ITextBrush { public char GetCharacter(Color color) { float brightness = color.GetBrightness(); if (brightness > 0.9f) return ' '; else if (brightness > 0.8f) return '.'; else if (brightness > 0.7f) return '-'; else if (brightness > 0.6f) return '"'; else if (brightness > 0.5f) return '*'; else if (brightness > 0.4f) return '+'; else if (brightness > 0.3f) return '='; else if (brightness > 0.2f) return '#'; else if (brightness > 0.1f) return '$'; else return '&'; } }
The final part is to write the character to a file as we loop through the pixels:
using (TextWriter tw = new StreamWriter(outputPath)) { ITextBrush brush = new DemoBrush(); Bitmap bmp = (Bitmap)Bitmap.FromFile(inputPath); for (int y = 0; y < bmp.Height; y++) { for (int x = 0; x < bmp.Width; x++) { Color color = bmp.GetPixel(x, y); char c = brush.GetCharacter(color); tw.Write(c); } tw.WriteLine(); } }
This provides the basics of an ASCII Art program, but there are few things we need to fix up before this will work correctly. Firstly, this uses a one-character to one-pixel relationship which causes the ASCII images to be huge in comparison to the original JPEG. Secondly, a pixel is always square whereas a character usually occupies a rectangular space. This causes our image to be stretched vertically.
Both of these problems can be solved by resizing the image prior to processing. I usually reduce the width and height to 25% and scale the height down futher to 63% to correct the skew. The exact figures to use will depend on the font and size you choose. I'm using Lucinda Console at 8pt.
I used the following method to resize the bitmap:
private Bitmap ResizeBitmap(Bitmap bmp, int width, int height) { Bitmap result = new Bitmap(width, height); using (Graphics g = Graphics.FromImage((Image)result)) { g.DrawImage(bmp, 0, 0, width, height); } return result; }
Here’s the final result:
Full source code in C# can be downloaded here:
Download ASCII_Art.zip (16 KB) | https://codeoverload.wordpress.com/2010/07/17/ascii-art/ | CC-MAIN-2019-18 | refinedweb | 503 | 67.76 |
Warnings
From Nemerle Homepage
Nemerle has the concept of warning levels (specified by -warn:levelnumber). The default warning level is 4, which is quite conservative.
Some warnings are numbered. These warnings can be disabled using -nowarn:num option of ncc:
- N0028 - (level 4) 'function declaration' has the wrong signature to be an entry point
- N0105 - (level 3) The using directive for 'namespace' appeared previously in this namespace
- N0114 - (level 2) 'function1' hides inherited member 'function2'. To make the current method override that implementation, add the override keyword. Otherwise add the new keyword.
- N0168 - (level 3) The variable 'var' is declared but never used
- N10001 - (level 4) Cast is unnecessary
- N10002 - (level 5) Pedantic checks for illegal characters in input stream
- N10003 - (level 4) Other global unused member warnings
- N10004 - (level 5) Warnings about usage of bit operations on enums without correct attribute
- N10005 - (level 4) warnings about ignoring computed values
A possibly more up to date list can be displayed by providing the -warn-help option to ncc.
Nemerle > Tools > NCC | http://nemerle.org/Warnings | crawl-002 | refinedweb | 171 | 50.16 |
I have an auto reply sms Android application I built and I don't want the auto reply (sent sms) to show in the default messaging app. I have searched and searched and couldn't find an answer. Is there a way to bypass writing the sent sms into the default messaging app?
Here my BroadcastReciever I am using to get the data and send out the message
public class SmsReceiver extends BroadcastReceiver {
ParseUser user = ParseUser.getCurrentUser();
// Auto reply message composed of the current reply and url from that business
String msg = user.getString("myCurrentReply") + " " + user.getString("couponUrlChosen");
List smsFromList = user.getList("smsFrom");
String userName = (String) user.get("username");
@Override
public void onReceive(final]);
}
final String pno = smsMessage[0].getOriginatingAddress();
user.put("lastSmsFrom", pno);
user.saveInBackground();
// show first message
Toast toast = Toast.makeText(context, "Received SMS: " + smsMessage[0].getMessageBody(), Toast.LENGTH_LONG);
toast.show();
// Check Phone Number from SMS Received against Array in User Row
ParseQuery<ParseObject> query = ParseQuery.getQuery("_User");
Log.d("Username: ", userName);
query.whereEqualTo("username", userName);
query.whereContainedIn("lastSmsFrom", smsFromList);
query.findInBackground(new FindCallback<ParseObject>() {
public void done(List<ParseObject> smsList, ParseException e) {
if (e == null) {
Log.d("Errors", "none");
if (smsList.size() == 0) {
// Send SMS
sendSms(pno, msg);
// Add Phone number to smsFrom in currentUsers Row
user.addUnique("smsFrom", pno);
// Save Phone Number in Array
user.saveInBackground();
Log.d("List size: ", " " + smsList.size());
}
} else {
Log.d("Error Message: ",
e.getMessage());
}
Log.d("Already sent to this number today. ", " " + smsList.size());
}
});
}
private void sendSms(String phonenumber, String message) {
SmsManager manager = SmsManager.getDefault();
manager.sendTextMessage(phonenumber, null, message, null, null);
}
}
Prior to KitKat, SMS sent using
SmsManager require the app sending the message to insert it into the Provider, so it would just be a matter of omitting that.
Starting with KitKat, any app that is not the default SMS app and uses
SmsManager to send messages will have the messages automatically written to the Provider for it by the system. There's no way to prevent this, and, furthermore, the app won't be able to delete those messages, either, as it won't have write access to the Provider.*
The app that is the default SMS app is responsible for writing its outgoing messages, so it would be able to omit that step. The system does no automatic writes for the default SMS app.
* There is a security hole in 4.4 only, by which a non-default app can gain write access to the Provider. It is detailed in my answer here, but it will not work in versions after KitKat. | https://codedump.io/share/bPhhTlz55ex9/1/block-sent-sms-from-being-logged-in-default-messaging-app | CC-MAIN-2017-51 | refinedweb | 425 | 51.55 |
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi all,
we are using Jira Service Management with Insight, where we have put user's assets.
I'm looking for a way to generate some company standard documentation based on the data stored in Insight.
I found this plugin:
Is there some way to get data from Insight instead of Jira issues?
I resolved using a scriptrunner postfunction that read data from Insight, perform some logic, replace this data in a template with placeholder using SimpleTemplateEngine class and then attach the resulting document to the issue.
The template is in HTML, in this way the assignee can open directly in the browser and print in pdf (with a pdf printer) if it is necessary.
I would love to hear more about how you replaced data in a template with data from Insight
Hi Brandon,
in the scriptrunner postfunction I used something like this:
def assetListMap = getAssetMapFromInsightList(insightAssetList, insightObjectSchemaId, adminUsername)
def binding = [
data: new Date().format('dd/MM/yyyy') ,
reporter: issue.reporter.displayName,
assetList: assetListMap
]
def engine = new SimpleTemplateEngine()
def template = engine.createTemplate(templateFile.text).make(binding)
<% for (asset in assetList) { %>
<tr><td><%= asset.value1 %> <%= asset.value2 %></td></tr>
<% } %>
It really depends on what you need and how your data is organized.
Insight has some built-in reporting features (though I haven't explored them much) and some REST APIs.
And if you know what you are doing or are adventurous, you could fetch data directly from the db (though that is not generally recommended).
Some of the bigger/expensive reporting solutions like eazyBI might already understand the Insight data and allow you to generate reports.
But most of the more simple/cheap/free ones like the one you linked probably will not.
Thank you, Peter, but I don't want to create a report from Insight tables.
What I want is to create a new word document, or pdf, using a template filled with data from Insight.
For example, I want to give a new employee a new pc, a monitor and a tablet. I need to create a new document with the assets associated with this new user in Insight, give it to him to sign it, and store it in our archive.
Just use the built-in label template builder.
Create a template the size of a full page of paper and print. | https://community.atlassian.com/t5/Jira-questions/Generate-documents-from-Insight-tables/qaq-p/1675571 | CC-MAIN-2022-05 | refinedweb | 401 | 53.31 |
$ cnpm install meta-client
Spatial media like Virtual Reality or Augmented Reality is perceived in such a fundamentally different way than computer graphics as we know them that we need to find new ways to describe it. This is an approach.
Furthermore this is an attempt to create the most accessible virtual reality library possible.
This example is written in three lines that can't be anymore intuitive.
import {Ground, Cube, on} from 'meta-client'; new Ground(); on('touch', (data) => new Cube().set(data.position));
You need to have Node.js () installed.
(If you don't know how to use the terminal watch this.)
mkdir meta && cd meta
npm init && npm install parcel-bundler meta-client
touch index.html index.js
<html> <body> <script src="./index.js"></script> </body> </html>
import {Ground, Cube, on} from 'meta-client'; new Ground(); on('touch', (data) => new Cube().set(data.position));
parcel index.html
Open in your browser.
Alternatively you can also put all steps together in a single line like this:
touch index.html index.js && echo '<html><body><script src="./index.js"></script></body></html>' >> ./index.html && echo "import {Ground, Cube, on} from 'meta-client';\nnew Ground();\non('touch', (data) => new Cube().set(data.position));" >> ./index.js && npm init -y && npm i parcel-bundler meta-client && parcel index.html
It can be very useful to start with a working example.
To use the examples clone a full copy of Meta.js:
git clone
Navigate to the examples directory:
cd meta/examples
Select the example you want to work with:
cd 1
Install and start the example:
npm start
git clone npm install npm run build
Read the Wiki to learn how to use Meta.js.
Read the full code documentation.
Join the Slack channel to talk about (virtual) space.
As any other software this is based on thousands of layers of programming abstraction. The upper layers on which this is build on are Three.js (Javascript 3D library) and Oimo.js (Javascript physics engine).
I probably learned most about space from Walter Lewin.
I probably learned most about toys from Julian Summer Miller.
That's basically what brought me here.
MIT
Let's start to redefine space! | https://developer.aliyun.com/mirror/npm/package/meta-client | CC-MAIN-2020-40 | refinedweb | 365 | 60.41 |
Up to Design Issues
Reification in this context means the expression of something in a language using the language, so that it becomes treatable by the language. RDF graphs consist of RDF statements. If one wants to look objectively at an RDF graph and reason about it is using RDF tools, then it is useful, at least in theory, to have an ontology for describe RDF statements. This note described one suitable ontology.
When RDF extended to N3, then one way of discussing the semantics is to describe N3 documents in RDF. This document does both.
The namespace used is
<> , for
which here we use the
rei: prefix. Also, we use
the ex: prefix for the namespace
<>.
RDF terms are nodes in the RDF Graph. In RDF, these can be of three types: named nodes, blank nodes, and literals. We will also call named nodes symbols.
Named nodes are named by URI strings, so a named node can be defined simply by its URI string. The symbol which in N3 is written as <> would be described as the RDF node:
[ a rei:Symbol; rei:uri "" ]
Blank nodes (or Bnodes for short) are nodes do not have URIs. When describing a graph, we can say that a node is blank by saying that it is in the class rei:BNode.
[ a rei:Bnode ]
This blank node in the description is a description of a blank node in the original graph. They are node the same blank node. We could in fact name the blank node for the purposes of description:
ex:bnode1 a rei:BNode.
Literals in an RDF graph are defined only by their value, just as symbols are defined by their URIs. When using RDF to describe RDF, RDF literals can clearly be used to give the value:
[ a rei:Literal, rei:value "The quick brown fox"]
In fact, the domain of rei:value is rei:Literal, so it is not necessary to explicitly state that something is a literal, one can just write:
[rei:value "The quick brown fox"]
A RDF statement is defined by its three parts, known as
subject, predicate and object, each of which is a term. In
RDF, neither the subject nor the predicate may be a Literal.
The statement which in N3 is
ex:joe ex:name "James
Doe". would be described as
[ a rei:Statement; rei:subject [rei:uri ""]; rei:predicate [rei:uri ""]; rei:object [rei:value "James Doe"] ]
In fact, the fact that it is a rei:Statement would have been clear as the domains of rei:subject, rei:predicate and rei:object are all rei:Statement.
An RDF graph is a set of statements. RDF itself doesn't have
the concept of a set, it only has the concept of an ordered
list (RDF collection). However, the OWL relation owl:oneOf
related a class to a list of its members, and so we can form
a set the set containing 3 4 and 5 as
[ owl:oneOf (3 4
5)] . using this convention, we can describe an RDF
Graph as the set of statements. For example, the graph whose
contents which would be written, in N3 as
ex:joe ex:name "James Doe". ex:jane ex:name "Jane Doe".
would be described in this ontology as:
{ a rei:RDFGraph; statements [ owl:oneof ( [ a rei:Statement; rei:subject [rei:uri ""]; rei:predicate [rei:uri ""]; rei:object [rei:value "James Doe"] ] [ a rei:Statement; rei:subject [rei:uri ""]; rei:predicate [rei:uri ""]; rei:object [rei:value "Jane Doe"] ] )
Using the set may be ungainly, but it ensures that two RDFGraphs which contain the same statements are demonstrably the same in their reified form. (We envisage that further developments systems may have explicit processing for sets, and N3 syntax could even be extended to include set literal syntax, which would of course make this easier.)
The use of an explicit string as the URI for the subject above is also ungainly, compared with the use in the original N3 where a prefixed symbol can be used. Why is the string given explicitly, instead of writing it as symbol?
Let's suppose for a moment that we just use the symbol, not the string for the URI:
#Wrong: [ a rei:Statement; rei:subject ex:joe; rei:predicate ex:name; rei:object [rei:value "James Doe"] ]
This should be a description of an RDF statement. It must preserve the original graph, including the URIs it used. The statements which would be described as
[ rei:subject ex:joe; # Wrong rei:predicate ex:name; rei:object [rei:value "James Doe"]]
and
[ rei:subject ex:jd1; # Wrong rei:predicate ex:name; rei:object [rei:value "James Doe"]]
are different graphs, even if "" and "" are two URIs for the same person. However, if the system knows that <ex:jd1> and <ex:joe> are in fact thhe same person, then the second statement can be derived from the first. It is important (in our application) to be able to know which name a graph used for something. The form of reification which is provided by the original RDF specification is not suitable, because it loses that information.
N3 extends RDF to allow graphs themselves to be another form of literal node. A graph can be quoted inside another graph, as one of the terms of a statement:
ex:jane ex:knows { ex:joe ex:name "James Doe" }.
Jane knows "joe's name is 'James Doe'". As above, the quotation effect is important. Jane's knowledge is in these terms. Even though ex:jd1 and ex:joe may be the same person, Jane might not know that, and so may not know that ex:jd1's name is James Doe.
An N3 formula also introduces quantification. Variables are introduced by allowing a given set of symbols to be universally quantified over the formula, and another set to be universally quantified.
A formula is described by three sets: the set of statements (the graph), the set of universals and the set of existentials. The semantics of an N3 formula are that the universal quantification is applied to the result of applying the existential quantification to the conjunction of the statements. (a la forall x: exists c such that ...). The N3 formula
@keywords a. [] a car. { ?x a car } => { ?x a vehicle }.
(roughly, There is a car. Anything which is a car is a vehicle) is shorthand for
@keywords a. @forAll :x. @forSome :c. :c a car. {x a car } => {x a vehicle}.would be described as a formula whose universals were just x, whose existentials were just c, and whose statements was the implication - a statement whose subject and object were themselves formulae. This follows in the code below, obtained by passing the code above through
cwm --reify.The output is:
@prefix : <> . @prefix owl: <> . @keywords a. [ a <>; universals[owl:oneOf( "" ) ]; existentials [owl:oneOf( "" ) ]; statement [ owl:oneOf([ object [uri "" ]; predicate [uri "" ]; subject [uri "" ] ] [ object [ universals [ owl:oneOf () ] ]; existentials [ owl:oneOf () ]; statements [ owl:oneOf ( [ object [uri "" ]; predicate [uri "" ]; subject[ uri "" ] ] ) ]; predicate [uri "" ]; subject [ universals [owl:oneOf ()]; existentials [owl:oneOf () ]; statements [owl:oneOf ( [ object [uri "" ]; predicate [uri "" ]; subject [uri "" ] ] ) ]] ] ) ] ].
Note that in this mode, the formula is not only described, but it is also stated to be a Truth. To simply describe a formula as existing doesn't say anything. Formulae are abstract things, to say one exists doesn't add anything. Some would say, all formulae exist, just as all lists exist. However, to assert that one is true asserts its contents. The RDF file output above has, by definition of the terms in the reification namespace, the same meaning as the full N3 formula from which it is produced. It does to any agent which understands the meaning of the reification namespace.
Up to Design Issues
Tim BL | http://www.w3.org/DesignIssues/Reify.html | CC-MAIN-2015-32 | refinedweb | 1,295 | 70.43 |
Which of the following lines can be inserted at line 2 to print true? (Choose all that apply)
public class Main{ 1: public static void main(String[] args) { 2: // INSERT CODE HERE 3: } 4: private static boolean test(Predicate<Integer> p) { 5: return p.test(2); 6: } }
A, C, F.
Lambda expressions with one parameter can omit the parentheses around the parameter list, A and C are correct.
The return statement is optional when a single statement is in the body, therefor F is correct.
B is incorrect because a return statement must be used if braces are included around the body.
D and E are incorrect because the type is Integer in the predicate and int in the lambda. | http://www.java2s.com/Tutorials/Java/OCA_Mock_Exam_Questions/Q2-8.htm | CC-MAIN-2017-43 | refinedweb | 120 | 63.29 |
Featured Replies in this Discussion
I think you forgot to change URL to string, so you can't apply String method replaceAll() to url
This can help you
import java.net.URL; import java.net.MalformedURLException; public class ReplaceChar { public static void main(String[] args) { try { URL url = new URL(""); url = new URL(url.toString().replaceAll("e", "i")); System.out.println(url.toString()); } catch(MalformedURLException mue) { mue.printStackTrace(); } } }
Hi, I'm trying to use the replace() method to take out all instances of " "
, (space) with an underscore, ("_"). I'm doing this because I'm using URLs to connect to a servlet. I keep getting this error:
replace(char,char) in java.lang.String cannot be applied to (java.lang.String,java.lang.String)
url.replace(" ", "_");
This is how I wrote my code:
url = url.replace(" ", "_");
What am I doing wrong? Any suggestions would be most welcome.
Thanks,
chuck
Do you notice something about the highlighted protions of the error message?
change
url.replace(" ", "_"); to
url.replace(' ', '_');
method String.replace required char[] data type in arguments. Try:
url.replace(" ".toByteArray(), "".toCharArray());
P.S.: sorry my english
No it doesn't, and besides, as of Java 5 the way he tried it in the OP will work, but since this thread is a four year dead zombie, that doesn't matter anymore, and neither does the post I'm responding to. Killing this zombie now.
method String.replace required char[] data type in arguments. Try:
url.replace(" ".toByteArray(), "".toCharArray());
P.S.: sorry my english
Bad advice exactly showing that you did not read whole thread otherwise you would have seen that original poster then said that Java Microedition API was in use. Unlike J2SE JME has limited classes and that also mean less methods to use. | https://www.daniweb.com/software-development/java/threads/73139/string-replace-method | CC-MAIN-2015-27 | refinedweb | 299 | 67.45 |
02 March 2007 20:44 [Source: ICIS news]
ARLINGTON, Virginia (?xml:namespace>
?xml:namespace>
Flex-fuel vehicles can run on gasoline or ethanol or combinations of both.
Elisio Contini, head of the strategic management office for the Brazilian Ministry of Agriculture, told the annual US Department of Agriculture outlook forum that
Contini said that in 2003, when the Brazilian government decided to stimulate flex-fuel auto sales, there were only 48,000 flex-fuel autos sold in the country. By 2004, that figure had climbed to 330,000 and reached 865,000 flex-fuel vehicles sold in 2005.
Last year flex-fuel vehicles reached sales of 1.45m units, Contini said, nearly 80% of all auto sales in the country.
“The driving force in the Brazilian ethanol market is the flexible-fuel vehicle,” Contini said.
Contini said that
Some 51% | http://www.icis.com/Articles/2007/03/02/9010889/brazil-cites-flex-fuel-vehicle-for-ethanol-growth.html | CC-MAIN-2013-48 | refinedweb | 140 | 55.34 |
Splinter
Simplifies Web
App Testing
How To Secure And Test
A Few Tips For
Web ApplicAtions Scaling Up Web
Performance
Getting Started
With PHP,
The Popular
Programming
Language
Register Now! • info@2ndQuadrant.com
India +91 20 4014 7882 • USA +1 650 378 1218 • UK +44 870 766 7756
Experience Innovation
without lock-in!
2ndQPostgres
More speed
More reliability
More scalability
ISSN-2456-4885
Admin
29 A Primer on Software Defined
Networking (SDN) and the
OpenFlow Standard
35 Taming the Cloud:
Provisioning with Terraform
40 Visualising the Response
Time of a Web Server
Using Wireshark
42 DevOps Series Creating a
Virtual Machine for Erlang/
OTP Using Ansible
47 An Introduction to govcsim
(a vCenter Server Simulator)
57 A Glimpse of Microservices
with Kubernetes and Docker
51
Developers
59 Selenium: A Cost-Effective
Serverless Architectures:
Test Automation Tool for Demystifying Serverless Computing
Web Applications
65 Splinter: An Easy Way to Test
Web Applications
73 Crawling the Web with Scrapy
77 Five Friendly Open Source
Tools for Testing Web
Applications
81 Developing Research Based
Web Applications Using Red
Hat OpenShift
85 A Few Tips for Scaling Up
Web Performance
90 Regular Expressions in
Programming Languages:
61
The Story of C++
Using the Spring Boot Admin UI for
FOR U & ME Spring Boot Applications
88 Open Source Enables
PushEngage to Serve 20
Million Push Notifications
96
Each Day!
Eight Top-of-the-Line Open
R EGUL AR FEATURES
Source Game Development 06 FOSSBytes 18 New Products 108 Tips & Tricks
Tools
MiSSing iSSuES
back iSSuES
Kits ‘n’ Spares
new delhi 110020
ph: (011) 26371661, 26371662
nEwSStand diStribution
ph: 011-40596600
advErtiSEMEntS
mumbai
ph: (022) 24950047, 24928520
beNGaluRu
ph: (080) 25260394, 25260023
PuNe
ph: 08800295610/ 09870682995
GuJaRaT
ph: (079) 61344948
chiNa
OpenGurus SiNGaPORe
publicitas Singapore pte ltd
105 Communication Protocols for the ph: +65-6836 2272
Linux is TaiwaN
built on a lot Columns J.k. Media, ph: 886-2-87726780 ext. 10
of past 16 CodeSport
experience” uNiTeD STaTeS
20 Exploring Software: Importing E & tech Media
ph: +1 860 536 6677
GNUCash Accounts in GNUKhata E-mail: veroniquelamarque@gmail.com
24
M
,D interviews, verbatim quotes, or unless otherwise explicitly mentioned,
Karanbir Singh, project
RA
CD
,1
Tea
P4
leader, CentOS 3.0 unported license a month after the date of publication. refer to
This distro is designed to be fast, easy to use and provide a minimal
s:
m
nt
e-m
@e
Syst
bility whatsoever is taken for any loss due to publishing errors. articles
fy.in
ended
tended, and sh that cannot be used are returned to the authors if accompanied by a
Recomm
s unin oul
c, i d be
dis
self-addressed and sufficiently stamped envelope. but no responsibility
att
the
rib
on
ute
terial, if found
is taken for any loss or delay in returning the material. disputes, if any,
d to t
he complex n
September 2017
nab
atu
re
tio
ec
of
j Int
ob ern
Any t dat e
Note: a.
110
community project, supported by a non-profit organisation
comprising elected contributors.
In
one 1440 1150 uS$ 120
.
ent cas
kindly add ` 50/- for outside delhi cheques.
em e th
lac is D
rep VD
please send payments only in favour of eFY enterprises Pvt ltd.
free do
or a es n
work p y.in f ot
roperly, write to us at support@ef
non-receipt of copies may be reported to support@efy.in—do mention
your subscription number.
version. One of the initial contributors worked on the fix for npm dependency GNOME’s disk utility to get
errors and pm2 support. Another notable contribution improved the Web console by large file support in v3.26
adding personalisation options. Though GNOME’s disk utility will
Previous versions of PiCluster were used to display a server icon on specific receive an update to version 3.26 in
operations. However, this new build shows the operating system’s or distribution’s September, it is now expected to receive
logo for each server. There is also an automatic container failover that helps you to features such as disk resize and repair
automatically migrate a container to another host after three failed attempts. functions. The new version will also get
Many developers have started contributing to the PiCluster project. You can large file support to handle giant files.
access the PiCluster 2.0 code through its GitHub repository. It also includes a The new disk utility will be launched
detailed readme to help you deploy the tool effectively. as part of the GNOME 3.26 release.
Kai Lüke, the developer of GNOME
ActiveRuby debuts with over 40 gems and frameworks Disk Utility, has published a blog post
ActiveState, the open source languages company, has graduated its Ruby release that highlights the new features in the
to the first beta version. The upcoming release. The latest version
commercially supported Ruby is touted to offer a file system resize.
distribution is supposedly far better Generally, it is not possible to estimate
than other available options. the exact space occupied by a specific
Ruby is actively used by a file system. So the new disk utility
diverse set of developers around package will resize file systems that are
the world. The language is in partitions. The future releases will also
preferred for its complete, simple, receive improved support for both NTFS
extensible and portable nature. and FAT file system resizing.
ActiveRuby is based on Ruby v2.3.4 and includes over 40 popular gems and The updated GNOME disk utility
frameworks, including Rails and Sinatra. There is also seamless installation will also have the ability to update
and management of Ruby on Windows to reduce configuration time as well as the window for power state changes.
increase developer and IT productivity. Additionally, the new version will
Enterprise developers can adopt the latest Ruby distribution release internally to prompt users when it stops any running
host Web applications. The Canadian company claims that ActiveRuby is far more jobs while closing an app. It will debut
secure and scalable for enterprise needs. The beta release of the language has fixed with better support for probing and
some issues of gem management to enhance security. unmounting of volumes.
The new ActiveRuby version also includes non-GPL licensed gems. All major GNOME developers will enable an
libraries for database connectors, such as MongoDB, Cassandra, Redis, PostgreSQL app menu entry in the new disk utility.
and MySQL, are also included. Additionally, ActiveRuby beta introduces cloud This will help you create an empty disk
deployment capabilities with Amazon Web Services (AWS) along with all the image. Likewise, you will get the option
necessary integration features for AWS. to check the displayed UUIDs for selected
“For enterprises looking to accelerate innovation without compromising volumes. GNOME 3.26 is scheduled to go
on security, ActiveRuby gives developers the much-needed commercial- live on September 13. You can download
grade distribution,” said Jeff Rouse, director of product management, Disk 3.25.4, which has been released for
ActiveState, in a statement. testing. Its source tarball is available for
ActiveRuby is currently available only for Windows. The release for Mac and download, and you can use it with your
Linux is supposed to roll out later in 2017. You can download the beta through the GNU/Linux distribution.
official ActiveState website.
Developers ask Adobe to World’s first software-defined data centre gets launched in India
open source Flash Player Pi Datacenters, India’s native enterprise-class data centre and cloud services
Many developers have not provider, has launched Asia’s largest Tier IV-certified data centre in Amaravati,
welcomed Adobe’s decision to end Vijayawada. The company claims the
support for the Flash Player plugin new offering, called Pi Amaravati, is the
in 2020. Thus, a petition seeking world’s first software-defined data centre.
the open source availability of “Pi Amaravati is a major milestone
Flash Player has been released for the entire team,” said Kalyan
on GitHub. Muppaneni, founder and CEO, Pi
While Adobe may have plenty Datacenters. The new data centre uses
of reasons to kill Flash, there the OpenStack virtualisation framework
are a bunch of developers who to deliver an advanced computing, storage and networking experience. It is capable
want to save it. GitHub user Juha of offering league modular colocation and hosting services with a capacity of up to
Lindstedt, the developer who 5,000 racks. Also, the company’s enterprise cloud platform Habour1 is powered by
has filed the petition, believes open source provider SUSE.
that Flash is an important part Vijayawada-based Pi Datacenters has recently been awarded Uptime Institute
of the Internet’s history. Killing Tier IV design certification — known as the highest standard for infrastructure,
support for Flash means that future functionality and capacity.
generations would not be able to “With the launch of Pi Amaravati, we will be offering highly innovative and
access old games, websites and tailored solutions with Infrastructure-as-a-Service (IaaS), Platform-as-a-Service
experiments, Lindstedt has said. (PaaS), Disaster-Recovery-as-a-Service (DRaaS) and a host of other cloud-enabled
product and services to our esteemed partners,” Muppaneni said.
Along with launching the Pi Amaravati data centre, Pi Datacenters has entered
into a Memorandum of Understanding (MoU) with companies like PowerGrid,
IRCTC, Mahindra and Mahindra Finance, Deutsche Bank and Unibic. These
partnerships will expand open source developments in the data centre space.
as plain text. This allows them to create custom watermarks for their documents.
Linux gets a preview Additionally, a new context menu is available to help users with footnotes, endnotes,
of Microsoft’s Azure styles and sections.
Container Instances The new version of Calc has support for pivot charts. Users can customise
Microsoft is adding a new service pivot tables and comment via menu commands. Impress helps users in specifying
to its cloud portfolio dubbed fractional angles while duplicating objects. There is also an auto save feature for
Azure Container Instances. While settings to help in duplicating an operation. This is a part of Calc as well as Impress.
the development is yet to receive LibreOffice 5.4 is available for download for Mac OS, Linux and Windows
Windows support, a public through its official website. The organisation has also improved the LibreOffice
preview for Linux containers online package with better performance and a more responsive layout. You can
is out to help developers access the latest LibreOffice source code as Docker images.
create and deploy containers
without the hassle of managing OpenSUSE Leap 42.3 is out with new KDE Plasma
virtual machines. and GNOME versions
Microsoft claims that Azure OpenSUSE has released the new version of its Leap distribution. Debuted as
Container Instances (ACI) takes OpenSUSE Leap 42.3, the new release is based on SUSE Linux Enterprise (SLE) 12
only a few seconds to start. The Service Pack 3.
configuration window is highly The new update includes
customisable. Also, users simply hundreds of updated packages. There
need to select the exact memory is the new SUSE version that is
and count of CPUs that they need. powered by Linux kernel 4.4. The
Designed to work with development team has spent a good
Docker and Kubernetes, the new eight months in producing this rock-
service allows developers to solid Leap build.
utilise container instances and The most notable addition in
virtual machines simultaneously OpenSUSE Leap 42.3 is the KDE
in the same cluster. Microsoft is Plasma 5.8 LTS desktop environment.
also releasing ACI connector for Users have the option to either pick
Kubernetes to help the deployment the latest KDE version or go with
of clusters to ACIs. GNOME 3.20. There is also a provision to install other supported environments.
“While Azure Container Apart from the new desktop environment options, the OpenSUSE Leap
Instances are not orchestrators update comes with a server installation profile and includes a full-featured text
and are not intended to replace mode installer. The platform also officially supports Open-Channel solid-state
them, they will fuel orchestrators drives through the LightNVM full-stack initiative. Likewise, there are numerous
and other services as a container architectural improvements for 64-bit ARM systems.
building block,” said Corey The OpenSUSE team has provided PHP5 and PHP7 support in the latest Leap
Sanders, director of compute, distro. There is also an updated graphics stack based on Mesa 17, and GCC 4.8.5 as
Azure, in a statement. a default compiler. Considering the list of new changes, OpenSUSE 42.3 appears
The company executives to be an advanced Linux version. It also comes preloaded with packages for
are hoping that ACIs will be streaming media, editing graphics, creating animation, playing games and building
used for fast bursting and 3D printing projects.
scaling. Virtual machines can The new OpenSUSE Leap version is available for download for both 32-bit and
be deployed alongside the cloud 64-bit systems. Existing OpenSUSE Leap users can upgrade their systems using the
to deliver predictable scaling built-in update system.
so that workloads can migrate
back and forth between two Google blocks Android spyware family Lipizzan
infrastructure models. Google’s Android Security and Threat Analysis teams have jointly discovered a
Windows support for ACI is new spyware family that gets distributed through various channels including Play
likely to be released in the coming Store. Called Lipizzan, the software has been detected in 20 apps that have been
weeks. In the meantime, you can test downloaded on fewer than 100 devices.
it on your Linux container system. Unlike some of the earlier spyware, Lipizzan is a multi-stage spyware that can be
used to monitor and exfiltrate email, text messages, location, voice calls and media. It
Microsoft is now a part of “We hope these improvements will help you make your first contribution, start
the Cloud Native Computing a new project, or grow your community,” GitHub concluded in its blog.
Foundation First launched in October 2007, GitHub is so far used by more than 23
Continuing its developments around million people around the globe. The platform hosts over 63 million projects
open source, Microsoft has now with a worldwide employee base of 668 people.
joined the Cloud Native Computing
Foundation (CNCF). The latest Mozilla aims to enhance AI developments with
announcement comes days after open source human voices
the Redmond company entered While elite digital assistants like Alexa, Cortana, Google Assistant and Siri have
the board of the Cloud Foundry so far been receiving inputs from
Foundation. users via the spoken word, Mozilla
“Joining the Cloud Native is planning to enhance all such
Computing Foundation is another existing artificial intelligence (AI)
natural step on our open source developments by open sourcing
journey, and we look forward to human voices on a mass level. The
learning and engaging with the Web giant has already launched a
community on a deeper level as project called Common Voice to
a CNCF member,” said Corey build a large-scale repository of
Sanders, partner director, Microsoft, voice recordings for future use.
in a joint statement. Mozilla has started capturing human voices since June to build its open source
Microsoft has chosen the database. The database will be live later this year to “let anyone quickly and easily
Platinum membership of the CNCF. train voice-enabled apps” that go beyond Alexa, Google Assistant and Siri.
Gabe Monroy, a lead product “Experts think voice recognition applications represent the ‘next big thing’.
manager for containers on Microsoft The problem is the current ecosystem favours Big Tech and leaves out the next
Azure and former Deis CTO, is wave of innovators,” said Daniel Kessler, senior brand manager, Mozilla, in a
joining CNCF’s governing board. recent blog post.
Led by the core team members Tech companies are presently using different voices to teach computers to
of the Linux Foundation, CNCF understand the variety of languages for their solutions. But the data sets with the
has welcomed the new move voice collections are mostly proprietary as of now. Therefore, a large number
of Microsoft. The non-profit of developers have no access to voice recording samples to test their own
organisation considers it a voice recognition projects. This ultimately leads to a limited number of apps
“testament to the importance and understanding our speech.
growth” of cloud technologies and Things are appearing to be changing with Common Voice. “The time has
believes the Windows maker’s come for an open source data set that can change the game. The time is right for
commitment to open source Project Common Voice,” Kessler stated. Mozilla is asking individuals to donate
infrastructure is a ‘significant asset’ their voice recordings either on the Common Voice Web page or by downloading a
to its board. dedicated iOS app. Once you are ready with your recording, you need to read a set
“We are honoured to have of sentences that will be saved into the system.
Microsoft, widely recognised as The recorded voices, which would come in a variety of languages with various
one of the most important enterprise accents and demographics, will be provided to third-party developers.
technology and cloud providers in In addition to simply receiving voice donations, Mozilla has built a model by
the world, join CNCF as a platinum which users will validate the recordings that are stored in the system. This process
member. Its membership, along will help train an app’s speech-to-text conversion capabilities.
with other global cloud providers All this will enable not just one or two but 10,000 hours of validated audio
that also belong to CNCF, is that will power tons of AI models in the near future. Notably, recordings received
a testament to the importance through the Common Voice initiative will be integrated into the Firefox browser as
and growth of cloud native well. But the main purpose of this exercise is to provide a public resource.
technologies,” stated Dan Kohn,
executive director of the Cloud
Native Computing Foundation. For more news, visit
A
s we have been doing over the last couple of term in the document; or you can employ Tf-Idf
months, we will continue to discuss a few count for each term-review combination, etc.
more computer science interview questions You had used a random forests classifier for
in this column as well, particularly focusing on topics sentiment classification. Now you are told that your
related to data science, machine learning and natural vocabulary size is 100,000. Would this change your
language processing. It is important to note that many decision about which classifier to use?
of the questions are typically oriented towards practical 3. For problem (1), you had decided to use a support
implementation or deployment issues, rather than just vector machine classifier. However, now you are
concepts or theory. So it is important for interview told that instead of just doing binary classification
candidates to make sure that they get adequate of the reviews, you need to classify them as one of
implementation experience with machine learning/ five categories, namely: (a) strongly positive, (b)
NLP projects before their interviews. Data science weakly positive, (c) neutral, (d) weakly negative,
platforms such as Kaggle () host a and (e) strongly negative. You are given labelled
number of competitions that candidates can attempt data with these five categories now. Would you
to practice their skills on. Also, many of the data still continue to use the ‘support vector machine’
science or machine learning related academic computer (SVM) classifier? If so, can you explain how SVM
conferences host data challenge competitions such handles multi-class classification? If you decide to
as the KDD Cup (). Data switch from SVM to a different classifier, explain
science enthusiasts can sign on for these challenges and the rationale behind your switch.
hone their skills in solving real life problems. Let us 4. For the sentiment classification problem, other
now discuss a few interview questions. than the review text itself, you are now given
1. You are given 100,000 movie reviews that are additional data about the movies. This additional
labelled as positive or negative. You have been told data includes the reviewers’ names, address,
to perform sentiment analysis on the new incoming age, country of residence, date of review and the
reviews by classifying each review as positive or specific movie genre they are interested in. This
negative, which is a simple binary classification additional data contains both numeric and string
problem. Can you explain what features you would data, with some of the features being categorical.
use for this classification problem? Once you A country’s name is string data, and the movie
decide on your set of features, how would you go genre is string data which is actually categorical.
about selecting which classifier to use? What kind of data preprocessing would you do on
2. Let us assume that you decided to use the ‘bag this additional data to use it with your classifier?
of words’ approach in the above problem with 5. Generally, interviewers expect you to be familiar
each vocabulary term becoming a feature for your with some of the popular libraries that can be used
classifier. Essentially, you can construct a feature for data science. So some of the questions can
set where the dimensions of this set are the same be library-specific as well. In question (4), you
as the size of your vocabulary, and each feature may be asked to mention how you would convert
corresponds to a specific term in the vocabulary. categorical data to numeric form. Can you write a
The feature value can either be the count of the piece of Python code to do this conversion?
term or merely the presence or absence of the 6. Let us assume that you decided to use a SVM
classifier for the sentiment classification problem. You features which are on widely varying scales, you have
find that your classifier takes a long time to fit the training decided to do feature scaling. Should you do data scaling
data. How would you reduce the training time? List all the once for the entire training data set and then perform the
possible approaches. k-fold cross validation? Or should you do the feature
7. One of our readers suggested feature scaling/data scaling within each fold of cross-validation? Explain the
normalisation as a preprocessing step before you train reason behind your choice.
your model always. Is she correct? Is feature scaling or 13. You are given a data set which has a large number of
normalisation always needed in all types of classifiers? features. You are told that only a handful of these features
Why do you think feature scaling can help achieve faster are relevant in predicting the output variable. Will you use
convergence of your learning procedure? One of the well- Lasso regression or ridge regression in this case? Explain
known methods of feature scaling is Min-Max scaling. the rationale behind your choice. As a follow-up question,
By feature scaling, you are actually throwing away your when would you prefer ridge regression over Lasso?
knowledge of the maximum and minimum values that the 14. Decision tree classifiers are very popular in supervised
feature can take. Wouldn’t the loss of this information affect machine learning problems. Two well-known tree classifiers
the accuracy of your classifier on unseen data? If you are are random forests and gradient boosted decision trees. Can
using a decision tree classifier or random forests, should you you explain the difference between the two of them? As
still do feature scaling? If yes, explain why. a follow-up question, can you explain ensemble learning
8. Scikit-learn is a popular machine learning library available methods in general? When would you opt for an ensemble
in Python, which provides ready-made implementations classifier over a non-ensemble classifier?
of several classifiers such as decision tree, support vector 15. You are given a data set in which many of the variables
machine, random forests, logistic regression, multilayer are categorical string variables. You decided to encode the
perceptron, etc. These classifiers provide a ‘predict’ function, categorical variables with One Hot Encoding. Consider that
which predicts the output for a given data instance. They you have a variable called ‘country’, which can take any of
also provide a ‘predict_proba’ function, which returns the the 20 values. With One Hot encoding, you end up creating
probability for each sample (data instance) belonging to a 20 new feature variables in place of the single ‘country’
specific output class. For instance, in the case of the movie variable. On the other hand, if you use label encoding,
review sentiment prediction task, with two classes positive you convert the categorical string variable to a categorical
and negative, the ‘predict_proba’ function would return the numerical variable. Which of the two methods leads to the
probability of the sample belonging to the positive sentiment ‘curse of the dimensionality’ problem? When would you
category and negative sentiment category. When would prefer to go for One Hot encoding vs label encoding?
you use the ‘predict_proba’ function in your sentiment Please do send me your answers to the above questions. I
classification task? will discuss the solutions to these questions in next month’s
9. In the sentiment classification problem on the movie reviews column. I also wanted to alert readers about a new deep
data, you found that some of the reviews did not have the learning specialisation course by Prof. Andrew Ng coming
date, country of reviewer and the movie genre. How would up soon on the Coursera platform (.
you handle these missing data? Note that these features were org/specializations/deep-learning). If you are interested in
not numeric; so what kind of data imputation would make becoming familiar with deep learning, there is no better teacher
sense in this case? than Prof. Ng whose machine learning course on Coursera is
10. In the movie reviews training labelled data set, you are now being taken by more than a million students.
given certain additional data features that include: (a) the star If you have any favourite programming questions/software
rating reviewers give to the movie, (b) whether they would like topics that you would like to discuss on this forum, please send
to watch it again, and (c) whether they liked the movie. Would them to me, along with your solutions and feedback, at sandyasm_
you use these additional features in your training data to train AT_yahoo_DOT_com. Till we meet again next month, wishing all
your model? If not, explain why you wouldn’t. our readers wonderful and productive days ahead.
11. What is the data leakage problem in machine learning
and how do you avoid it? Does the scenario mentioned in
question (10) fall under the data leakage category? Detailed By: Sandya Mannarswamy
information on data leakage and its avoidance can be found The author is an expert in systems software and is currently
working as a research scientist at Conduent Labs India (formerly
in this well-written and must-read paper ‘Leakage in data
Xerox India Research Centre). Her interests include compilers,
mining: formulation, detection, and avoidance’ which was programming languages, file systems and natural language
presented at the KDD 2011 conference and is available at processing. If you are preparing for systems software interviews,. you may find it useful to visit Sandya’s LinkedIn group ‘Computer
Science Interview Training India’ at
12. You are using k-fold cross validation for selecting the hyper-
groups?home=&gid=2339182
parameters of your model. Given that your training data has
The prices, features and specifications are based on information provided to us, or as available
on various websites and portals. OSFY cannot vouch for their accuracy. Compiled by: Aashima Sharma
Importing GNUCash
Anil Seth
Accounts in GNUKhata
gkcore is the REST API core engine of GNUKhata. The GNUKhata
app comprises two applications — gkcore and gkwebapp. The
objective of this tutorial is to get to know the API.
G
NUKhata is an application developed using the return requests.post(gkhost + route,data=jsondata,headers
Pyramid Web Framework. It comprises two Web =hdrs).json()
applications – a core application called gkcore, and a
Web application called gkwebapp. You may easily get started def getOrg():
with the installation and development by referring to https:// gkdata = getJsonResponse(‘organisations’)[‘gkdata’]
gitlab.com/gnukhata/gkwebapp/wikis/home. first_org = gkdata[0]
As a way of learning how to extend GNUKhata, route= ‘/’.join([‘orgyears’,first_org[‘orgname’], first_
you may consider importing data from GNUCash into org[‘orgtype’]])
GNUKhata. Since the core and user interface are two gkdata = getJsonResponse(route)[‘gkdata’]
separate applications, a good way to learn the core return gkdata[0][‘orgcode’]
application interface is to create a utility program which will
add the GNUCash data. def orgLogin(orgcode):
The utility program will first need to log into the core gkdata = {‘username’:’anil’, ‘userpassword’:’pswd’,’orgc
server, and then issue the commands to add the needed data. ode’:orgcode}
Make sure that you are able to run the core and the Web return postJsonResponse(‘login’,json.dumps(gkdata))
applications; use the latter to create an organisation and an [‘token’]
admin user for the organisation. It is important to keep in mind orgcode = getOrg()
that the gkcore application needs to be run using the gkadmin gktoken = orgLogin(orgcode)
user, assuming that you are following the steps from the wiki
article; otherwise, it will not be able to access the database. Adding accounts
GNUCash can export the accounts and transactions in CSV
The login process files. In the current article, you may extract the accounts into a
You may examine gkwebapp/views/startup.py to understand file, accounts.csv. The Python CSV modules make it very easy
the logic of the steps needed for logging in. The process to handle a CSV file. The first row contains the column labels
involves selecting an organisation first, and then supplying the and should be ignored. You may use the DictReader for more
credentials of a user for that organisation. complex processing of the file. For this application, in which
In order to keep the code as simple as possible, as the only a few columns are needed, the CSV reader is adequate.
objective is to learn the API, select the first organisation. The There are a few differences in the top level account/group
login credentials are hard-coded. In case of any errors, the names of GNUCash and GNUKhata. So, you need to create a
utility will just crash and not attempt any error handling. dictionary to map the names from GNUCash to the ones used
Once the login is successful, a token is issued. This token in GNUKhata.
will authorise all subsequent calls to the core server. You will Some groups in the level below ‘Assets’ in GNUCash
notice that the calls to the core server are simple get or post appear as top level groups in GNUKhata, e.g., ‘Current
requests. The data objects transferred between the two are Assets’ and ‘Fixed Assets’. You may ignore ‘Assets’ from the
JSON objects. account hierarchy when transferring the data.
As before, the code below ignores error handling and
import requests, json assumes ‘all is well’:
gkhost = ‘’
def getJsonResponse(route,hdrs=None): import csv
return requests.get(gkhost + route, headers = hdrs).json() def addSubGroup(name,parent,header):
data = json.dumps({‘groupname’:name,
def postJsonResponse(route,jsondata,hdrs=None): ‘subgroupof’:parent})
is built on a
lot of past experience
Q How did you start your
journey with the CentOS
project?
Managing a Linux
distribution for a long
Q Why was there a need for
CentOS Linux when Fedora
and Red Hat Enterprise Linux
It was in late 2004. I was not one of time requires immense already existed in the market?
the founders of CentOS but showed up community effort. But The Fedora project was still getting
on the scene in its early days. At that what is the key to sorted out around then. Its team had
time, we had a small team and a lot of success in a market that a clear mandate to try and build an
machines running Red Hat Linux 7.3 includes hundreds of upstream-friendly Linux distribution
and Red Hat Linux 9. competitive options? that was going to move fast and
With Red Hat moving down the Also, what are the help the overall Linux ecosystem
path towards Red Hat Enterprise challenges in building mature. Red Hat Enterprise Linux,
Linux, a model that didn’t work well a brand around an on the other hand, has been built
for us, I started looking at options. We open source offering? for the commercial medium to large
explored Debian and SUSE initially, Karanbir Singh, project organisations, looking for value above
but found management and lifecycle leader, CentOS, the code. This left a clear gap in the
on each of them hard to map out answers these questions ecosystem for a community-centric,
workflow into. and outlines the future manageable, predictable enough Linux
It was during this time that I came of the platform that distribution that the community itself,
across the Whitebox Linux effort has been leading Web small vendors, and niche users around
and then the CentOS Project. Both developments, in an the mainstream could consume.
had the same goal, but the CentOS exclusive conversation Initially, the work we did was quite
team was more inclusive and seemed with Jagmeet Singh focused on the specific use cases that
more focused on its goals. So, in late of OSFY. the developers and contributors had.
September 2004, I joined the CentOS All of us were doing specific things, in
IRC channel and then, in November, specific ways and CentOS Linux fitted
I joined the CentOS mailing list as a of a lot of code, written in many in well. But as we started to mature, we
contributor. And I am still contributing languages — each with its own licence, saw great success in specific verticals,
13 years down the road! build process and management. starting from academia and education
Three main strategies saw us past institutions to Web hosting, VoIP (Voice
Karanbir Singh,
project leader,
CentOS
Participation from the target contributors don’t always have the time most of my focus is around enablement
audience for the specific media or to work on each request, but if you look and making sure that contributors and
release is a very critical requirement. at the CentOS Forums, almost every developers have the resources they need
And because this typically comes question gets a great result. to succeed. The other 70 per cent of my
through the common project resources, There is also a lot of diversity in the time is spent as a consulting engineer
it also means that the people doing groups. The CentOS IRC channel idles at Red Hat, working with service
this work are well engaged in the core at over 600 users during the day, but a teams, helping build best practices in
project scope and Linux distribution large number of users never visit the operations roles and modern system
areas, allowing them to bridge the two forums. Similarly, the CentOS mailing patterns for online services.
sides nicely. lists include over 50,000 people, but a Additionally, I have been involved
At the moment, there are dozens large number of them are never reaching in some of the work going on in the
of different kinds of CentOS releases, the IRC channel or the forums. containers world and its user stores,
including atomic hosts, minimal which includes DevOps, SRE-Patterns,
installs, DVD ISOs, cloud images, CI and CD, among others.
vagrant images and containers. Each
of these comes through a group that is
well invested in the specific space. Q What are the major
differences you’ve observed
being a part of a corporate entity
O
penFlow, the first SDN standard, is a simulator with SDN technology. An OpenFlow switch is
communication protocol in software defined a package that routes packets in the SDN environment.
networking (SDN). It is managed by the Open The data plane is referred to as the switch and the control
Networking Foundation (ONF). The SDN controller or the plane is referred to as the controller. The OpenFlow switch
‘brain’ interacts with the forwarding (data) plane of the interacts with the controller and the switch is managed by the
networking devices like routers and switches via OpenFlow controller via the OpenFlow protocol.
APIs. It empowers the network controllers to decide the The fundamental components of the OpenFlow switch
path of network packets over a network of switches. The (as shown in Figure 2) incorporate at least one flow table,
OpenFlow protocol is required to move network control out a meter table, a group table and an OpenFlow channel to
of exclusive network switches and into control programming an exterior controller. The flow tables and group table
that is open source and privately overseen. perform the packet scanning and forwarding function
Software-defined networking uses southbound APIs and based on the flow entries configured by the controller.
northbound APIs. The former are used to hand over information The routing decisions made by the controller are deployed
to the switches and routers. OpenFlow is the first southbound in the switch’s flow table. The meter table is used for the
API. Applications use the northbound APIs to interact. measurement and control of the rate of packets.
OpenFlow
App App
Software
Hardware
- Data Plane
- Control Plane
have been tested on Ubuntu 16.04 and might change for other
versions or distributions. Figure 2: OpenFlow switch components
Before that, there are a few bundles to be introduced on
the system: In the ns-3.26 directory, download the repository of
OFSwitch13, as follows:
$ sudo apt-get install build-essential gcc g++ python git
mercurial unzip cmake $ hg clone-
$ sudo apt-get install libpcap-dev libxerces-c-dev libpcre3- module src/ofswitch13
dev flex bison $ cd src/ofswitch13
$ sudo apt-get install pkg-config autoconf libtool libboost- $ hg update 3.1.0
dev $ cd ../..
$ patch -p1 < src/ofswitch13/utils/ofswitch13-src-3_26.patch
In order to utilise ofsoftswitch13 as a static library, you $ patch -p1 < src/ofswitch13/utils/ofswitch13-doc-3_26.patch
need to introduce the Netbee library, as the ofsoftswitch13
library code relies upon it. The file ofswitch13-src-3_26.patch will allow OFSwitch
to get raw packets from nodes (devices). To do this, it will
$ wget create a new OpenFlow receive callback at CsmaNetDevice
downloads/nbeesrc.zip and virtualNetDevice. The file ofswitch13-doc-3_26.patch is
$ unzip nbeesrc.zip (for unzipping) optional but preferable.
$ cd netbee/src/ After successful installation, configure the module, as follows:
$ cmake .
$ make $ ./waf configure --with-ofswitch13=path/to/ofsoftswitch13
$ sudo cp ../bin/libn*.so /usr/local/lib $ ./waf configure --enable-examples --enable-tests
$ sudo ldconfig
$ sudo cp -R ../include/* /usr/include/ Now, we’re all set. Just build the simulator using the
following command:
Now, clone the repository of the ofsoftwsitch13
library, as follows: $ ./waf
$ git clone Enjoy the ns3.26 simulator with the power of SDN, i.e.,
$ cd ofsoftswitch13 OFSwitch 1.3.
$ ./boot.sh
$ ./configure --enable-ns3-lib Simulating the basic network topology with SDN
$ make based OFSwitch
In this section of the article, we’ll simulate a basic network
Integrating OFSwitch with ns-3 topology with three hosts, a switch and a controller.
To install ns-3.26, use the following command: Figure 3 demonstrates the topology of the network
that we want to create. It includes three hosts, one switch
$ hg clone and one controller.
Here, host2 pings the other two Here, ofswitch13 domain comes into
hosts—host1 and host3. Whenever either Controller action.
of the hosts makes a ping request, it is
forwarded to the switch. This is indicated Ptr<OFSwitch13InternalHelper> of13Helper=Cr
by arrows shown in blue. As this is the eateObject<OFSwitch13InternalHelper>();
Switch
first request, the switch’s flow table will of13Helper->InstallController
not contain any entry. This is known as (controllerNode);
a table miss. Thus, the request will be Host 1
//to install
Host 3
On Udemy:
On Pluralsight:
T
erraform is a tool to create and manage infrastructure it anywhere in your executable’s search path and all is ready to
that works with various IaaS, PaaS and SaaS service run. The following script could be used to download, unzip and
providers. It is very simple to set up and use, as there verify the set-up on your GNU/Linux or Mac OS X nodes:
aren’t multiple packages, agents and servers, etc, involved.
You just declare your infrastructure in a single (or multiple) HCTLSLOC=’/usr/local/bin’
file using a simple configuration language (or JSON), and HCTLSURL=’’
that’s it. Terraform takes your configurations, evaluates the # use latest version shown on
various building blocks from those to create a dependency downloads.html
graph, and presents you a plan to create the infrastructure. TRFRMVER=’x.y.z’
When you are satisfied with the creation plan, you apply the
configurations and Terraform creates independent resources in if uname -v | grep -i darwin 2>&1 > /dev/null
parallel. Once some infrastructure is created using Terraform, then
it compares the current state of the infrastructure with the OS=’darwin’
declared configurations on subsequent runs, and only acts else
upon the changed part of the infrastructure. Essentially, it is OS=’linux’
a CRUD (Create Read Update Destroy) tool and acts on the fi
infrastructure in an idempotent manner.
wget -P /tmp --tries=5 -q -L “${HCTLSURL}/
Installation and set-up terraform/${TRFRMVER}/terraform_${TRFRMVER}_${OS}_amd64.zip”
Terraform is created in Golang, and is provided as a static sudo unzip -o “/tmp/terraform_${TRFRMVER}_${OS}_amd64.zip” -d
binary without any install dependencies. You just pick the “${HCTLSLOC}”
correct binary (for GNU/Linux, Mac OS X, Windows, rm -fv “/tmp/terraform_${TRFRMVER}_${OS}_amd64.zip”
FreeBSD, OpenBSD and Solaris) from its download site, unzip terraform version
Concepts that you need to know the comparison of states, it only shows or applies the
You only need to know a few concepts to start using difference required to bring the infrastructure to the
Terraform quickly to create the infrastructure you desire. desired state as per its configuration. In this way, it
‘Providers’ are some of the building blocks in Terraform creates/maintains the whole infra in an idempotent
which abstract different cloud services and back-ends to manner at every apply stage. You could mark various
actually CRUD various resources. Terraform gives you resources manually to get updated in the next apply
different providers to target different service providers phase using the taint operation. You could also clean
and back-ends, e.g., AWS, Google Cloud, Digital Ocean, up the infra created, partially or fully, with the destroy
Docker and a lot of others. You need to provide different operation.
attributes applicable to the targeted service/ back-end
like the access/secret keys, regions, endpoints, etc, to Working examples and usage
enable Terraform to create and manage various cloud/ Our first example is to clarify the syntax for various
back-end resources. Different providers offer various sections in Terraform configuration files. Download
resources which correspond to different building blocks, the code example1.tf from
e.g., VMs, storage, networking, managed services, etc. article_source_code/sept17/terraform.zip. The code is
So only a single provider is required to make use of all a template to bring up multiple instances of AWS EC2
the resources implemented in Terraform, to create and VMs with Ubuntu 14.04 LTS and an encrypted EBS data
manage infrastructure for a service or back-end. There volume, in a specified VPC subnet, etc. The template
are ‘provisioners’ that correspond to different resources also does remote provisioning on the instance(s) brought
to initialise and configure those resources after their up by transferring a provisioning script and doing some
creation. The provisioners mainly do tasks like uploading remote execution.
files, executing remote/local commands/scripts, running Now, let’s dissect this example, line by line, in
configuration management clients, etc. order to practically explore the Terraform concepts.
You need to describe your infrastructure using a The lines starting with the keyword variable are
simple configuration language in single or multiple starting the blocks of input variables to store values.
files, all with the .tf extension. The configuration model The variable blocks allow the assigning of some initial
of Terraform is declarative, and it mainly merges all the values used as default or no values at all. In case of no
.tf files in its working directory at runtime. It resolves default values, Terraform will prompt for the values at
the dependencies between various resources by itself runtime, if these values are not set using the option -var
to create the correct final dependency graph, to bring ‘<variable>=<value>’. So, in our example, sensitive
up independent resources in parallel. Terraform could data like AWS access/private keys are not being put in
use JSON as well for its configuration language, but the template as it is advisable to supply these at runtime,
that works better when Terraform configurations are manually or through the command options or through
generated by automated tools. The Terraform format is environment variables. The environment variables
more human-readable and supports comments, so you should be in the form of TF_VAR_name to let Terraform
could mix and match .tf and .json configuration files read it. The variables could hold string, list and map
in case some things are human coded and others are types of values, e.g., storing a map of different amis
tool generated. Terraform also provides the concepts of and subnets for different AWS regions as demonstrated
variables, and functions working on those variables, to in our example. The string value is contained in double
store, assign and transform various things at runtime. quotes, lists in square brackets and maps in curly braces.
The general workflow of Terraform consists of two The variables are referenced, and their values are
stages —to plan and apply. The plan stage evaluates extracted through interpolation at different places using
the merged (or overridden) configs, and presents a plan the syntax ${var.<variable name>}. You could explore
before the operator about which resources are going everything about Terraform variables on the official
to get created, modified and deleted. So the changes variables help page.
required to create your desired infrastructure are pretty It’s easy to guess that the block starting with the
clear at the plan stage itself and there are no surprises at keyword provider is declaring and supplying the
runtime. Once you are satisfied with the plan generated, arguments for the service/back-end. The different
the apply stage initiates the sequence to create the providers take different arguments based upon the
resources required to build your declared infrastructure. service/back-end being used and you could explore those
Terraform keeps a record of the created infra in a state in detail on the official providers page. The resource
file (default, terraform.tfstate) and on every further keyword contains the main meat in any Terraform
plan-and-apply cycle, it compares the current state configuration. We are using two AWS building blocks
of the infra at runtime with the cached state. After in our example: aws_instance to bring up instances
and aws_route53_record to create cname records for we execute the command terraform plan -var ‘num_
the instances created. Every resource block takes up nds=”3”’ after exporting the TF_VAR_aws_access_key
some arguments to customise the resource(s) it creates and TF_VAR_aws_access_key, in the working directory
and exposes some attributes of the resource(s) created. where the first example config was created:
Each resource block starts with resource <resource
type> <resource id>, and the important thing is that the + aws_instance.test.0
<resource type> <resource id> combination should be ...
unique in the same Terraform configuration scope. The + aws_instance.test.1
prefix of each resource is linked to its provider, e.g., all ...
the AWS prefix resources require an AWS provider. The + aws_instance.test.2
simple form of accessing the attribute of a resource is ...
<resource type>.<id>.<attribute>. Our example shows + aws_route53_record.test.0
that the public_ip and public_dns attributes of the created ...
instances are being accessed in route53 and output blocks. + aws_route53_record.test.1
Some of the resources require a few post-creation ...
actions like connecting and running local and/or remote + aws_route53_record.test.2
commands, scripts, etc, on AWS instance(s). The
connection block is declared to connect to that resource, Plan: 6 to add, 0 to change, 0 to destroy.
e.g., by creating a ssh connection to the created instances
in our example. The provisioner blocks are the mechanisms If there is some error in the configuration, then that
to use the connection to upload file(s) and the directory to will come up in the plan phase only and Terraform
the created resource(s). The provisioners also run local or dumps the parsing errors. You can explicitly verify the
remote commands, while Chef runs concurrently. You could configuration for any issue using the terraform validate
explore those aspects in detail on the official provisioners command. If all is good, then the plan phase dumps
help page. Our example is uploading a provisioning script the resources it’s going to create (indicated by the +
and kicking that off remotely over ssh to provision the sign before the resources’ names, in green colour) to
created instances out-of-the-box. Terraform provides some converge to the declared model of the infrastructure.
meta-parameters available to all the resources, like the Similarly, the Terraform plan output represents the
count argument in our example. The count.index keeps resources it’s going to delete in red (indicated by
track of the current resource being created to reference that the – sign) and the resources it will update in yellow
now or later, e.g., we are creating a unique name tag for (indicated by the ~ sign). Once you are satisfied with
each instance created, in our example. Terraform deducts the plan of resources creation, you can run terraform
the proper dependencies as we are referencing the attribute apply to apply the plan and actually start creating the
of aws_instance in aws_route53_record; so it creates the infrastructure.
instances before creating their cname records. You could Our second example is to get you more comfortable
use meta-variable depends_on in cases where there is no with Terraform, and use its advanced features to
implicit dependency between resources and you want to create and orchestrate some non-trivial scenarios.
ensure that explicitly. The above-mentioned variables help The code example2.tf can be downloaded from http://
the page provide detailed information about the meta- opensourceforu.com/article_source_code/sept17/
variables too. terraform.zip. It actually automates the task of bringing
The last block declared in our example configuration up a working cluster out-of-the-box. It brings up a
is the output block. As is evident by the name itself, the configurable number of multi-disk instances from the
output could dump the raw or transformed attributes of cluster payload AMI, and then initiates a specific order
the resources created, on demand, at any time. You can of remote provisioners using null_resource, some
also see the usage of various functions like the format and provisioners on all the nodes and some only on a specific
the element in the example configuration. These functions one, respectively.
transform the variables into other useful forms, e.g., the In the example2.tf template, multiple null_resource
element function is retrieving the correct public_ip based are triggered in response to the various resources
upon the current index of the instances created. The official created, on which they depend. In this way, you can
interpolation help page provides detailed information about see how easily we can orchestrate some not-so-trivial
the various functions provided by Terraform. scenarios. You can also see the usage of depends_on
Now let’s look at how to decipher the output being meta-variable to ensure a dependency sequence between
dumped when we invoke different phases of the Terraform various resources. Similarly, you can mark those
workflow. We’ll observe the following kind of output if resources created by Terraform that you want to destroy
or those resources that you wish to create afresh using module to use the same code to provision test and/or
the commands terraform destroy and terraform taint, production clusters. The usage of the module is simply
respectively. The easy way to get quick information about supplying the required variables to it in the manner shown
the Terraform commands and their options/arguments is by below (after running terraform get to create the necessary
typing terraform and terraform <command name> -h. link for the module code):
The recent versions of Terraform have started to
provide data sources, which are the resources to gather module “myvms” {
dynamic information from the various providers. The source = “../modules/awsvms”
dynamic information gathered through the data sources ami_id = “${var.ami_id}”
is used in the Terraform configurations, most commonly inst_type = “${var.inst_type}”
using interpolation. A simple example of a data source is key_name = “${var.key_name}”
to gather the ami id for the latest version of an ami and subnet_id = “${var.subnet_id}”
use that in the instance provisioning configurations as sg_id = “${var.sg_id}”
shown below: num_nds = “${var.num_nds}”
hst_env = “${var.hst_env}”
data “aws_ami” “myami” { apps_pckd = “${var.apps_pckd}”
most_recent = true hst_rle = “${var.hst_rle}”
root_size = “${var.root_size}”
filter { swap_size = “${var.swap_size}”
name = “name” vol_size = “${var.vol_size}”
values = [“MyBaseImage”] zone_id = “${var.zone_id}”
} prov_scrpt= “${var.prov_scrpt}”
} sub_dmn = “${var.sub_dmn}”
}
resource “aws_instance” “myvm” {
ami = “${data.aws_ami.myami.id} You also need to create a variables.tf in the location of
… your module source, requiring the same variables you fill
} in your module. Here is the module variables.tf to pass the
variables supplied from the caller of the module:
Code organisation and reusability
Although our examples show the entire declarative variable “ami_id” {}
configuration in a single file, we should break it into variable “inst_type” {}
more than one file. You could break your whole config variable “key_name” {}
into various separate configs based upon the respective variable “subnet_id” {}
functionality they provide. So our first example could variable “sg_id” {}
be broken into variables.tf that keeps all the variables variable “num_nds” {}
blocks, aws.tf that declares our provider, instances. variable “hst_env” {}
tf that declares the layout of the AWS VMs, route53. variable “apps_pckd” {}
tf that declares the aws route 53 functionality, and variable “hst_rle” {}
output.tf for our outputs. To keep things simple, use and variable “root_size” {}
maintain, keep everything related to a whole task being variable “swap_size” {}
solved by Terraform in a single directory along with variable “vol_size” {}
sub-directories that are named as files, scripts, keys, variable “zone_id” {}
etc. Terraform doesn’t enforce any hierarchy of code variable “prov_scrpt” {}
organisation, but keeping each high level functionality variable “sub_dmn” {}
in its dedicated directory will save you from unexpected
Terraform actions in spite of unrelated configuration The Terraform official documentation consists of a few
changes. Remember, in the software world, “A detailed sections for modules usage and creation, which should
little copying is better than a little dependency,” provide you more information on everything related to modules.
as things get fragile and complicated easily with
each added functionality. Importing existing resources
Terraform provides the functionality of creating As we have seen earlier, Terraform caches the properties
modules to reuse the configs created. The cluster of the resources it creates into a state file, and by default
creation template shown above is actually put in a doesn’t know about the resources not created through it.
But recent versions of Terraform have introduced a feature The official Terraform documentation provides clear
to import existing resources not created through Terraform examples to import the various resources into an existing
into its state file. Currently, the import feature only updates Terraform infrastructure. But if you are looking to include
the state file, but the user needs to create the configuration the existing AWS resources in the AWS infra created by
for the imported resources. Otherwise, Terraform will show Terraform in a more automated way, then take a look at the
the imported resources with no configuration and mark Terraforming tool link in the References section.
those for destruction.
Let’s make this clear by importing an AWS instance, Note: Terraform providers are no longer distributed as
which wasn’t brought up through Terraform, into some part of the main Terraform distribution. Instead, they are
Terraform-created infrastructure. You need to run the installed automatically as part of running terraform init.
command terraform import aws_instances.<Terraform The import command requires that imported resources
Resource Name> <id of the instance> in the directory be specified in the configuration file. Please see terraform
changelog
where a Terraform state file is located. After the successful
v0.10.0/CHANGELOG.md for these.
import, Terraform gathers information about the instance and
adds a corresponding section in the state file. If you see the
Terraform plan now, it’ll show something like what follows: Missing bytes
You should now be feeling comfortable about starting to
- aws_instance .<Terraform Resource Name> automate the provisioning of your cloud infrastructure. To
be frank, Terraform is so feature-rich now that it can’t be
So it means that now you need to create a corresponding fully covered in a single or multiple articles and deserves a
configuration in an existing or new .tf file. In our example, dedicated book (which has already shaped up in the form of
the following Terraform section should be enough to not let an ebook, ‘Terraform Up & Running’). So you could further
Terraform destroy the imported resource. take a look at the examples provided in its official Git repo.
Also, the References section offers a few pointers to some
resource “aws_instance” “<Terraform Resource Name>” { excellent reads to make you more comfortable and confident
ami = “<AMI>” with this excellent cloud provisioning tool.
instance_type = “<Sizing info>” Creating on-demand and scalable infrastructure in
the cloud is not very difficult if some very simple basic
tags {
Please note that you only need to mention the Terraform some other management pieces to create an immutable
resource attributes that are required as per the Terraform infrastructure workflow that can tame any kind of modern
document. Now, if you see the Terraform plan, the earlier cloud infrastructure. The ‘Terraform Up and Running’ ebook
shown destruction plan goes away for the imported resource. is already out in the form of a print book.
You could use the following command to extract the attributes
of the imported resource to create its configuration:
References
sed -n ‘/aws_instance.<Terraform Resource Name>/,/}/p’
[1] Terraform examples:
terraform.tfstate | \ terraform/tree/master/examples
grep -E ‘ami|instance_type|tags’ | grep -v ‘%’ | sed ‘s/^ [2] Terraforming tool:
[3] A Comprehensive Guide to Terraform:.
*//’ | sed ‘s/:/ =/’
gruntwork.io/a-comprehensive-guide-to-terraform-
b3d32832baca#.ldiays7wk
Please pay attention when you import a resource into [4] Terraform Up & Running:.
terraformupandrunning.com/?ref=gruntwork-blog-
your current Terraform state and decide not to use that comprehensive-terraform
going forward. In which case, don’t forget to rename your
terraform.state.backup as terraform.state file to roll back
By: Ankur Kumar
to the previous state. You could also delete that resource
block from your state file, as an alternative, but it’s not a The author is a systems and infrastructure developer/
architect and FOSS researcher, currently based in the
recommended approach. Otherwise, Terraform will try to
US. You can find some of his other writings on FOSS
delete the imported but not desired resource and that could at:.
be catastrophic in some cases.
W
ireshark is a cross-platform network analysis tool used bounding box in Figure 1 for available interfaces.
to capture packets in real-time. Wireshark includes In this tutorial, we are going to capture Wi-Fi packets, so
filters, flow statistics, colour coding, and other the option ‘Wi-Fi’ has been selected (if you wish to capture
features that allow you to get a deep insight into network traffic the packets using Ethernet or any other interface, select the
and to inspect individual packets. Discovering the delayed HTTP corresponding options).
responses for a particular HTTP request from a particular PC is a Step 2: Here, we make a request to.
tedious task for most admins. This tutorial will teach readers how wikipedia.org and, as a result, Wikipedia sends an HTTP
to discover and visualise the response time of a Web server using response of ‘200 OK’, which indicates the requested
Wireshark. OSFY has published many articles on Wireshark, action was successful. ‘200 OK’ implies that the response
which you can refer to for a better understanding of the topic. contains a payload, which represents the status of the
Step 1: Start capturing the packets using Wireshark on a requested resource (the request is successful). Now filter
specified interface to which you are connected. Refer to the all the HTTP packets as shown in Figure 2, as follows:
syntax: http
DevOps Series
Creating a Virtual Machine for
Erlang/OTP Using Ansible
.........
E
rlang is a programming language designed by Ericsson The IP address of the guest CentOS 6.8 VM is added to
primarily for soft real-time systems. The Open Telecom the inventory file as shown below:
Platform (OTP) consists of libraries, applications
and tools to be used with Erlang to implement services that erlang ansible_host=192.168.122.150 ansible_connection=ssh
require high availability. In this article, we will create a test ansible_user=bravo ansible_password=password
virtual machine (VM) to compile, build, and test Erlang/OTP
from its source code. This allows you to create VMs with An entry for the erlang host is also added to the /etc/hosts
different Erlang release versions for testing. file as indicated below:
The Erlang programming language was developed by
Joe Armstrong, Robert Virding and Mike Williams in 1986 192.168.122.150 erlang
and released as free and open source software in 1998. It
was initially designed to work with telecom switches, but is A ‘bravo’ user account is created on the test VM, and is
widely used today in large scale, distributed systems. Erlang added to the ‘wheel’ group. The /etc/sudoers file also has the
is a concurrent and functional programming language, and is following line uncommented, so that the ‘bravo’ user will be
released under the Apache License 2.0. able to execute sudo commands:
steps, and we shall go through each one of them. The version - name: Download and extract Erlang source tarball
of Erlang/OTP can be passed as an argument to the playbook. unarchive:
Its default value is the release 19.0, and is defined in the src: “{{ ERL_VERSION }}.tar.
variable section of the playbook as shown below: gz”
dest: “{{ ERL_DIR }}”
vars: remote_src: yes
ERL_VERSION: “otp_src_{{ version | default(‘19.0’) }}”
ERL_DIR: “{{ ansible_env.HOME }}/installs/erlang” The ‘configure’ script is available in the sources, and
ERL_TOP: “{{ ERL_DIR }}/{{ ERL_VERSION }}” it is used to generate the Makefile based on the installed
TEST_SERVER_DIR: “{{ ERL_TOP }}/release/tests/test_server” software. The ‘make’ command will build the binaries from
the source code.
The ERL_DIR variable represents the directory where the
tarball will be downloaded, and the ERL_TOP variable refers - name: Build the project
to the top-level directory location containing the source code. command: “{{ item }} chdir={{ ERL_TOP }}”
The path to the test directory from where the tests will be with_items:
invoked is given by the TEST_SERVER_DIR variable. - ./configure
Erlang/OTP has mandatory and optional package - make
dependencies. Let’s first update the software package environment:
repository, and then install the required dependencies as ERL_TOP: “{{ ERL_TOP }}”
indicated below:
After the ‘make’ command finishes, the ‘bin’ folder in
tasks: the top-level sources directory will contain the Erlang ‘erl’
- name: Update the software package repository interpreter. The Makefile also has targets to run tests to verify
become: true the built binaries. We are remotely invoking the test execution
yum: from Ansible and hence -noshell -noinput are passed as
name: ‘*’ arguments to the Erlang interpreter, as shown in the .yaml file.
update_cache: yes
- name: Prepare tests
- name: Install dependencies command: “{{ item }} chdir={{ ERL_TOP }}”
become: true with_items:
package: - make release_tests
name: “{{ item }}” environment:
state: latest ERL_TOP: “{{ ERL_TOP }}”
with_items:
- wget - name: Execute tests
- make shell: “cd {{ TEST_SERVER_DIR }} && {{ ERL_TOP }}/bin/erl
- gcc -noshell -noinput -s ts install -s ts smoke_test batch -s
- perl init stop”
- m4
- ncurses-devel You need to verify that the tests have passed successfully
- sed by checking the $ERL_TOP/release/tests/test_server/index.
- libxslt html page in a browser. A screenshot of the test results is
- fop shown in Figure 1.
The built executables and libraries can then be installed
The Erlang/OTP sources are written using the ‘C’ on the system using the make install command. By default,
programming language. The GNU C Compiler (GCC) the install directory is /usr/local.
and GNU Make are used to compile the source code. The
‘libxslt’ and ‘fop’ packages are required to generate the - name: Install
documentation. The build directory is then created, the source command: “{{ item }} chdir={{ ERL_TOP }}”
tarball is downloaded and it is extracted to the directory with_items:
mentioned in ERL_DIR. - make install
become: true
- name: Create destination directory environment:
file: path=”{{ ERL_DIR }}” state=directory ERL_TOP: “{{ ERL_TOP }}”
server”
tasks:
- name: Update the software package repository
become: true
yum:
name: ‘*’
update_cache: yes
An Introduction to govcsim
(a vCenter Server Simulator)
govcsim is a vCenter Server and ESXi API based simulator that offers a quick fix
solution for prototyping and testing code. It simulates the vCenter Server model and
can be used to create data centres, hosts, clusters, etc.
Application
Development
Open Source Day
and You Cloud
(talks on hybrid app development
and Big Data
(Success Stories) for the enterprise use)
Asia’s #1 Conference
on Open Source
Register Now!
$ vcsim References
[1] Ansible documentation:
Now, govcsim is working. You can check
[2] Govcsim:
out the various methods available by visiting master/vcsim on your favourite browser.
By: Abhijeet Kasurde
Testing govcsim with Ansible
Now, let’s try to write a simple Ansible Playbook, which The author works at Red Hat and is a FOSS evangelist. He
loves to explore new technologies and software. You can
will list down all VMs emulated by govcsim. The complete contact him at abhijeetkasurde21@gmail.com.
code is given in Figure 3. You can read up more about
Serverless Architectures:
Demystifying Serverless Computing
Serverless architectures refer to applications that depend a lot on third party
services known as BaaS (Backend as a Service), or on custom code which
runs on FaaS (Function as a Service).
I
n the 1990s, Neal Ford (now at ThoughtWorks) was will end the discussion by looking at the limitations of the
working in a small company that focused on a technology serverless approach.
called Clipper. By writing an object-oriented framework
based on Clipper, DOS applications were built using dBase. Why serverless?
With the expertise the firm had on Clipper, it ran a thriving Most of us remember using server machines of one form or
training and consulting business. Then, all of a sudden, this another. We remember logging remotely to server machines
Clipper-based business disappeared with the rise of Windows. and working with them for hours. We had cute names for the
So Neal Ford and his team went scrambling to learn and adopt servers - Bailey, Daisy, Charlie, Ginger, and Teddy - treating
new technologies. “Ignore the march of technology at your them well and taking care of them fondly. However, there
peril,” is the lesson that one can learn from this experience. were many problems in using physical servers like these:
Many of us live inside ‘technology bubbles’. It is easy to Companies had to do capacity planning and predict their
get cozy and lose track of what is happening around us. All future resource requirements.
of a sudden, when the bubble bursts, we are left scrambling Purchasing servers meant high capital expenses (capex)
to find a new job or business. Hence, it is important to stay for companies.
relevant. In the 90s, that meant catching up with things like We had to follow lengthy procurement processes to
graphical user interfaces (GUIs), client/server technologies purchase new servers.
and later, the World Wide Web. Today, relevance is all about We had to patch and maintain the servers … and so on.
being agile and leveraging the cloud, machine learning, The cloud and virtualisation provided a level of flexibility
artificial intelligence, etc. that we hadn’t known with physical servers. We didn’t have
With this background, let’s delve into serverless to follow lengthy procurement processes, or worry about
computing, which is an emerging field. In this article, readers who ‘owns the server’, or why only a particular team had
will learn how to employ the serverless approach in their ‘exclusive access to that powerful server’, etc. The task of
applications and discover key serverless technologies; we procuring physical machines became obsolete with the arrival
of virtual machines (VMs) and the cloud. The architecture The solution is to set up some compute capacity to process
we used also changed. For example, instead of scaling up data from a database and also execute this logic in a language
by adding more CPUs or memory to physical servers, we of choice. For example, if you are using the AWS platform,
started ‘scaling out’ by adding more machines as needed, you can use DynamoDB for the back-end, write programming
but, in the cloud. This model gave logic as Lambda functions, and
us the flexibility of an opex-based AWS expose them through the AWS API
(operational expenses-based) Lambda
Gateway with a load balancer. This
revenue model. If any of the VMs entire set-up does not require you to
went down, we got new VMs Key provision any infrastructure or have
spawned in minutes. In short, we Apache
OpenWhisk
Serverless MS Azure any knowledge about underlying
started treating servers as ‘cattle’ Platforms Functions servers/VMs in the cloud. You can
and not ‘pets’. use a database of your choice for
However, the cloud and Google the back-end. Then choose any
Cloud
virtualisation came with their own Functions programming language supported
problems and still have many in AWS Lambda, including Java,
limitations. We are still spending a Figure 1: Key serverless platforms Python, JavaScript, and C#. There is
lot of time managing them — for no cost involved if there aren’t any
example, bringing VMs up and down, based on need. We have users using the MovieBot. If a blockbuster like ‘Baahubali’ is
to architect for availability and fault-tolerance, size workloads, released, then there could be a huge surge in users accessing
and manage capacity and utilisation. If we have dedicated VMs the MovieBot at the same time, and the set-up would
provisioned in the cloud, we still have to pay for the reserved effortlessly scale (you have to pay for the calls, though).
resources (even if it’s just idle time). Hence, moving from a Phew! You essentially engineered a serverless application.
capex model to an opex one is not enough. What we need is to With this, it’s time to define the term ‘serverless’.
only pay for what we are using (and not more than that) and Serverless architectures refer to applications that significantly
‘pay as you go’. Serverless computing promises to address depend on third-party services (known as Backend-as-a-
exactly this problem. Service or BaaS) or on custom code that’s run in ephemeral
The other key aspect is agility. Businesses today need containers (Function-as-a-Service or FaaS).
to be very agile. Technology complexity and infrastructure Hmm, that’s a mouthful of words; so let’s dissect
operations cannot be used as an excuse for not delivering this description.
value at scale. Ideally, much of the engineering effort should Backend-as-a-Service: Typically, databases (often NoSQL
be focused on providing functionality that delivers the flavours) hold the data and can be accessed over the cloud,
desired experience, and not in monitoring and managing the and a service can be used to help access that back-end.
infrastructure that supports the scale requirements. This is Such a back-end service is referred to as BaaS.
where serverless shines. Function-as-a-Service: Code that processes the requests
(i.e., the ‘programming logic’ written in your favourite
What is serverless? programming language) could be run on containers that are
Consider a chatbot for booking movie tickets - let’s call it spun and destroyed as needed. They are known as FaaS.
MovieBot. Any user can make queries about movies, book The word ‘serverless’ is misleading because it literally
tickets, or cancel them in a conversational style (e.g., “Is means there are no servers. Actually, the word implies, “I don’t
‘Dunkirk’ playing in Urvashi Theatre in Bengaluru tonight?” care what a server is.” In other words, serverless enables us
in voice or text). to create applications without thinking about servers, i.e., we
This solution requires three elements: a chat interface can build and run applications or services without worrying
channel (like Skype or Facebook Messenger), a natural about provisioning, managing or scaling the underlying
language processor (NLP) to understand the user’s intentions infrastructure. Just put your code in the cloud and run it! Keep
(e.g., ‘book a ticket’, ‘ticket availability’, ‘cancellation’, etc), in mind that this applies to Platform-as-a-Service (PaaS) as
and then access to a back-end where the transactions and data well; although you may not deal with direct VMs with PaaS,
pertaining to movies is stored. The chat interface channels you still have to deal with instance sizes and capacity.
are universal and can be used for different kinds of bots. NLP Think of serverless as a piece of functionality to run
can be implemented using technologies like AWS Lex or IBM — not in your machine but executed remotely. Typically,
Watson. The question is: how is the back-end served? Would serverless functions are executed in an ‘event-driven’
you set up a dedicated server (or a cluster of servers), an API fashion — the functions get executed as a response to events
gateway, deploy load balancers, or put in place identity and or requests on HTTP. In the case of the MovieBot, the
access control mechanisms? That’s costly and painful, right! Lambda functions are invoked to serve user queries as and
That’s where serverless technology can help. when user(s) interact with it.
Use cases Functions has good support for a wider variety of languages
With serverless architecture, developers can deploy certain and integrates with Microsoft’s Azure services. Google’s
types of solutions at scale with cost-effectiveness. We have Cloud Functions is currently in beta. One of the key
already discussed developing chatbots - it is a classic use open source players in serverless technologies is Apache
case for serverless computing. Other key use cases for the OpenWhisk, backed by IBM and Adobe. It is often tedious
serverless approach are given below. to develop applications directly on these platforms (AWS,
1) Three-tier Web applications: Conventional single page Azure, Google and OpenWhisk). The serverless framework is
applications (SPA), which rely on REpresentative State a popular solution that aims to ease application development
Transfer (REST) based services to perform a given on these platforms.
functionality, can be re-written to leverage serverless Many solutions (especially open source) focus on
functions front-ended by an API gateway. This is a abstracting away the details of container technologies like
powerful pattern that helps your application scale Docker and Kubernetes. Hyper.sh provides a container
infinitely, without concerns of configuring scale-out or hosting service in which you can use Docker images
infrastructure resources.. directly in serverless style. Kubeless from Bitnami, Fission
2) Scalable batch jobs: Batch jobs were traditionally run from Platform9, and funktion from Fabric8 are serverless
as daemons or background processes on dedicated VMs. frameworks that provide an abstraction over Kubernetes.
More often than not, this approach hit scalability and had Given that serverless architecture is an emerging approach,
reliability issues - developers would leave their critical technologies are still evolving and are yet to mature. So you
processes with Single Points of Failure (SPoF). With the will see a lot of action in this space in the years to come.
serverless approach, batch jobs can now be redesigned
as a chain of mappers and reducers, each running as Join us at the India Serverless Summit 2017
independent functions. Such mappers and reducers will These are the best of times, and these are the worst of
share a common data store, something like a blob storage times! There are so many awesome new technologies to
or a queue, and can individually scale up to meet the data catch up on. But, we simply can’t. We have seen a pro-
processing needs. gression of computing models - from virtualisation, IaaS,
3) Stream processing: Related to scalable batch jobs is the PaaS, containers, and now, serverless - all in a matter of
pattern of ingesting and processing large streams of data a few years. You certainly don’t want to be left behind.
for near-real-time processing. Streams from services So join us at the Serverless Summit, India’s first
confluence on serverless technologies, being held on
like Kafka and Kinesis can be processed by serverless
October 27, 2017 at Bengaluru. It is the best place to
functions, which can be scaled seamlessly to reduce hear from industry experts, network with technology
latency and increase the throughput of the system. This enthusiasts, as well as learn about how to adopt server-
pattern can elegantly handle spiky loads as well. less architecture. The keynote speaker is John Willis,
4) Automation/event-driven processing: Perhaps the first director of ecosystem development at Docker and a
application of serverless computing was automation. DevOps guru (widely known for the book ‘The DevOps
Functions could be written to respond to certain alerts Handbook’ that he co-authored).
or events. These could also be periodically scheduled to Open Source For You is the media partner and the
augment the capabilities for the cloud service provider Cloud Native Computing Foundation is the community
through extensibility. partner for this summit. For more details, please visit the
The kind of applications that are best suited for serverless website.
architectures include mobile back-ends, data processing
systems (real-time and batch) and Web applications. Challenges in going serverless
In general, serverless architecture is suitable for any Despite the fact that a few large businesses are already
distributed system that reacts to events or process workloads powered entirely by serverless technologies, we should keep
dynamically, based on demand. For example, serverless in mind that serverless is an emerging approach. There are
computing is suitable for processing events from IoT (Internet many challenges we need to deal with when developing
of Things) devices, processing large data sets (in Big Data) serverless solutions. Let us discuss them in the context of the
and intelligent systems that respond to queries (chatbots). MovieBot example mentioned earlier.
Debugging
Serverless technologies Unlike in typical application development, there is no
There are many proprietary and a few open source serverless concept of a local environment for serverless functions. Even
technologies and platforms available for us to choose from. fundamental debugging operations like stepping-through,
AWS Lambda is the earliest (announced in late 2014 and breakpoints, step-over and watch points are not available with
released in 2015) and the most popular serverless technology, serverless functions. As of now, we need to rely on extensive
while other players are fast catching up. Microsoft’s Azure logging and instrumentation for debugging.
When MovieBot provides an inconsistent response or Further, maintaining all the dependent packages, versioning
does not understand the intent of the user, how do we debug them, etc, is a practical challenge as well.
the code that is running remotely? For situations such as this, Another challenge is the lack of support for widely used
we have to log numerous details: NLP scores, the dialogue languages from serverless platforms. For instance, as of May
responses, query results of the movie ticket database, etc. 2017, you can write functions in C#, Node.js (4.3 and 6.10),
Then we have to manually analyse and do detective work to Python (2.7 and 3.6) and Java 8 on AWS Lambda. How about
find out what could have gone wrong. And, that is painful. other languages like Go, PHP, Ruby, Groovy, Rust or any
State management others of your choice? Though there are solutions to write
Although serverless is inherently stateless, real-world serverless functions in these languages and execute them, it
applications invariably have to deal with state. Orchestrating is harder to do so. Since serverless technologies are maturing
a set of serverless functions becomes a significant challenge with support for a wider number of languages, this challenge
when there is a common context that has to be passed will gradually disappear with time.
between them. Serverless is all about creating solutions without thinking or
Any chatbot conversation represents a dialogue. It worrying about servers; think of it as just putting your code in
is important for the program to understand the entire the cloud and running it! Serverless is a game-changer because
conversation. For example, for the query, “Is ‘Dunkirk’ it shifts the way you look at how applications are composed,
playing in Urvashi Theatre in Bengaluru tonight?” if the written, deployed and scaled. If you want significant agility
answer from MovieBot is “Yes”, then the next query from in creating highly scalable applications while remaining
the user could be, “Are two tickets available?” If MovieBot cost-effective, serverless is what you need. Businesses across
confirms this, the user could say, “Okay, book it.” For this the world are already providing highly compelling solutions
transaction to work, MovieBot should remember the entire using serverless computing technologies. The applications
dialogue, which includes the name of the movie, the theatre’s serverless has range from chatbots to real-time stream
location, the city, and the number of tickets to book. This processing from IoT (Internet of Things) devices. So it is not
entire dialogue represents a sequence of stateless function a question of if, but rather, when you will adopt the serverless
calls. However, we need to persist this state for the final approach for your business.
transaction to be successful. This maintenance of state
external to functions is a tedious task.
References
Vendor lock-in
Although we talk about isolated functions that are [1] ‘Build Your Own Technology Radar’, Neal Ford, http://
nealford.com/memeagora/2013/05/28/build_your_own_
executed independently, we are in practice tied to the SDK
technology_radar.html
(software development kit) and the services provided by [2] ‘Serverless Architectures’, Martin Fowler, https://
the serverless technology platform. This could result in martinfowler.com/articles/serverless.html
vendor lock-in because it is difficult to migrate to other [3] ‘Why the Fuss About Serverless?’ Simon Wardley,.
gardeviance.org/2016/11/why-fuss-about-serverless.html
equivalent platforms.
[4] ‘Serverless Architectural Patterns and Best Practices’,
Let’s assume that we implement the MovieBot on the AWS Amazon Web Services,
Lambda platform using Python. Though the core logic of the watch?v=b7UMoc1iUYw
bot is written as Lambda functions, we need to use other related Serverless technologies
services from the AWS platform for the chatbot to work, such • AWS Lambda:
as AWS Lex (for NLP), AWS API gateway, DynamoDB (for • Azure Functions:
data persistence), etc. Further, the bot code may need to make • Google Cloud Functions:
functions/
use of the AWS SDK to consume the services (such as S3 or • Apache OpenWhisk:
DynamoDB), and that is written using boto3. In other words, • Serverless framework:
for the bot to be a reality, it needs to consume many more serverless
services from the AWS platform than just the Lambda function • Fission:
• Hyper.sh:
code written in plain Python. This results in vendor lock-in • Funktion:
because it is harder to migrate the bot to other platforms. • Kubeless:
Other challenges
Each serverless function code will typically have third
party library dependencies. When deploying the serverless By: Ganesh Samarthyam, Manoj Ganapathi and Srushit Repakula
function, we need to deploy the third party dependency The authors work at CodeOps Technologies, which is a
packages as well, and that increases the deployment package software technology, consulting and training company based
size. Because containers are used underneath to execute the in Bengaluru. CodeOps is the organiser of the upcoming India
Serverless Summit, scheduled on October 27, 2017. Please
serverless functions, the increased deployment size increases
check for more details.
the latency to start up and execute the serverless functions.
‘M
These services are independently deployable and scalable.
icroservices’ is a compound word made of Each service also provides a kind of contract allowing for
‘micro’ and ‘services’. As the name suggests, different services to be written in different programming
microservices are the small modules that provide languages. They can also be managed by different teams.
some functionality to a system. These modules can be
anything that is designed to serve some specific function. The architecture of microservices
These services can be independent or interrelated with each Microservices follows the service-oriented architecture
other, based on some contract. in which the services are independent of users, products
The main function of microservices is to provide and technologies. This architecture allows one to build
isolation between services — a separation of services from applications as suites of services that can be used by other
servers and the ability to run them independently, with the services. This architecture is in contrast to the monolithic
interaction between them based on a specific requirement. architecture, where the services are built as a single unit
To achieve this isolation, we use containerisation, which will comprising a client-side user interface, databases and
be discussed later. The idea behind choosing microservices server-side applications in a single frame — all dependent
is to avoid correlated failure in a system where there on one another. The failure of one can bring down the
is a dependency between services. When running all whole system.
microservices inside the same process, all services will be The microservices architecture mainly consists of
killed if the process is restarted. By running each service the client-side user interface, databases and server-side
in its own process, only one service is killed if that process applications as different services that are related in some
is restarted, but restarting the server will kill all services. way to each other but are not dependent on each other. Each
By running each service on its own server, it’s easier to layer is independent of the other, which in turn leads to easy
maintain these isolated services, though there is a cost maintenance. The architecture is represented in Figure 2.
associated with this option. This architecture is a form or system that is built by
plugging together components, somewhat like in a real
How microservices are defined world composition where a component is a unit of software
The microservices architecture develops a single application that is independently replaceable and upgradeable. These
as a suite of small services, each running in its own process microservices are easily deployable and integrated into
and communicating with lightweight mechanisms, often an one another. This gives rise to the possibility of continuous
HTTP resource API. integration and continuous deployment.
doesn’t have to
install and configure
AMQP
User Interface HTTP
complex databases
Relational
nor worry about
Services HTTP
HTTP DB
switching between
Module 1 Module 2 Module 3 Module 4 incompatible
language toolchain
HTTP HTTP
Key / Value versions. When an
Store
app is dockerised,
Database 2
HTTP
HTTP that complexity
Database 1
Database 3
Client Side User Interface Microservices Database is pushed into
microservices - application databases
containers that
Figure 1: Microservices - application databases Figure 2: Microservices architecture are easily built,
shared and run. It
is a tool that is designed to benefit both developers and
APP
A
APP
A*
APP
B
Containers are isolated systems administrators.
but share the OS and, where
Bins/ Bins/ Bins/ appropriate, the bins/libraries
VM
Libs Libs Libs
....result is significantly faster deployment, How well does Kubernetes go with Docker?
Guest Guest Guest
much less overhead, easier migration,
faster restart
Before starting the discussion on Kubernetes, we must
OS OS OS first understand orchestration, which is to arrange various
APP A*
APP B*
APP B*
APP B*
APP B
Docker
Container
Hyprevisor (Type 2) means the process of integrating two or more applications
Host OS Host OS
and/or services together to automate a process, or
Server Server
synchronise data in real-time.
Figure 3: Virtual machines vs containers The intermediate path connecting two or more services
is done by orchestration, which refers to the automated
What’s so good about microservices? arrangement, coordination and management of software
With the advances in software architecture, microservices containers. So what does Kubernetes do then? Kubernetes
have emerged as a different platform compared to other is an open source platform for automating deployments,
software architecture. Microservices are easily scalable scaling and operations of application containers across
and are not limited to a language; so you are free to clusters of hosts, providing container-centric infrastructure.
choose any language for the services. The services Orchestration is an idea whereas Kubernetes implements
are loosely coupled, which in turn results in ease of that idea. It is a tool for orchestration. It deploys containers
maintenance and flexibility, as well as reduced time in inside a cluster. It is a helper tool that can be used to
debugging and deployment. manage a cluster of containers and treat all servers as a
single unit. These containers are provided by Docker.
Microservices with Docker and Kubernetes The best example of Kubernetes is the Pokémon Go
Docker is a software technology that provides containers, App, which runs on a virtual environment of Google
which are a computer virtualisation method in which Cloud, in a separate container for each user. Kubernetes
the kernel of an operating system allows the existence uses a different set-up for each OS. So if you want a tool
of multiple isolated user-space instances, instead of just that will overcome Docker’s limitations, you should go
one. Everything required to make a piece of software run with Kubernetes.
is packaged into isolated containers. With microservices, To conclude, we may say that microservices is growing
containers play the same role of providing virtual very fast, the reason being its features of independence and
environments to different processes that are running, being isolation which give it the power to easily run, test and be
deployed and undergoing testing, independently. deployed. This is just a small summary of microservices,
Docker is a bit like a virtual machine, but rather than about which there is a lot more to learn.
creating a whole virtual operating system, Docker allows
applications to use the same kernel as the system that it’s
running on and only requires applications to be shipped By: Astha Srivastava
with things not already running on the host computer. The author is a software developer. Her areas of expertise
The main idea behind using Docker is to eliminate the are C, C++, C#, Java, JavaScript, HTML and ASP.NET. She has
recently started working on the basics of artificial intelligence.
‘works on my machine’ type of problems that occur
She can be reached at asthasri25@gmail.com.
when collaborating on code with co-workers. Docker
Selenium:
A Cost-Effective Test Automation
Tool for Web Applications
Selenium is a software testing framework. Test authors can write tests in it without
learning a test scripting language. It automates Web based applications efficiently and
provides a recording/ playback system for authoring tests.
S
elenium is a portable software-testing framework for suite and is the easiest one to learn. It is a Firefox plugin that
Web applications that can operate across different you can install as easily as any other plugin. It allows testers to
browsers and operating systems. It is quite similar record their actions as they go through the workflow that they
to HP Quick Test Pro (or QTP, now called UFT) except that need to test. But it can only be used with the Firefox browser,
Selenium focuses on automating Web based applications. as other browsers are not supported. The recorded scripts can
Testing done using this tool is usually referred to as Selenium be converted into various programming languages supported
testing. Selenium is not just a single tool but a set of tools by Selenium, and the scripts can be executed on other browsers
that helps the tester to automate Web based applications more as well. However, for the sake of simplicity, the Selenium IDE
efficiently. It has four components: should only be used as a prototyping tool. If you want to create
1. The Selenium integrated development environment (IDE) more advanced test cases, either use Selenium RC or WebDriver.
2. The Selenium remote control (RC)
3. WebDriver Selenium RC
4. The Selenium grid Selenium RC or Selenium Remote Control (also known as
Selenium RC and WebDriver are merged into a single Selenium 1.0) was the flagship testing framework of the
framework to form Selenium 2. Selenium 1 is also referred whole Selenium project for a long time. It works in a way
to as Selenium RC. Jason Huggins created Selenium in 2004. that the client libraries can communicate with the Selenium
Initially, he named it JavaScriptTestRunner, and later changed RC server that passes each Selenium command for execution.
this to Selenium. It is licensed under Apache License 2.0. In Then the server passes the Selenium command to the browser
the following sections, we will learn about how Selenium and using Selenium-Core JavaScript commands. This was the
its components operate. first automated Web testing tool that allowed people to
use a programming language they preferred. Selenium RC
The Selenium IDE components include:
The Selenium IDE is the simplest framework in the Selenium 1. The Selenium server, which launches and kills the
browser, interprets and runs the Selenese commands The WebDriver uses a different underlying framework,
passed from the test program, and acts as an HTTP while Selenium RC uses a JavaScript Selenium-Core
proxy, intercepting and verifying HTTP messages passed embedded within the browser, which has its limitations.
between the browser and Application Under Test (AUT). WebDriver directly interacts with the browser without any
2. Client libraries that provide the interface between each intermediary. Selenium RC depends on a server.
programming language and the Selenium RC server.
Selenium RC is great for testing complex AJAX based Architecture
Web user interfaces under a continuous integration system. It The architecture of WebDriver is explained in Figure 1.
is also an ideal solution for users of Selenium IDE who want
to write tests in a more expressive programming language
than the Selenese HTML table format.
Web Application
Selenese commands
Selenese is the set of Selenium commands which is used to
test Web applications. The tester can test the broken links,
the existence of some object on the UI, AJAX functionality,
the alert window, list options and a lot more using Selenese.
There are three types of commands:
1. Actions: These are commands that manipulate the state
of the application. Upon execution, if an action fails, the
execution of the current test is stopped. Some examples are:
click(): Clicks on a link, button, checkbox or radio button.
contextMenuAt (locator, coordString): Simulates the Selenium Web Driver
user by clicking the ‘Close’ button in the title bar of a
popup window or tab.
2. Accessors: These evaluate the state of the application and
store the results in variables which are used in assertions.
Some examples are: Selenium Test
assertErrorOnNext: Pings Selenium to expect an error on (Java, C#, Ruby, Python,
Perl, Php, Java Script)
the next command execution with an expected message.
storeAllButtons: Returns the IDs of all buttons on the page.
3. Assertions: These enable us to verify the state of an Figure 1: Architecture of Selenium WebDriver
application and compare it against the expected. It is
used in three modes, i.e., assert, verify and waitfor. Some The differences between WebDriver and Selenium RC are
examples are: given in Table 1.
waitForErrorOnNext(message): Wait for error, used with
the accessor assertErrorOnNext. Table 1
verifySelected (selectLocator, opti onLocator):Verifies WebDriver Selenium RC
that the selected item of a drop-down satisfies Architecture is simpler, Architecture is complex, as it
optionSpecifier. as it controls the browser depends on the server.
from the OS level.
Selenium WebDriver It supports HtmlUnit. It does not support HtmlUnit.
Selenium WebDriver is a tool that automates the testing of Web WebDriver is faster, as it It is slower, as it uses
applications and is popularly known as Selenium 2.0. It is a Web interacts directly with the JavaScript to interact
automation framework that allows you to execute your tests browser. with RC.
against different browsers. WebDriver also enables you to use a Less object-oriented Purely object-oriented and
programming language in creating your test scripts. The following APIs and cannot be used can be used for iPhone/An-
programming languages are supported by Selenium WebDriver: for mobile testing. droid application testing.
1. Java WebDriver is not ready Selenium RC can support
2. .NET to support new brows- new browsers and have
3. PHP ers and does not have a built-in commands.
4. Python built-in command for the
5. Perl automatic generation of
test results.
6. Ruby
A
s part of developing microservices, many of us use us an Admin UI dashboard to administer Spring Boot
the features of Spring Boot along with Spring Cloud. applications. This module crunches the data from Actuator
In the microservices world, we may have many end points, and provides insights about all the registered
Spring Boot applications running on the same or different applications in a single dashboard.
hosts. If we add SpringActuator ( We will demonstrate the Spring Boot admin features in
spring-boot/docs/current/reference/htmlsingle/#production- the following sections.
ready) to the Spring Boot applications, we get a lot of out-of- As a first step, create a Spring Boot application that will
the-box end points to monitor and interact with applications. be a Spring Boot Admin Server module by adding the Maven
The list is given in Table 1. dependencies given below:
The end points given in Table 1 provide a lot of
insights about the Spring Boot application. But if you <dependency>
have many applications running, then monitoring each <groupId>de.codecentric</groupId>
application by hitting the end <artifactId>spring-boot-admin-server</artifactId>
points and inspecting the JSON <version>1.5.1</version>
response is a tedious process. </dependency>
To avoid this hassle, the Code <dependency>
Centric team came up with the <groupId>de.codecentric</groupId>
Spring Boot Admin (. <artifactId>spring-boot-admin-server-ui</artifactId>
com/codecentric/spring-boot- <version>1.5.1</version>
Figure 1: Spring Boot logo admin) module, which provides </dependency>
Table 1
ID Description Sensitive default
actuator Provides a hypermedia-based ‘discovery page’ for the other endpoints. Requires True
Spring HATEOAS to be on the classpath.
auditevents Exposes audit events information for the current application. True
autoconfig Displays an auto-configuration report showing all auto-configuration candidates True
and the reason why they ‘were’ or ‘were not’ applied.
beans Displays a complete list of all the Spring Beans in your application. True
configprops Displays a collated list of all @ConfigurationProperties. True
dump Performs a thread dump. True
env Exposes properties from Spring’s ConfigurableEnvironment. True
flyway Shows any Flyway database migrations that have been applied. True
health Shows application health information (when the application is secure, a simple False
‘status’ when accessed over an unauthenticated connection or full message
details when authenticated).
info Displays arbitrary application information. False
loggers Shows and modifies the configuration of loggers in the application. True
liquibase Shows any Liquibase database migrations that have been applied. True
metrics Shows ‘metrics’ information for the current application. True
mappings Displays a collated list of all @RequestMapping paths. True
shutdown Allows the application to be gracefully shut down (not enabled by default). True
trace Displays trace information (by default, the last 100 HTTP requests). True
Add the Spring Boot Admin Server configuration by adding throws Exception {
@EnableAdminServer to your configuration, as follows: // Page with login form is served as /login.html and does
a POST on /login
package org.samrttechie; http.formLogin().loginPage(“/login.
html”).loginProcessingUrl(“/login”).permitAll();
import org.springframework.boot.SpringApplication; // The UI does a POST on /logout on logout
import org.springframework.boot.autoconfigure. http.logout().logoutUrl(“/logout”);
SpringBootApplication; // The ui currently doesn’t support csrf
import org.springframework.context.annotation.Configuration; http.csrf().disable();
import org.springframework.security.config.annotation.web.
builders.HttpSecurity; // Requests for the login page and the static
import org.springframework.security.config.annotation.web. assets are allowed
configuration.WebSecurityConfigurerAdapter; http.authorizeRequests()
.antMatchers(“/login.html”, “/**/*.
import de.codecentric.boot.admin.config.EnableAdminServer; css”, “/img/**”, “/third-party/**”)
.permitAll();
@EnableAdminServer // ... and any other request needs to be authorized
@Configuration http.authorizeRequests().
@SpringBootApplication antMatchers(“/**”).authenticated();
public class SpringBootAdminApplication {
// Enable so that the clients can
public static void main(String[] args) { authenticate via HTTP basic for registering
SpringApplication. http.httpBasic();
run(SpringBootAdminApplication.class, args); }
} }
// end::configuration-spring-security[]
@Configuration
public static class SecurityConfig extends }
WebSecurityConfigurerAdapter {
@Override Let us create more Spring Boot applications to monitor
protected void configure(HttpSecurity http) through the Spring Boot Admin Server created in the above
artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</
artifactId>
</dependency>
Figure 2: Admin server UI Add the property given below to the application.
properties file. This property tells us where the Spring Boot
Admin Server is running. Hence, the clients will register
with the server.
spring.boot.admin.url=
spring.boot.admin.notify.slack.webhook-url=.
Figure 3: Detailed view of Spring Boot Admin slack.com/services/T8787879tttr/B5UM0989988L/0000990999VD1hV
t7Go1eL //Slack Webhook URL of a channel
steps. All Spring Boot applications that we now create will act spring.boot.admin.notify.slack.message=”*#{application.
as Spring Boot Admin clients. To make the application an admin name}* is *#{to.status}*” //Message to appear in the
client, add the dependency given below along with the actuator channel
dependency. In this demo, I have created three applications:
Eureka Server, Customer Service and Order Service. Since we are managing all the applications with the
Spring Boot Admin, we need to secure its UI with a login
<dependency> feature. Let us enable the login feature to the Spring Boot
<groupId>de.codecentric</groupId> Admin Server. I am going with basic authentication here.
<artifactId>spring-boot-admin-starter-client</ Add the Maven dependencies give below to the Admin
Server module, as follows: authenticating. Hence, add the properties given below to the
admin client’s application.properties files.
<dependency>
<groupId>org.springframework.boot</groupId> spring.boot.admin.username=admin
<artifactId>spring-boot-starter-security</artifactId> spring.boot.admin.password=admin123
</dependency>
<dependency>
<groupId>de.codecentric</groupId> There are additional UI features like Hystrix and
<artifactId>spring-boot-admin-server-ui-login</ Turbine UI, which we can enable in the dashboard. You
artifactId> can find more details at
<version>1.5.1</version> spring-boot-admin/1.5.1/#_ui_modules. The sample code
</dependency> created for this demonstration is available on.
Add the properties given below to the application.properties com/2013techsmarts/SpringBoot_Admin_Demo.
file.
driver.findElement(By.cssSelector(<css selector>));
Selenium locators
Locator is a command that instructs the Selenium IDE which To locate by XPath, type:
GUI element it needs to work on. Elements are located in
Selenium WebDriver with the help of findElement() and driver.findElement(By.xpath(<xpath>));
findElements() methods provided by the WebDriver and
WebElement class. The findElement() method returns a Limitations of Selenium
WebElement object based on a specified search criteria or Selenium does have some limitations which one needs to be
ends up throwing an exception. The findElements() method aware of. First and foremost, image based testing is not clear-
returns a list of WebElements matching the search criteria. If cut compared to some other commercial tools in the market,
these are not found, it returns an empty list. while the fact that it is open source also means that there is no
The different types of locators are: guaranteed timely support. Another limitation of Selenium is
1. ID that it supports Web applications; therefore, it is not possible
2. Name to automate the testing of non-browser based applications.
3. Link Text Selenium is a power testing framework to conduct
4. CSS Selector functional and regression testing. It is open source software
5. DOM and supports various programming environments, OSs and
6. XPath popular browsers. Selenium WebDriver is used to conduct
batch testing, cross-platform browser testing, data driven
To locate by ID, type: testing, etc. It is also very cost-effective when automating
Web applications; and for the technically inclined, it provides
driver.findElement(By.id(<element ID>)); the power and flexibility to extend its capability many times
over, making it a very credible alternative to other test
To locate by name, type: automation tools in the market.
driver.findElement(By.name(<element name>));
By: Neetesh Mehrotra
The author works in TCS as a systems engineer. His areas of
To locate by Link Text, type:
interest are Java development and automation testing. You can
contact him at mehrotra.neetesh@gmail.com.
driver.findElement(By.linkText(<linktext>));
E
very one of us makes mistakes—some of these might to-use built-in functions for the most frequently performed
be trivial and be ignored, while a few that are serious tasks. A newbie can easily use Splinter and automate any
can’t be ignored. Hence, it’s always a good practice specific process with just a limited knowledge of Python
to verify and validate what we do in order to eliminate scripting. It acts as an easily usable abstraction layer on top of
the possibility of error. So is the case with any software different available automation tools like Selenium and makes
application. The development of a software application is it easy to write automation tests. We can easily automate a
complete only when it’s fully verified and validated (its plethora of tasks such as opening a browser, clicking on any
functionality, performance, user interface, etc). Only then is it specific link or accessing any link, just with one or two lines
ready for release. Carrying out all such validations manually of code using Splinter, while in the case of other open source
is quite time consuming; so, machines perform such repetitive tools like Selenium, this is a long and complex process.
tasks and processes. This is called automation testing. It saves Splinter even allows us to find different elements of any
a lot of time while it reduces the risk of any further error Web application using its different properties like tag name,
caused by human intervention. text or ID value, xpath, etc. Since Splinter is an open source
There are different automation tools and frameworks tool, it’s quite easy to get clarifications on anything that’s not
available, of which Splinter is one. It lets us automate clear. It is supported by a large community. It even has well
different manual tasks and processes associated with any maintained documentation which makes it easy for any newbie
Web-based software application. In a Web application, we to master this tool. Apart from all this, Splinter supports various
need to automate the sequence of different actions performed, inbuilt libraries making the task of automation easier. We can
right from opening the Web browser to checking if it’s easily manage different actions performed on more than one
loading properly for different actions that involve interactions Web window at the same time as well as navigate through the
with the application. Splinter is quite good in automating a history of the page, reload the page, etc.
sequence of actions. It is an open source tool used for testing
different Web applications using Python. The tasks needed Features of Splinter
to be performed by Splinter are written in Python. It lets us 1. Splinter has got one of the simplest APIs among open
automate various browser actions, such as visiting URLs as source tools used for automating different tasks on Web
well as interacting with their different items. It has got easy- applications. This makes it easy to write automated tests
for any Web application. This helps to confirm if the given response is okay or not.
2. It supports different Web drivers for various browsers. 14. It is possible to manipulate the cookies that are using
These drivers are the Firefox Web driver for Mozilla the cookies’ attributes from any browser instance.
Firefox, Chrome’s Web driver for Google Chrome, The cookie’s attribute is actually an instance of a
PhantomJs Web driver for PhantomJs, zope.testbrowser CookieManager class which manipulates cookies, such as
for Zopetest and a remote Web driver for different adding and deleting them.
‘headless’ (with no GUI) testing. 15. One can create new drivers using Splinter. For instance, if
3. Splinter also allows us to find different elements in any we need to create a new Splinter browser, we just need to
Web page by their Xpath, CSS, tag value, name, ID, implement a test case (extending test.base.BaseBrowsertests).
text or value. In case we need more accurate control of All this will be present in a Python file, which will act as a
the Web page or we need to do something more, such driver for any future usage.
Chrome
testing with Selenium as its back-end, and for ‘headless’ Browser-based Firefox
Splinter
testing with zope.testbrowser as its back-end. Selenium HTTP
Remote
A Webdriver Server
5. It has extensive support for using iframes and interacts Test
Code P Web
Driver
Remote
Sauce Labs (IE)
I
with them by just passing the iframe’s name, ID or index Headless
value. There is also Chrome support for various alerts and PhantomJS
zope.testbrowser
prompts in the Splinter 0.4 version.
6. We can easily execute JavaScript in different drivers
which support Splinter. We can even return the result of Figure 1: Flow diagram for Splinter acting as an abstraction layer
the script using an inbuilt method called evaluate_script. (Image source: googleimages.com)
7. Splinter has got the ability to work with AJAX and
asynchronous JavaScript using various inbuilt methods. Drivers supported by Splinter
8. When we use Splinter to work with AJAX and Drivers play a significant role when it comes to any Web
asynchronous JavaScript, it’s a common experience application. In Splinter, a Web driver helps us open that specific
to have some elements which are not present in application whose driver we are using. Different types of
HTML code (since they are created using JavaScript, drivers are supported by Splinter, based on the way any specific
dynamically). In such cases, we can use various inbuilt application is accessed and tested. There are browser based
methods such as is_element_present or is_text_present drivers, which help to open specific browsers; apart from that
for checking the existence of any specific element we have headless drivers, which help in headless testing and
or text. Splinter will actually load the HTML and then there are remote drivers, which help to connect to any Web
the JavaScript in the browser, and the check will be application present on a remote machine. Here is a list of drivers
performed before JavaScript is processed. that are supported by Splinter.
9. The Splinter project has full documentation for its APIs Browser based drivers:
and this is really important when we have to deal with Chrome WebDriver
different third party libraries. Firefox WebDriver
10. We can also easily set up a Splinter development Remote WebDriver
environment. We need to make sure we have some basic Headless drivers
development tools in our machine, before setting up an Chrome WebDriver
entire environment with just one command. Phantomjs WebDriver
11. There is also a provision for creating a new Splinter zope.testbrowser
browser in an easy and simple way. We just need to Django client
implement a test case for this. Flask client
12. Using Splinter, it’s possible to check the HTTP status Remote driver
code that a browser visits. We can use the status_code. Remote WebDriver
is_success method to do the work for us. We can compare
the status code directly. Prerequisites and installation of Splinter
13. Whenever we use the visit method, Splinter actually To install Splinter, Python 2.7 or above should be installed on the
checks if the given response is a success or not, and if it is system. We can download Python from
not, then Splinter raises an HttpResponseError exception. sure you have already set up your development environment.
$ [sudo] pip install splinter #Inbuilt function browser.fill uses the tag name for Email
and Password input box i.e. email and pass respectively to
For installing under-development source-code: identify it
To get Splinter’s latest and best features, just run the browser.fill(‘email’, user_email)
following given set of commands from a terminal: browser.fill(‘pass’, user_pass)
#selects the login button using its id value present on the
$ git clone git://github.com/cobrateam/splinter.git Facebook page to click and log in with the given details
$ cd splinter button = browser.find_by_id(‘u_0_d’)
$ [sudo] python setup.py install button.click()
else:
Writing sample code to automate a process print(“Facebook web application NOT FOUND”)
using Splinter
As already stated, even a newbie without much knowledge Some important built-in functions used in Splinter
of programming can automate any specific task using Table 1 lists some of Splinter’s significant built-in
Splinter. Let’s discover how one can easily make Splinter functions that can be used while automating any process
perform any specific task automatically on a Web for a Web application.
application. The credit for the ease of coding actually goes
to the different inbuilt functions that Splinter possesses. We Setting up the Splinter development environment
just need to incorporate all such built-in functions or library When it comes to programming in Splinter, we have
files with the help of a few lines of code. Additionally, already seen that it’s easier than other open source Web
we need to apply logic while coding to validate different application testing tools. But we need to set up a development
scenarios from different perspectives. Let’s have a look at environment for it, wherein we can easily code or automate
one of the sample code snippets that has been written for a specific process using Splinter. This is not a tough task. We
Splinter. Here, we make use of the name and ID values of just need to make sure that we have some basic development
different elements present on the Web page to identify that tools, library files and a few add-on dependencies on our
specific Web element. machine, which will ultimately help us code in an easier and
Scenario for sample code: Login to a Facebook account better way. We can get the required tools and set up the entire
using the user’s email ID and password. environment using just a few commands.
Lets’ have a look at the different development tools
#imports the Browser library for Splinter required to set up the environment.
from splinter import Browser Basic development tools: If you are using the Mac OS,
install the Xcode tool. It can be downloaded from the Mac
# takes the email address from user as input to login to his/ Application Store (on the Mac OS X Lion) or even from the
her Facebook account Apple website.
user_email = raw_input(“enter users email address “) If you are using a Linux computer, install some of the
# takes the password from user as input to login to his/her basic development libraries and the headers. On Ubuntu, you
Facebook account can easily install all of these using the apt-get command.
user_pass = raw_input(“enter users password “) Given below is the command used for this purpose.
# loads the Firefox browser $ [sudo] apt-get install build-essential python-dev libxml2-
browser= Browser(‘firefox’) dev libxslt1-dev
# stores the URL for Facebook in url variable
url = “” Pip and virtualenv: First of all, we need to make sure
that we have Pip installed in our system, with which we
#navigates to facebook website and load that in the Firefox manage all the Splinter development dependencies. It lets us
browser program our task and makes the system perform any activity
I
f you refer to any Web technology survey to check the <!DOCTYPE HTML>
market share of different server side scripting languages, <html>
you will be surprised to know that PHP is used by an <head>
average 70 per cent of the websites. According to w3techs. <title>Example</title>
com, “PHP is used by 82.7 per cent of all the websites </head>
whose server-side programming language we know.” In the <body>
early stages, even Facebook servers deployed PHP to run <?php
their social networking application. Nevertheless, we are echo “I am PHP script!”;
not concerned about the Web traffic hosted by PHP these ?>
days. Instead, we will delve deep into PHP to understand its </body>
development, its history, its pros and cons and, in the end, </html>
we will have a sneak peek into some of the open source IDEs
which you can use for rapid development. In the above example, you can see how easily PHP can
First, let’s understand what PHP is. It is an abbreviated be embedded inside HTML code just by enclosing it inside
form of ‘Hypertext Pre-processor’. Confused about the <?php and ?> tags, which allows very cool navigation
sequence of the acronym? Actually, the earlier name of PHP between HTML and PHP code. It differs from client-side
was ‘Personal Home Page’ and hence the acronym. It is a scripting languages like JavaScript in that PHP code is
server side programming language mainly used to enhance executed on the server with the help of a PHP interpreter, and
the look and feel of HTML Web pages. A sample PHP code only the resultant HTML is sent to the requester’s computer.
embedded into HTML looks like what follows: Though it can do a variety of tasks, ranging from creating
forms to generating dynamic Web content to sending and called ‘Personal Home Page Tools’ in order to maintain
receiving cookies, yet there are three main areas where PHP his personal Web pages. The succeeding year, these tools
scripts are usually deployed. were released under the name of ‘Personal Home Page/
Server-side scripting: This is the main usage and target Forms Interpreter’ as CGI binaries. They were enabled to
area of PHP. You require a PHP parser, a Web browser and provide support for databases and Web forms. Once they
a Web server to make use of it, and then you will be able were released to the whole world, PHP underwent a series of
to view the PHP output of Web pages on your machine’s developments and modifications, and the result was that the
browser. second version of ‘Personal Home Page/Forms Interpreter’
Command line scripting: PHP scripts can also be run was released in November 1997. Moving on, PHP 3, 4 and 5
without any server or browser but with the help of a PHP were released in 1998, 2000 and 2004, respectively.
parser. This is most suited for tasks that take a lot of Today, the most used version of PHP is PHP 5, with
time — for example, sending newsletters to thousands of approximately 93 per cent of the websites using PHP making
records, taking backups from databases, and transferring use of it, though PHP 7 is also available in the market. In
heavy files from one location to another. 2010, PHP 5.4 came out with Unicode support added to it.
Creating desktop applications: PHP can also be used to
develop desktop based applications with graphical user The pros and cons of PHP
interfaces (GUI). Though it has a lot of pain points, you Before going further into PHP development, let’s take a look
can use PHP-GTK for that, if you want to. PHP-GTK is at some of the advantages and disadvantages of using it in
available as an extension to PHP. Web development.
Fact: Did you know that PHP has a mascot just like Advantages
sports teams? The PHP mascot is a big blue elephant Availability: The biggest advantage of PHP is that it is
named elePHPant. available as open source, due to which one can find a
large developer community for support and help.
PHP and HTML – similar but different Stability: PHP has been in use since 1995 and thus it’s
PHP is often confused with HTML. So to set things straight, quite stable compared to other server side scripting
let’s take a look at how PHP and HTML are different and languages since its source code is open and if any bug is
similar at the same time. As we all know, HTML is a markup found, it can be readily fixed.
language and is the backbone for front-end Web pages. On Extensive libraries: There are thousands of libraries
the other hand, PHP works in the background, on the server, available which enhance the abilities of PHP—for
where HTML is deployed to perform tasks. Together, they are example, PDFs, graphs, Flash movies, etc. PHP makes use
used to make Web pages dynamic. For better understanding, of modules, so you don’t have to write everything from
let’s look at an example where you display some content on a the beginning. You just need to add the required module in
Web page using HTML. Now, if you want to do some back- your code and you are good to go.
end validation on the database, then you will use PHP to do Built-in modules: Using PHP, one can connect to the
it. So both HTML and PHP have different assigned roles and database effortlessly using its built-in modules, which
they complement each other perfectly. Listed below are some drastically reduce the development time and effort of
of the similarities and differences that will make this clear. Web developers.
Cross-platform: PHP is supported on all platforms, so
Similarities Differences you don’t have to worry whether your code written in
Windows OS will work on Linux or not.
Compatible with HTML is used on the front-end
Easy to use: For beginners, learning PHP is easy because
most of the brows- whereas PHP is back-end
ers supporting their technology. of its cool syntax, which is somewhat similar to the C
technologies. programming language, making it even simpler for those
familiar with C.
Can be used on all PHP is a programming language,
operating systems. whereas HTML is called a markup
language and is not included in Disadvantages
the category of programming Not suitable for huge applications: Though PHP has a
languages because it can’t do lot of advantages in Web page development, it still can’t
calculations like ‘1+1=2’. be used to build complicated and huge Web applications
since it does not support modularity and, hence, the
History and development maintenance of the app will be a cumbersome task.
The development of PHP dates back to 1994 when a Danish- Security: Security of data involved in Web pages is
Canadian programmer Rasmus Lerdorf created Perl scripts of paramount concern. The security of PHP can be
compromised due to its open source nature since anyone available packages. It is known for its sleek, feature-
can view its source code and detect bugs in it. So you rich and lightweight interface. It is also supported on all
have to take extra measures to ensure the security of your operating systems. Some of the packages which can be used
Web page if you are dealing with sensitive data. to convert it into an IDE are Sublime PHP Companion,
PHPCS, codIntel, PHPDoc, Simple PHPunit, etc. It can be
Fact: It is estimated that there are approximately 5 million downloaded as open source from sublimetext.com.
PHP developers worldwide, which is a testament to its power. 5. PHP Designer: This IDE is only available for Windows
users. It is very fast and powerful, with full support for
Open source IDEs for PHP development PHP, HTML, JavaScript and CSS. It is used for fast Web
The choice of IDE plays an important role in the development development due to its features like intelligent syntax
of any program or application but this aspect is often highlighting, object-oriented programming, code templates,
neglected. A good and robust IDE comes packed with loads code tips and debug manager, which are all wrapped into
of features and packages to enable rapid development. a sleek and intuitive interface that can also be customised
Automatic code generation, refactoring, organising imports, according to various available themes. It also supports
debugging, identifying dead code and indentation are some various JavaScript frameworks such as JQuery, ExtJs and
of the advantages a powerful IDE can provide. So let’s take Yui. An open source version of it is available and you can
a look at some dominant open source IDEs that can be very read more about it on its official website.
useful in PHP development. 6. NuSphere PHP IDE: PHpED is the IDE developed by
1. NetBeans: Most of you must be aware of NetBeans NuSphere, a Nevada based company which entered the
in Java development but it can also be used for PHP market way back in 2001. The current available version of
development. The biggest advantage of NetBeans is that it PHpED is 18.0 which provides support for PHP 7.0 and
supports many languages like English, Chinese, Japanese, almost all PHP frameworks. This tool also has the ability
etc, and can be installed smoothly on any operating to run unit tests for the developed projects and comes
system. Some of the features that differentiate it from the packaged with the support for all Web based technologies.
rest are smart code completion, refactoring, try/catch code You can download PHpED from NuSphere’s website
completion and formatting. It also has the capability to.
configure various PHP frameworks like Smarty, Doctrine, 7. Codelobster: Codelobster also provides a free IDE for PHP
etc. You can download it from netbeans.org. development. Though it is not used too often, it is catching
2. Eclipse: Eclipse tops the list of popular IDEs. If you up fast. By downloading the free version, you get support
have worked with Eclipse earlier, then you will feel at for PHP, JS, HTML and CSS. It can be integrated with
home using Eclipse PDT for PHP development. It can be various frameworks such as Drupal, WordPress, Symfony
downloaded from eclipse.org/pdt. Some of its features and Yii. You can download it from.
are syntax highlighting, debugging, code templates,
validating syntax and easy code management through Writing the first PHP program
Windows Explorer. It is a cross-platform IDE and works Having read about PHP, its history and various IDEs, let’s
on Windows, Linux and Mac OS. Since it is developed in write our first PHP program and run it using XAMPP. Though
Java, you must have it installed in your machine. there is no official information about the full form of XAMPP,
3. PHPStorm: PHPStorm, developed by JetBrains (the it is usually assumed to stand for cross-platform (X),
same company that developed IntelliJ IDEA for Java), is Apache (A), MariaDB (M), PHP (P) and Perl (P). XAMPP
mainly used for professional purposes but is also available is an open source, widely used Web server developed by
licence-free for students, teachers and open source apachefriends.org, which can be used to create a local HTTP
projects. It has the most up-to-date set of features for rapid server on machines with a few clicks. We will also be using
development since it provides support for leading front- it in our tutorial Figure 1: Apache service started on XAMPP control panel
<!DOCTYPE html>
<html> Addition of Two Number
<head>
Number 1:
<title>Sum</title>
</head> Number 2:
<body>
<h3>Addition Of Two Numbers</h2>
CALCULATE SUM
<form>
<div>Number 1:</div> Figure 3: PHP program when run on the browser
<input type=”text” name=”num1”/>
<div>Number 2:</div> Now type localhost/ PHPDevelopment and it will
<input type=”text” name=”num2”/> list all the files in your directory on your browser, as
<div><br><input type=”submit” value=”CALCULATE SUM”></ shown in Figure 2.
div><br> Click on AddTwoNumbers.php and you will be directed
</form> to the required page, where you can perform the addition
of two numbers.
<?php Here, you can see that the form has been created using
if (isset($_GET[‘num1’]) && isset($_GET[‘num2’])) { HTML and the corresponding addition of the numbers is
done using PHP. Now start your Web development using
$num1 = $_GET[‘num1’]; PHP. You can also make use of the various frameworks
$num2 = $_GET[‘num2’]; available to simplify development and lessen your coding
$sum = $num1 + $num2; time. Happy coding!
echo “Sum of $num1 and $num2 is $sum”;
} By: Vinayak Vaid
?> The author works as an automation engineer at Infosys Limited,
Pune. He has worked on different testing technologies and
</body> automation tools like QTP, Selenium and Coded UI. He can be
contacted at vinayakvaid91@gmail.com.
</html>
S
crapy is one of the most powerful and popular Python following file structure:
frameworks for crawling websites and extracting
structured data useful for applications like data “scrapy_first/
analysis, historical archival, knowledge processing, etc. -scrapy.cfg
To work with Scrapy, you need to have Python installed on scrapy_first/
your system. Python can be downloaded from. -__init__.py
- items.py
Installing Scrapy with Pip -pipelines.py
Pip is installed along with Python in the Python/Scripts/folder. -settings.py
To install Scrapy, type the following command: -spiders/
-__init__.py”
pip install scrapy
In the folder structure given above, ‘scrapy_first’ is the
The above command will install Scrapy on your machine root directory of our Scrapy project.
in the Python/Lib/site-packages folder. A spider is a class that describes how a website will be
scraped, how it will be crawled and how data will be extracted
Creating a project from it. The customisation needed to crawl and parse Web
With Scrapy installed, navigate to the folder in which you pages is defined in the spiders.
want to create your project, open cmd and type the command
below to create the Scrapy project: A spiders’s scraping life cycle
1. You start by generating the initial request to crawl the
scrapy startproject scrapy_first first URL obtained by the start_requests() method,
which generates a request for the URLs specified in the
The above command will create a Scrapy project with the start_urls, and parses them using the parse method as a
scrapy startproject myproject This command will create a Scrapy project in the project directory specified; else
[project_dir] with the name of project, if project_dir is not mentioned.
scrapy genspider spider_name This command needs to be run from the root directory of the project, to create a
[domain.com] spider with allowed_domain as domain.com.
Scrapy bench This runs a quick benchmark test, to tell you Scrapy’s maximum possible speed in
crawling Web pages, given your hardware.
scrapy check Checks spider contracts.
scrapy crawl [spider] This command instructs the spider to start crawling the Web pages.
scrapy edit [spider] This command is used to edit the spider using the editor specified in the EDITOR
environment variables or EDITOR setting.
scrapy fetch [url] This command downloads the contents of the URL and stores them in a standard
output file.
scrapy list Lists the available spiders in the project.
scrapy parse [url] This is the default callback used by Scrapy to process downloaded responses,
when their requests don’t specify a callback.
scrapy runspider file_name.py Runs a spider self-contained in a Python file without having to create a project.
scrapy view [url] Opens the URL in the browser as seen by the spider.
callback to get a response. that XMLSpider iterates over nodes and CSVSpider iterates over
2. In the callback, after the parsing is done, either of the rows with the parse_rows() method.
three dicts of content — request object, item object or Having understood the different types of spiders, we are
iterable — is returned. This request will also contain the ready to start writing our first spider. Create a file named
callback and is downloaded by Scrapy. The response is myFirstSpider.py in the spiders folder of our project.
handled by the corresponding callback.
3. In callbacks, parsing of page content is performed using import scrapy
the XPath selectors or any other parser libraries like lxml, class MyfirstspiderSpider(scrapy.Spider):
and items are generated with parsed data. name = “myFirstSpider”
4. The returned items are then persisted into the database allowed_domains = [“opensourceforyou.com”]
or the item pipeline, or written to a file using the start_urls = (
FeedExports service. ‘-
Scrapy is bundled with three kinds of spiders. django-app/’,
BaseSpider: All the spiders must inherit this spider. It is )
the simplest one, responsible for start_urls / start_request()
and calling of the parse method for each resulting response. def parse(self, response):
CrawlSpider: This provides a convenient method for page = response.url.split(“/”)[-2]
crawling links by defining a set of rules. It can be overridden filename = ‘quotes-%s.html’ % page
as per the project’s needs. It supports all the BaseSpider’s with open(filename, ‘wb’) as f:
attributes as well as an additional attribute, ‘rules’, which is a f.write(response.body)
list of one or more rules. self.log(‘Saved file %s’ % filename)
XMLSpider and CSVSpider: XMLSpider iterates over the
XML feeds through a certain node name, whereas CSVSpider In the above code, the following attributes have been defined:
is used to crawl CSV feed. The difference between them is 1. name: This is the unique name given to the spider in the project.
2. allowed_domains: This is the base address of the URLs define the item Fields in our project’s items.py file. Add the
that the spider is allowed to crawl. following lines to it:
3. Start_requests(): The spider begins to crawl on the
requests returned by this method. It is called when the title=item.Field()
spider is opened for scraping. url=item.Field()
4. Parse(): This handles the responses downloaded for
each request made. It is responsible for processing the Our code will look like what’s shown in Figure:
Items: Items are used to collect the scraped data. They Now run the spider and our output will look like
are regular Python dicts. Before using an item we need to what’s shown in Figure 2.
We can find data.xml in our project’s root folder, as shown from scrapy.mail import MailSender
in Figure 3. mailer=MailSender()
mailer.send(to=[‘abc@xyz.com’],subject=”Test Subject ”
,body=”Test Body”, cc=[‘cc@abc.com’])
T
he term ‘Web application’ or ‘Web app’ is often Types of testing
confused with ‘website’. So let’s get that doubt cleared 1. Functional testing: Functional testing is a superset
—a Web application is a computer app that is hosted validating all those features and functionalities that the
on a website. A website has some fixed content while a Web application is meant to perform. It includes testing the
application performs various definite actions based on the business logic around the set rules. Listed below are some
users’ inputs and actions. of the common checkpoints:
Tests links to a page from external pages.
Web application testing Validates the response to a form submission.
Web application testing involves all those activities that Checks, creates, reads, updates, deletes (CRUD) tasks.
software testers perform to certify a Web app. This testing Verifies that the data retrieved is correct.
has its own set of criteria and checkpoints, based on the Identifies database connectivity and query errors.
development model, to decide whether the actions are part 2. Browser compatibility testing: Because of the
of expected behaviour or not. availability of cross-platform browser versions, it
has become necessary to validate if the application guidelines, including fonts, frames and borders.
is supported on other browser versions without Checks that images load correctly and in their proper size.
compatibility issues. If the application is not behaving With the increasing need to analyse the performance
properly on certain browsers, it is good to mention the of your Web app, it is a good idea to evaluate some of the
supported versions to avoid customer complaints. Below popular open source performance testing tools.
are some of the common checkpoints:
Checks browser rendering of your application’s user interface. Why choose open source performance
Checks the browser’s security settings for cross-domain test tools?
access and hacks. 1. No licensing costs – a commercial load testing tool can
Verifies consistent functioning of the app across multiple really burn a hole in your pocket when you want to test
versions of a browser. with a large number of virtual users.
Checks user interface rendering on different-sized mobile 2. Generates (almost) an infinite amount of load on the Web
device screens, including screen rotation. app without charging users any additional licensing costs.
Verifies that the application operates correctly when the The only limitation would be the resources available.
device moves in and out of the range of network services. 3. Enables you to create your own plugins to extend the
3. Performance testing: Performance testing focuses analysis and reporting capabilities.
on checking how an application behaves under extra 4. Integrates with other open source and commercial tools to
load, which refers to the number of users accessing drive end-to-end test cycles.
the application simultaneously. It is good to see which
particular feature is breaking down under the given load. Popular open source Web application
Listed below are some of the common checkpoints: test tools
Checks the server’s response to the browser form Licensed tools have their own benefits but open source always
submit requests. stands out because of the ease of use. Here are some popular
Identifies changes in performance over a period of time. open source Web app test tools that are easily available and
Tests for functions that stop working at higher loads. simple to use as well.
Identifies how an application functions after a system
crash or component failure. 1. JMeter: Load and performance tester
Identifies forms and links that operate differently under JMeter is a pure Java desktop application designed to
higher loads. load-test functional behaviour and measure performance.
4. Security testing: Securing user data is a critical task and It can be used to test performance both on static and
Web apps should not leak data. Testing ensures that the dynamic resources (files, Servlets, Perl scripts, Java objects,
app works only with a valid login, and that after logout, databases and queries, FTP servers and more). It can be
the data remains secure and pressing the ‘back’ key does used to simulate a heavy load on a server, network or object
not resume the session. Given below are some of the to test its strength or to analyse the overall performance
common checkpoints: under different load types. JMeter was originally used for
Checks whether the app operates on certain URLs testing Web and FTP applications. Nowadays, it is used for
without logging. functional tests, database server tests, etc.
Tests basic authentication using false user names and
password credentials. The pros of JMeter
Tests if the app functions correctly upon invalid URL A very lightweight tool that can be installed easily.
attribute values. As it is an open source tool, you need not be worried
Checks how the app functions with invalid input fields, about the licence.
including text fields. There are multiple plugins that are available in the market Figure 1: JMeter
and can be installed easily, according to requirements. Capybara uses the same DSL to drive a variety of
Offers caching and offline analysis/replaying of test results. browsers and headless drivers.
Seleni- GitHub project, Test automation Free use, Integrated into ALM, Ma- Adobe Flash, Ajax,
umHQ Google Code framework; open source ven; standalone applica- .NET, DOM, Java GUI,
Projects, Testing tool tion; Web based Android apps, Silver-
SeleniumHQ light, CSS, HTML, HTTP,
Xpath
Capybara GitHub project Test automation Free use, COM API; Tool extension; Web, Web services
framework; open source Web based
Testing tool
Sahi Pro Tyto Software Test automation Commercial, trial Command Adobe Flex, Ajax, Java,
framework; line PHP, RubyOnRails,
Testing tool HTTPS, JavaScript
of the enhanced features like test distribution and report and a sophisticated analytics dashboard. WebLOAD
customisation. Sahi runs as a proxy server; the proxy has built-in flexibility, allowing QA and DevOps teams
settings are configured to point to Sahi’s proxy and then to create complex load testing scenarios thanks to
inject JavaScript event handlers into Web pages. native Java scripting. WebLOAD supports hundreds
of technologies – from Web protocols and enterprise
The pros of Sahi applications to network and server technologies.
Sahi can achieve most of the automation with the
available functions and variables. It has all the inbuilt The pros of WebLOAD
APIs required for complex tasks. Sahi also has multi- It has native JavaScript scripting.
browser support. UI wizards enhance the script.
It does not require additional tools to run and execute the It supports many technologies.
tests. All the tests run from the inbuilt Sahi Controller. It offers easy-to-reach customer support.
OpenShift is a Kubernetes based container application, designed for container based software
deployment and management. It is an application development and hosting platform, which
automates management and enables the developer to focus on the app itself.
T
he increase in the volume, velocity and the variety Features of Red Hat OpenShift
of data from multiple channels demands high Red Hat OpenShift is one of the leading cloud services
performance computing resources that can process providers in the PaaS (Platform as a Service) paradigm. It
heterogeneous Big Data. It is not always possible to purchase provides multiple platforms to cloud users with the flexibility
costly computing resources like high performance multi- to develop, deploy and execute applications on the cloud.
core processors with supercomputing powers, huge memory OpenShift has high performance data centres with enormous
devices and related technologies to process, visualise and processing power to work with different programming
make predictions on the datasets related to live streaming languages, which include Java, PHP, Ruby, Python, Node.js,
and real-time supercomputing applications. To cope with Perl, Jenkins Server, Ghost, Go and many others.
and work with such technologies, cloud services are used, A beginner can use the Free Tier of Red Hat OpenShift
whereby computing resources can be hired on demand and for the development, deployment and execution of new
billed for as per usage. cloud apps on the online platform provided by it. Any of
There are a number of cloud services providers in the the programming languages mentioned can be used for the
global market with different delivery models including development of apps with real-time implementation.
Infrastructure-as-a-Service (IaaS), Platform-as-a-Service
(PaaS) and Software-as-a-Service (SaaS). Nowadays, Developing PHP research based
there are some new keywords in the cloud delivery space, Web applications
like Network-as-a-Service (NaaS), Database-as-a-Service OpenShift provides multiple programming language options
(DBaaS), Testing-as-a-Service (TaaS) and many others. Each to cloud users for the development of apps. With each
of these cloud delivery approaches has different resources, programming language, OpenShift delivers multiple versions
which are used for different applications. so that the compatibility issues can be avoided at later stages.
Figure 3: Starter and pro plans for cloud users on OpenShift the commands that should be executed on the local
command prompt or in the Linux Shell.
<?php
error_reporting(0);
require_once(‘TwitterAPIExchange.php’);
Figure 4: Dashboard of OpenShift to create new applications $settings = array(
‘oauth_access_token’ => “XXXXXXXXXXXXXXXXXXXXXXXXX”,
The cloud applications can be uploaded using ‘oauth_access_token_secret’ => “
mapping with GIT via a local command prompt XXXXXXXXXXXXXXXXXXXXXXXXX “,
(Windows CMD or Linux Terminal). OpenShift specifies ‘consumer_key’ => “ XXXXXXXXXXXXXXXXXXXXXXXXX “,
Figure 10: Committing the changes as a permanent write operation on the cloud
Figure 8: Mapping of the command prompt with GIT to upload the code
on the live cloud
T
he World Wide Web has evolved into the primary of Web pages is provided by Yahoo (.
channel to access both information and services in the com/performance/rules.html). There are other informative
digital era. Though network speed has increased many resources, too, such as BrowserDiet (
times over, it is still very important to follow best practices en/#html). Various other factors that contribute to Web page
when designing and developing Web pages to provide optimal optimisation are shown in Figure 1.
user experiences. Visitors of Web pages/applications expect
the page to load as quickly as possible, irrespective of the Content optimisation
speed of their network or the capability of their device. When responding to end user requests, the most time is taken
Along with quick loading, another important parameter is up by the downloading of components such as images, scripts,
to make Web applications more responsive. If a page doesn’t Flash and style sheets.
meet these two criteria, then users generally move out of it The greater the number of HTTP requests, the more the
and look for better alternatives. So, both from the technical time required for the page to load and its responsiveness
and economical perspectives, it becomes very important to lessens. A critical mechanism to reduce the number of
optimise the responsiveness of Web pages. HTTP requests is to reduce the number of components in
Optimisation cannot be thought of just as an add-on after the Web page. This may be achieved by combining several
completing the design of the page. If certain optimisation components. For example, all scripts can be combined,
practices are followed during each stage of Web page many CSS style sheets can be merged together, etc.
development, these will certainly result in a better performance. Minimising the DNS lookup is another important factor
This article explores some of these best practices to optimise in optimisation. The primary role of Domain Name
the performance of the Web page/application. Systems is the mapping of human readable domain names
Web page optimisation is an active research domain in to IP addresses. DNS lookups generally take somewhere
which there are contributions from so many research groups. between 20 and 120 milliseconds. Minimising the number
An easy-to-use Web resource to start with the optimisation of unique host names will reduce the number of DNS
Table 1
.icon-test { background-image: url(‘data:image/png;bas
e64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1B
Tool Description
MVEUAAACnej3aAAAAAXRSTlMAQObYZgAAAApJRE FUCNdjYAAAAAIAAeIhvDM Apache This load testing tool is popular in the
AAAAASUVORK5CYII%3D’); } JMeter Java community.
Locust This load testing tool can be used to
specify user behaviour with Python.
The capability to handle millions of
Images generally contain data that are not required in Web
simultaneous user requests can be
usage. For example, the EXIF metadata can be stripped
tested with this tool.
before uploading to the server.
There are many tools to help you optimise images, such Wrk This is an HTTP benchmarking tool.
as TinyPNG, Compressor.io, etc. There are command line HTTPerf Different types of HTTP workloads shall
based tools also, such as jpegtran, imgopt, etc. be generated and tested. There are
various ports available for HTTPerf:
Performance analysis tools • HTTPerf.rb: Ruby interface
There are many tools available to analyse the performance of • HTTPerf.py: Python Port
Web pages. Some of these tools are illustrated in Figure 2. • HTTPerf.js: JavaScript Port
• Gohttperf: Go port
There are component-specific tools, too. For example, for
benchmarking JavaScript, the following tools may be used:
JSPerf Benchmarking Web servers
Benchmark.js Benchmarking of Web servers is an important mechanism in
JSlitmus Web page/application optimisation. Table 1 provides a sample
Matcha list of tools available for benchmarking Web servers.
Memory-stats.js The Web optimisation domain is really huge. This
For PHP, tools such as PHPench and php-bench could be article has just provided a few start-up pointers, using which
harnessed. interested developers can proceed further in understanding the
advanced technicalities of the topic.
Minifiers
As stated earlier, minifying is one of the optimisation References
techniques, for which there are many tools. For HTML, the
[1] Yahoo Best Practices:
following Minifiers could be tried out:
performance/rules.html
HTMLCompressor [2] Browser Diet:
HTMLMinifier [3]-
HTML_press wpo#analyzers
[4]
Minimize
Tools#optimize-your-images
Some of the tools used for Minifying JavaScript and CSS
are listed below:
Uglifyjs2 By: Dr K.S. Kuppusamy
CSSmin.js The author is assistant professor of computer science,
Clean-css School of Engineering and Technology at Pondicherry Central
University. He has vast experience in teaching and research
JShrink
(in academia and industry). He can be reached via mail at
JSCompress kskuppu@gmail.com.
YUI Compressor
T
he digital marketing world has early stage, which is when we started The prime reason behind the
been ruled by the electronic direct considering building PushEngage and mountainous growth of PushEngage is
mailer (eDM) for quite some time. went on to create an automated marketing the ease of its deployment on any website.
But the eDM is now making way for push platform for browser-based push Local search site AskLaila, which receives
notifications. Be it an e-commerce site or notifications, available to all,” recalls Ravi over a million monthly unique visits, claims
an online news publication, Web masters Trivedi, founder and CEO, PushEngage. that notifications through PushEngage can
are deploying push notifications to grow With a small team of just 10 be deployed in as early as ten minutes.
their traffic as well as enhance their brand. employees, Trivedi’s PushEngage The service has also helped the company
But what does a push notification provider handles marketing automation through retain its users. “With PushEngage
deploy to serve a bulk of notifications? notifications for more than 6,000 clients notifications, we have been able to reach
Well, it is usually an open source solution! around the world. The total client base out to users who are not active on the site
Bengaluru-based PushEngage is sends over 20 million notifications on and provide them with helpful offers or
among the few early adopters of push a daily basis. All of that comes from 40 information,” says Nitin Agrawal, director
notifications. The company had built an servers that run in the cloud, and use of engineering, Asklaila.com.
in-house product to test the success rate a mix of proprietary and open source
of notifications circulated over-the-air solutions at the back-end. Bringing community offerings
at the time when Google offered the to the mainstream
A 10-member team handles more
same support to Chrome in April 2015. Trivedi tells Open Source For You that
than 6,000 global clients to serve
The initial results were strong enough to while his company had initially chosen
over 20 million push notifications on
commercialise the product. a daily basis! components that helped to scale better,
“We saw robust results even at the along with a faster development time,
I
nterpreted languages often have weakly typed variables 1983. A book titled ‘The C++ Programming Language’, first
which don’t require prior declaration. The additional benefit published in 1985 by Stroustrup himself, and its subsequent
of weakly typed variables is that they can be used to hold editions became the de facto standard for C++ until 1998, when
different types of data. For example, the same variable can the language was standardised by the International Standards
hold an integer, a character, or a string. Due to these qualities, Organization (ISO) and the International Electrotechnical
scripts written in such languages are often very compact. But Commission (IEC) as ISO/IEC 14882:1998, informally called
this is not the case with compiled languages, for which you C++98. The next three standards of C++ are informally called
need a lot of initialisation; and with strongly typed variables, C++03, C++11 and C++14. Hopefully, by the time this article
the code is often longer. Even if the regular expression syntax gets published, the latest standard of C++, informally called
for interpreted and compiled languages is the same, how they C++17, would have been released and will have some major
are used in real programs is different. So, I believe it is time changes to C++. After this, the next big changes in C++ will
to discuss regular expressions in compiled languages. In this take place with a newer standard, informally known as C++20,
article, I will discuss the regular expression syntax of C++. which is set to be released in 2020.
The first three standards of C++, namely the de facto
Standards of C++ standard of C++, C++98 and C++03, do not have any inbuilt
People often fail to notice the fact that programming mechanism for handling regular expressions. Things changed
languages like C and C++ have different standards. This with C++11 when native support for regular expressions was
is quite unlike languages like Perl and Python, for which added with the help of a new header file called <regex>. In
the use of regular expressions is highly warranted due to fact, the support for regular expressions was one of the most
the very nature of these programming languages (they are important changes brought in by this standard. C++14 also
scripting languages widely used for text processing and Web has provision for native support of regular expressions, and it
application development). is highly unlikely that C++17 or any future standards of C++
For a language like C++, heavily used for high- will quash the support for handling regular expressions. One
performance computing applications, system programming, problem we might face in this regard is that the academic
embedded system development, etc, many felt that the community in India mostly revolves around the C++98
inclusion of regular expressions was unnecessary. Many standard, which doesn’t support regular expressions. But this
of the initial standards of C++ didn’t have a natural way is just a personal opinion and I don’t have any documented
for handling regular expressions. I will briefly discuss the evidence to prove my statement.
different standards of C++ and which among them has
support for regular expressions. The C++11 standard
C++ was invented by Bjarne Stroustrup in 1979 and was Unlike C++03 and C++14, for which the changes were
initially known as ‘C with Classes’ and later renamed C++ in minimal, C++11 was a major revision of C++. GCC 5 fully
supports the features of C++11and C++14. The latter has programs. This and all the other C++ programs and text files
become the default standard for GCC 6. There were many used in this article can be downloaded from opensourceforu.
changes made to the core language by the standard C++11. com/article_source_code/September17C++.zip.
The inclusion of a new 64-bit integer data type, called long
long int, is a change made in C++ by the C++11 standard. #include <iostream>
Earlier, C++ only had 32-bit integers called long int. External #include <regex>
templates were also added to C++ by this standard.
Many more changes were made to the core of the C++ using namespace std;
language by the C++11 standard, but the changes were not int main( )
limited to the core alone — the C++ standard library was {
also enhanced by the C++11 standard. Changes were made to char str[ ] = “Open Source For You”;
the C++ standard library in such a way that multiple threads regex pat(“Source”);
could be created very easily. New methods for generating if( regex_search(str,pat) )
pseudo-random numbers were also provided by the C++11 {
standard. A uniform method for computing the return type of cout << “Match Found\n”;
function objects was also included by the C++11 standard. }
Though a lot of changes have been made to the standard else
library in C++11, the one that concerns us the most is the {
inclusion of a new header file called <regex>. cout<<”No Match Found\n”;
}
Regular expressions in C++11 return 0;
In C++, support for regular expressions is achieved by }
making changes to the standard library of C++. The header
file called <regex> is added to the C++ standard library to I’m assuming that the syntax of C is quite well known
support regular expressions. The header file <regex> is also to readers, who will understand the simple C++ programs
available in C++14 and, hence, what we learn for C++11 also we discuss in this article, so no further skills are required.
applies to C++14. There are some additions to the header Now let us study and analyse the program. The first two lines
file <regex> in C++14, which will be discussed later in this #include <iostream> and #include <regex> include the two
article. There are three functions provided by the header header files <iostream> and <regex>. The next line of code
file <regex>. These are regex_match( ), regex_search( ) using namespace std; adds the std namespace to the program
and regex_replace( ). The function regex_match( ) returns a so that cout, cin, etc, can be used without the help of the scope
match only if the match is found at the beginning of a string, resolution operator (::). The line int main( ) declares the only
whereas regex_search( ) searches the entire string for a function in this program, the main( ) function.
match. The function regex_replace( ) not only finds a match, This is one problem we face when programming
but it replaces the matched string with a replacement string. languages like C++ or Java are used. You need to write a lot
All these functions use a regular expression to denote the of code to set up the environment and get things moving.
string to be matched. This is one reason why you should stick with languages like
Other than these three functions, the header file <regex> Perl or Python rather than C++ or Java if your whole aim is
also defines a number of classes like regex, wregex, etc, and to process a text file. But if you are writing system software
a few iterator types like regex_iterator and regex_token_ and want to analyse a system log file, then using regular
iterator. But to simplify and shorten our discussion, I will expressions in C++ is a very good idea.
only cover the class regex and the three functions, regex_ The next line of code char str[ ] = “Open Source For
search( ), regex_match( ) and regex_replace( ). I believe You”; initialises a character array called str[ ] with a string in
it is impossible to discuss all the features of the header file which we will search for a pattern. In this particular case, the
<regex> in a short article like this, but the topics I will cover character array is initialised with the string Open Source For
are a good starting point for any serious C++ programmer to You. If you wish to replace the line of code
catch up with professional users of regular expressions. Now char str[ ] = Open Source For You with string str =
let us see how regular expressions are used in C++ with the “Open Source For You”; and thereby use an object str of
help of a small C++ program. string class of C++ instead of a character array, the program
will still work equally well. Remember that the string class of
A simple C++ program using regular C++ is just an instance of the template class basic_string.
expressions This modified program called string.cc is also available
The code below shows a C++ program called regex1.cc. I for downloading. On execution with the commands g++
am sure you are all familiar with the .cc extension of C++ string.cc and ./a.out, the program string.cc will also behave
this case, the word Source appears as the second word in the
string Open Source For You, and hence no match is found by
the function regex_match( ). Figure 1 shows the output of the
programs regex1.cc and regex2.cc.
kind of regular expression syntax being used in C++. Sometimes while (getline(file, str))
it is better to attack the problem directly than beat around the {
bush. But even then, it is absolutely essential to know the regular if( regex_search(str,pat) )
expression syntax used with C++. Otherwise, this article may {
just be a set of ‘Do-It-Yourself’ instructions. C++11 regular cout << str <<”\n”;
expressions support multiple regular expression styles like }
ECMAScript syntax, AWK script syntax, Grep syntax, etc. }
ECMAScript is a scripting language and JavaScript is return 0;
the most well-known implementation of ECMAScript. The }
syntax used by ECMAScript is not much different from the
other regular expression flavours. There are some minor On execution with the following commands, g++ regex7.cc
differences, though. For example, the notation \d used in Perl and ./a.out the program prints those lines containing numbers
style regular expressions to denote decimal digits is absent alone. Figure 4 shows the output of the program regex7.cc.
in ECMAScript style regular expressions. Instead, a notation Except for the line of code regex pat(“^[[:digit:]]+$”);
like [[:digit:]] is used there. I am not going to point out which defines the regular expression pattern to be searched,
any other such difference but just keep in mind that C++11 there is no difference between the working of the programs
supports multiple regular expression styles and some of the regex5.cc and regex7.cc. The caret symbol ^ is used to denote
styles differ from the others, slightly. that the match should happen at the very beginning and the
dollar symbol $ is used to denote that the match should occur
A practical regular expression for C++ at the end. In the middle, there is the regular expression
Now let us discuss a practical regular expression with which [[:digit:]]+ which implies one or more occurrences of
we can find out some real data rather than finding out ‘strings decimal digits, the same as [0-9]+. So, the given regular
starting with abc and ending with xyz’. Our aim is to identify expression finds a match only if the line of text contains only
those lines that contain only numbers. Consider the text file decimal digits and nothing more. Due to this reason, lines of
file2.txt with the following data to test our regular expressions: text like AA111, 22.22, a1234z, etc, are not selected.
abcxyz
a1234z
111222
123456
aaaaaaa Figure 4: Regular expressions for numbers
zzzzzzz
AA111 Now it is time for us to wind up the discussion. Like the
111 previous two articles in this series, I have covered the use of
2222 regular expressions in a particular programming language as
33333 well as some other aspects of the programming language that
22.22 will affect the usage of regular expressions. In this article, the
BBBB lengthy discussion about the standards of C++ was absolutely
essential, without which you might blindly apply regular
Now consider the program regex7.cc with the expressions on all standards of C++ without considering
following code: the subtle differences between them. The topics on regular
expressions discussed in this article may not be comprehensive
#include <iostream> but they provide an adequate basis for any good C++
#include <string> programmer to build up from. In the next part of this series
#include <fstream> we will discuss the use of regular expressions in yet another
#include <regex> programming language, maybe one that is much used on the
Internet and the World Wide Web.
using namespace std;
By: Deepu Benson
int main() The author has nearly 16 years of programming experience.
{ He is a free software enthusiast and his area of interest
ifstream file(“file2.txt”); is theoretical computer science. The author maintains a
string str; technical blog at
and can be reached at deepumb@hotmail.com.
regex pat(“^[[:digit:]]+$”);
O
pen source game development is generally looked GDevelop
upon as a tech enthusiast’s hobby. Rapid advancements GDevelop is an open source, cross-platform game creator
in technology, combined with the various innovations platform designed for novices. There is no requirement for
being launched every day, have put tech experts and gamers in any sort of programming skills. GDevelop is a great platform
a win-win situation. Open source provides interoperability, high to develop all sorts of 2D and 3D games. It consists of several
quality and good security in game devlopment. Little wonder editors on which games can be created. The list is as follows.
then that open source platforms are already being used for quite Project manager: This displays the open games in the editor,
a few successful and complex games. allowing developers to set and organise the scenes. Users can
The following points highlight some of the advantages of select the scene to be edited and modify the parameters like
the open source gaming platforms. the title, background colour, text, etc. It also gives access to
Better quality and more customised software: With the the image bank editor of the games, and allows the user to
source code being available on open source gaming select the extensions to be utilised by the game.
platforms, professional developers can customise features Image bank editor: This allows the user to manage all
and add varied plugins as per their own requirements, sorts of images via objects. It supports transparency
which is beneficial for game development companies. integrated in the image.
Say good bye to licensing: With completely open source Scene editor: This allows users to organise the scene at
platforms, there is no requirement for any sort of licensing. the start, positioning the object in the scene.
So apart from no licence costs, other issues like tracking and Object editor: This allows the creation of objects to be
monitoring are also avoided. displayed on the stage, like text and 3D box objects. It
Lower cost of hardware: Open source gaming platforms also has the ‘Particle Transmitter’ object, which allows
in Linux involve lower hardware costs compared to developers to use particles in the game with ease.
Windows. With the advantages of easy portability Layer editor: This allows users to manage the interface
and high compression, Linux requires low hardware that remains motionless, while allowing the camera of the
configurations. So, game development costs are lower rest of the game to move or zoom.
and even legacy hardware systems can be used for game Event editor: This allows users to animate the scene,
development. depending on the conditions and actions that will be
Let’s take a look at the top open source gaming performed on the objects of the scene.
development platforms, which give developers numerous The events are compiled by GDevelop in machine code
options to explore and choose from, as per their requirements. — the mechanism is simple and similar to writing C++ code.
Features Features
It comprises various objects which can be used readily — Nice and clean interface: Has a visual editor, a dynamic
text objects, 3D boxes, own customised shapes via Shape scene system, a user friendly content creation interface,
Painter, the particle engine, dynamic lights and shadows, a visual shader editing tool and live editing on mobile
custom collision masks, etc. devices.
Adds behaviours to the objects through the physics Efficient in 2D game design because of a dedicated
engine, pathfinding, top-down movement, platformer 2D engine, custom 2D physics engine, and a flexible
engine, draggable objects and the automation of tasks. kinematic controller.
Offers advanced design features and interfaces through High-end 3D game development by importing animation
the scene editor, multiple layers, the debugger and models from 3DS Max, Maya and Blender; has skeleton
performance profilers. deforms and blend shapes, lighting and shadow mapping,
Other features include HTML 5 support, sound and music HDR rendering, anti-aliasing, etc.
effects, and integration with the joystick and keyboard. Flexible animation engine for games, enabled by the visual
animation editor, frame-based or cut-out animation, custom
Latest version: 4.0.94 transition curves and tweens, and animation tree support.
Official website: Other features include a Python-like scripting language, a
powerful debugger and an easy C++ API.
DRIVING
TECHNOLOGY
INNOVATION &
INVESTMENTS
Colocated
shows
Colocated
shows
Colocated
shows
The Themes
• Profit from IOT • Rapid Prototyping & Production
• Table Top Manufacturing • LEDs & LED Lighting
To get more details on how exhibiting at IEW 2018 can help you achieve your sales & marketing goals,
boxes, labels, menus, buttons and common elements. ActionScript 3 library that is very similar to the traditional
Physics engine: It supports 2D physics engines like Flash architecture. It recreates the Flash display list
BOX2D and Chipmunk. architecture on top of the accelerated graphics hardware.
Audio: It supports sound effects and background music. It is a very compact framework but comprises various
Network support: HTTP with SSL, WebSocket API, packages and classes. The following are the sets of tools that
XMLHttpRequest API, etc. are integrated with Starling for application development:
Display programming: Every object is a display object.
Latest version: 3.15.1 Images and textures
Official website: Dynamic text
Event handling
Animation
Asset management
Special effects
Utilities
Features
It is based on Stage3D and supports multiple platforms
like Android, iOS, Web browsers, OS X, etc.
It has low configuration requirements in terms of CPU,
memory and GPU.
It has lower battery consumption.
Figure 3: Cocos2d-x user interface Has effective object organisation via hierarchical trees,
i.e., parent-child relationship.
Delta Engine Highly powerful and efficient event system using
Delta Engine is an open source 2D and 3D app and game ActionScript.
development engine maintained by Delta Engine company. Supports texture atlases, filters, stencil masks, blend
The applications and games can be developed in an easy modes, Tweens, multi-touch, bitmap fonts and 3D effects.
manner through Visual Studio.net or the Delta engine editor.
Delta Engine supports various languages and frameworks Latest version: 2.2
like C# OpenGL, C# OpenTK, C# GLFW, C# XNA, C# Official website:
SharpDX, C# SlimDX, C# MonoGame, LinuxOpenGL,
MacOpenGL and WebGL. Panda 3D
It supports various platforms like Windows OS, OS X, Panda 3D is an open source framework for rendering and
Linux, Android, Android TV and Linux. developing 3D games using C++ and Python programs.
The entire gaming engine is written in C++ and makes use
Features of automatic wrapper generators to expose the complete
It supports 3D features like 3D model importing, a functionality of the engine in the Python interface. It supports
particle effect editor, etc. OpenGL and DirectX.
Content like images, sounds, music and 3D models is Panda 3D includes various tools like scene graph
saved directly using the Delta engine. browsing, performance monitoring, animation optimisers and
Supports physical simulation; most code is many more.
interchangeable for both 2D and 3D simulation.
Supports integration of external libraries and frameworks Features
like the 2D Sprite animation library, Spine. Hassle-free installation and supports Windows, OS X and
App Builder tool integrated in the editor supports Linux. No need for any sort of compilation.
building, deployment and launching of apps on a mobile Full Python integration and highly optimised via C++.
device. Comes with various OpenGL and DirectX features like
GLSL, a powerful interface between shaders and engine,
Latest version: 0.9.11 and supports render-to-texture and multiple render targets.
Official website: Other features include shader generation, 3D pipeline,
support for OpenAL Audio Engine, FMOD Audio Engine
Starling and Miles Audio Engine.
Starling is an open source 2D game development framework Has support for the Bullet physics engine, ODE physical
that supports both mobile and desktop platforms. It is a pure engine and PhysX physics engine.
I
n 2008, the number of connected devices in operation divided into three constituents.
exceeded the number of humans connected to the Internet. It 1. The hardware: This makes up the ‘things’ part of IoT
is estimated that by 2025 there will be more than 50 billion and usually has a small microcontroller with sensors/
connected devices generating a revenue of US$ 11 trillion. actuators and firmware running on it, which is responsible
Though the term, the Internet of Things or IoT, was first coined for how it functions. A good example of this would
back in 1999, the buzzword has started becoming a feasible be a smart fitness tracker with, say, an ARM Cortex
reality in recent years. As we can see, the consumer electronics M4 microcontroller and an Inertial Measurement Unit
market is already flooded with smart and connected LED bulbs, (accelerometers or gyroscopes) sending data to your
home automation solutions and intelligent vehicles. Meanwhile, smartphone via Bluetooth.
the Do-It-Yourself (DIY) hobbyist sector is seeing ultra-low 2. The software: Firmware running on the device, mobile
power and high performance SoCs with built-in Wi-Fi, LoRa or applications, cloud applications, databases, device
Bluetooth communication features. management/implementation, the frontend to display data or
The prices of radio chips are now as low as US$ 5 and there an algorithm which gives intelligence to your IoT project—
are tons of new products, unimaginable before but now a reality, all come under the software portion of the IoT stack.
as was seen at this year’s Consumer Electronics Show (CES),
Las Vegas and Mobile World Congress (MWC), Barcelona.
Products like a smart toothbrush that learns your brushing habits,
connected drones that can follow and record you while you are
in the middle of an adventurous moment like river rafting, or a
simple over-the-air (OTA) software update that can turn your
car into a smart self-driving vehicle. With IoT and artificial
intelligence backing it up, the possibilities are endless.
3. The cloud: The ability to stream and store data over is overall a lightweight protocol that runs on embedded
the Internet, visualise it in a Web browser and control devices and mobile platforms, while connecting to highly
the device remotely from any part of the world is all scalable enterprise and Web servers over wired or wireless
because of the cloud, which virtually makes the data networks. It is useful for connections with remote embedded
available anytime, anywhere. systems, where a small code footprint is required and/
There are innumerable ways to get into the IoT space, right or network bandwidth is at a premium or connectivity is
away. In this article, I’ll talk about communication protocols unpredictable. It is also ideal for mobile applications that
for the IoT space, which can be used for communication require a small size, low power usage, minimised data
between machines or between a machine and server. Due packets, and efficient distribution of information to one or
to constraints in processing capabilities and the low power many receivers. It is an ISO standard (ISO/IEC PRF 20922)
requirements of IoT devices (which are generally meant to be protocol. The good performance and reliability of MQTT is
deployed in environments with constant battery power) with demonstrated by Facebook Messenger, Amazon IoT (AWS-
limited bandwidth capabilities, a need was felt for dedicated IoT), IBM Node-Red, etc—organisations that are using it to
standards and protocols especially designed for IoT. Since serve millions of people daily.
those who manufacture IoT devices and those who create the An MQTT-SN or MQTT sensor network allows you to use
IoT platforms are different, this required industry standards and MQTT over a wireless sensor network, which is not generally
protocols that were not high on power consumption, bandwidth a TCP/IP based model. The MQTT broker can be run locally or
usage, or processing power and could be adopted easily by all deployed on the cloud. It is further enhanced with features like
IoT players—hardware manufacturers, software developers or user name/password authentication, encryption using Transport
cloud solutions/service providers. Layer Security (TLS) and Quality of Service (QoS).
When developing and deploying an IoT project, it’s MQTT implementation: MQTT can be implemented with
important to answer questions like: a broker and MQTT clients. The good news is that both can
How do my devices talk to each other or to me? be found open sourced in the Mosquitto package, which is
Do I want the stability of a wired network or the freedom an open source MQTT broker available as a package for
of a wireless one? Linux, OSX or Windows machines. It runs an MQTT broker
What are my constraints? Data rates, battery power or daemon, which listens for MQTT translations on Port 1883 of
poor networks? TCP (by default). To install it on Debian based machines (like
What communication options do I have? Ubuntu 16.04, Raspbian Jessie, etc), simply run the following
command from the terminal:
Enter the world of IoT communications
This section covers a list of IoT communication protocols. # sudo apt-get install mosquito mosquito-clients
1. MQTT (Message Queue Telemetry Transport)
MQTT is my preferred IoT protocol, which I use for This will install and run MQTT broker on your Debian
almost all my IoT automation based Linux machine and provide
projects. It was created about 15 clients the utilities mosquitto_pub and
years back for monitoring remote mosquitto_sub, which can be used to
sensor nodes, and is designed test and use it.
to conserve both power and On the device/client side, Eclipse
memory. It is based on the ‘Publish IoT provides a great open sourced
Subscribe’ communication model, implementation of MQTT and
where a broker is responsible for MQTT-SN version 3.1.1 in the form
relaying messages to MQTT clients. of a library known as Eclipse PAHO,
This allows multiple clients to which is available for almost all
post messages and receive updates modern programming languages like
on different topics from a central C, C++, Java, Python, Arduino, etc,
server known as the MQTT broker. or can be used over WebSockets. For
This is similar to subscribing to a Figure 2: The MQTT model more details or the API reference,
YouTube channel, where you get visit.
notified whenever a new video is posted. The table in Figure 3 compares HTTP and MQTT, clearly
Using MQTT, a connected device can subscribe to any showing why the latter is a winner in the IoT space.
number of topics hosted by an MQTT broker. Whenever a 2. CoAP (Constrained Application Protocol)
different device publishes data on any of those topics, the Constrained Application Protocol (CoAP) is an Internet
server sends out a message to all connected subscribers application protocol for constrained devices (defined in RFC
of those topics, alerting them to the new available data. It 7228). It enables constrained devices to communicate with
#firejail firefox All the fields of the above configuration are self-
explanatory. Now, you can search the IDLE application in
There are many options available along with Firejail. Ubuntu Dash and also lock it to the launcher.
You can get all the details from the manual page of the
command. —Narendra Kangralkar, narendrakangralkar@gmail.com
The output will show all the commands that were $ export PS1=”\e[0;34m[\u@\h \W]\$ \e[m “
executed during the Script session.
Here is a list of colour codes:
— Pritam Nipane, Blue: 0;34
pritamnipane@gmail.com Green: 0;32
Cyan: 0;36
Creating a desktop launcher in Red: 0;31
Ubuntu Unity Purple: 0;35
Unity launchers are actually files stored in your Brown: 0;33
computer with a ‘.desktop’ extension. To create —Rajeeb Senapati,
a desktop launcher, create the .desktop file using Rajeeb.koomar@gmail.com
a text editor and save it under the ~/.local/share/
applications/ directory. For example, given below is
the .desktop file for Python IDLE.
Share Your Linux Recipes!
[bash]$ cat ~/.local/share/applications/idle.desktop The joy of using Linux is in finding ways to get around
problems—take them head on, defeat them! We invite you to
[Desktop Entry]
share your tips and tricks with us for publication in OSFY so
Version=1.0 that they can reach a wider audience. Your tips could be related
Type=Application to administration, programming, troubleshooting or general
Name=IDLE tweaking. Submit them at. The
sender of each published tip will get a T-shirt.
Icon=/media/jarvis/partition/icons/python-icon.png
DVD Of The
Test and secure your applications.
MOnTh
BackBox Linux 5 Live
This distro is designed to be fast, easy to use and provide
a minimal yet complete desktop environment. It is a
penetration testing and security assessment oriented Linux
Dri
ve
Live (64-bit) distribution, which offers a network and systems analysis
5
OM
M
,D
VD
-R
toolkit. It includes some of the most commonly known/
used security and analysis tools, ranging from Web
RA
GB
CD
,1
Tea
P4
m
nts
e-m
a minimal yet complete desktop environment. It is a penetration
me
ail:
uire
fy.in
ended
tended, and sh
Recomm
s unin oul
c, i db
dis
rib
on
ute
terial, if found
d to t
September 2017
community project, supported by a non-profit organisation
nab
atu
re
tio
ec
of
bj Int
o ern
Any t dat e
Note:
On Udemy:
Working
Step with you
2: We pay Graph
anAlgorithms
advance inofPython
₹ 5,000/hour upon
course approval
Building Regression Models in TensorFlow | https://fr.scribd.com/document/375975328/Open-Source-for-You-September-2017 | CC-MAIN-2019-35 | refinedweb | 30,356 | 54.02 |
#include <IpSumSymMatrix.hpp>
Inheritance diagram for Ipopt::SumSymMatrix:
For each term in the we store the matrix and a factor.
Definition at line 24 of file IpSumSymMatrix.hpp.
Constructor, initializing with dimensions of the matrix and the number of terms in the sum.
Destructor.
Default Constructor.
Copy Constructor.
Method for setting term iterm for the sum.
Note that counting of terms starts at 0.
Method for getting term iterm for the sum.
Note that counting of terms starts at 0.
Return the number of terms.umSymMatrix.hpp.
std::vector storing the matrices for each term.
Definition at line 92 of file IpSumSymMatrix.hpp.
Copy of the owner_space as a SumSymMatrixSpace.
Reimplemented from Ipopt::SymMatrix.
Definition at line 95 of file IpSumSymMatrix.hpp. | http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_sum_sym_matrix.html | crawl-003 | refinedweb | 122 | 54.18 |
Hello,
on Android, every time I rotate the screen, the sketch resets / restarts. Is there a way to disable this or should I fix the orientation and use gyroscope to detect rotation?
If this is duplicate, could you send me a link to the other post?
Hello,
on Android, every time I rotate the screen, the sketch resets / restarts. Is there a way to disable this or should I fix the orientation and use gyroscope to detect rotation?
If this is duplicate, could you send me a link to the other post?
Please Post some code, so we can See where the Problem lies.
@FurryNightShade====
that s absolutely normal!
either you fix the orientation (portrait or landscape) in the manifest (you can also do that by code) or you choose to handle yourself what is supposed to happens onConfigurationChanged().
@Lexyth, well, it is any app. Just draw text displaying frameCount and see for yourself.
@akenaton, I’ll take a look at onConfigurationChanged(). If it is too complicated, I’ll just use gyroscope.
I have a workaround but without code i can’t do any more than explain stuff. Try saving your variables to files every second then when your sketch reloads load those files.
This is normal. You can read what happens in the life cycle here
However there are some things going on in Processing for Android that are bugs, I think.
In the code below I use empty life cycle functions but I am not allowed to use onPause(), because if I do and press the home button, and call the app back, no text will appear but only a blue background. If I use the back button it will.
Also it is a good idea to use the orientation function in onCreate(), otherwise the app often crashes.
Another thing is that you need to use super.on Resume in onResume() otherwise the app turns black or crashes.
import android.os.Bundle; void setup() { background(0,0,255); textAlign(CENTER); textSize(45); text("Hello world", width/2, height/2); } public void settings() { fullScreen(); //orientation(LANDSCAPE); } void draw() { } void onCreate(Bundle savedInstanceState) { orientation(LANDSCAPE); } void onStart() { } // void onPause() { // } void onStop() { } void onResume() { super.onResume(); // Is necessary because otherwise screen will turn black } void onRestart() { }
The onPause() bug is now marked as core bug on github.
I could share you guys some ideas, maybe far too complex for beginners.
Try to serialise your drawings in the override method onStop(), for example you can use google Gson library(a library serialise object to string) with android sharedPreference.
In setup method, you can initial these drawings from sharedPreference.
If you want to rotate without lose your drawings, for example you design a drawing app for large screen tablet, this will be the a easy solution.
You are right. But I think most of Processing users are makers, artists , or learning to code. I think that was what Processing was made for. Android is a very complex structure, and personally I want to keep it as simple as possible just with some widgets. I’ve looked into Gson here but I don’t understand how you would like to set it up. Maybe you could post a small example about how to solve the rotation problem. That would be great!
It is not necessarily to use Gson. I said for example.
A simple situation could be, you have a list of objects, these objects contains x,y co-ordinates. In its own draw method it has somethings like circle(x,y,radius).
When you rotate the screen, these objects disappear.
Then you need some techniques to serialise these objects(e.g. Gson) and re-initialise them in setup and call its own draw method.
Then you need understand the lifecycle of android, that’s why I say put serialisation in onStop().
As an old saying goes “Give somebody a fish and you feed them for a day. Teach somebody to fish and you feed them for a lifetime.”
There’s no straight solution for these questions, it all depends on your situation. So I just share the idea here.
OK I got it. You are talking abaut the subject matter the topic started with. Good idea! But it can’t solve the onPause () problem I had in mind.
// alright I made a sketch that it is IMPOSSIBLE
// TO RESET!! If you want it to reset you must
// use something with the time.
// also it will only work in app mode not sketch
// mode. sorry if I’m too descriptive but I
// just got out of a class that had beginners in it
// Hello this is a small app I made // that will not reset when rotated // but it MUST BE RUN AS AN APP! // NOT AS A PREVIEW!!! // Our first line here makes a // variable called orientation. // you must have this line. // right now it does nothing. // it stands for a word but we havent // decided which word yet. String orientation; // our second line here creates a // file name. Im not sure how it // works because i found it on another // forum. but it is required File pfile; // these 2 lines make a variable. // for example if i type "x" the // program thinks i typed 0. // these are the location of the // circle. float x = 0; float y = 0; // these next 2 lines are true or false // variables. for example if the // command is if(touching) then it // will not run because it is false // not true. if the command was // if(!(touching)) then it WOULD run. // the ! means not. boolean DONT_DELETE = false; boolean touching = false; // these lines of code create MULTIPLE // variables. how many? not decided // yet. we will tell it later String[] save; float[] saven; // this creates a set of code that will // only run ONE TIME! such as setting // positions void setup(){ // this is where the hard stuff comes // in. remember the variable we set // earlier that stood for a word? // well now well say WHAT word // this line says if the width of // the screen is greater then the // height the phone must be sideways // so make the word "landscape" if(width > height){ orientation = "landscape"; } // this line is the opposite. if the // height is greater then the width // the phone must be up // set the word to "portrait" if(height > width){ orientation = "portrait"; } // if the file save.txt does NOT // exist (meaning it is the first // time running the program) then // this will run. if(!(fileExists("save.txt"))){ // remember earlier we made one word // stand for multiple variables? // well this says how many. // in this case we make it stand // for 2 variables: saven[0] and // saven[1] saven = new float [2]; // this is the same thing except // instead of numbers it is words. // this stands for 3 words but all // stand for "". the variables are // save[0], save[1], and save[2]. save = new String[3]; } // now for what makes this all work. // if the file save.txt exists then // do whats below. if(fileExists("save.txt")){ // remember anything from here to // the next } will only run if // the file save.txt exists. // normally we would name the file // save.sav but this is for learning // so its save.txt. the reason for // the .sav is so other people wont // mess with it. this at first will // not run because save.txt does not // exist. we will make it later. // remember earlier when we made // multiple variables connected to // one word? well this actually // assigns words or strings to it. // this will load the strings from // the file save.txt save = loadStrings("save.txt"); // this makes saven stand for // the number of strings in save. // if save is 3 lines long then // this stands for 3 numbers // all of which are 0. saven = new float[save.length]; // if the program ran before, it saved // the orientation to save[2]. // now if the orientation is // still the same as last time // then the variables saven[0] and // saven[1] equal save[0] and save[1]. // this makes it impossible to reset // if you want it to reset when closed // simply delete this line. if(orientation == save[2]){ // but save[0] is a word not a number. // in order to turn it into // a number we must use the float() // command. except it only turns // the word 1 into the number 1 // it cannot turn the word one // into the number 1. saven[0] = float(save[0]); saven[1] = float(save[1]); } // this says if the orientation was // flipped or is not the same // as last time then flip x // and y (or saven[1] and saven[0]. if(!(orientation == save[2])){ saven[1] = float(save[0]); saven[0] = float(save[1]); } } // after all the above code this makes // x saven[0] and y saven[1]. x = saven[0]; y = saven[1]; // here we will make the 3rd variable // save[2] equal to whatever the // orientation of the screen is. // we do this at the end so that if // we already ran the program before it // will compare what the orientation // was and what it is. // if we were to run this at the // beggining then it would always be the // same as orientation. save[2] = orientation; } void draw(){ // this basically draws a big rectangle // over the whole screen. its a white // rectangle background(255); // everything below this will be black fill(0); // if the screen is touched // then saven[0] and saven[1] equal // mouse x and y if(touching){ saven[0] = mouseX; saven[1] = mouseY; } // x equals saven[0] // y equals saven[1] x = saven[0]; y = saven[1]; // whatever x and y are put a circle // there ellipse(x,y,200,200); // time to save it. if we save it then // it CANNOT reset. but variables // cannot be saved. only numbers // can. so save[0] equals // str(saven[0]) and same for // save[1]. anything can be turned // into a word but a word cannot // be turned into just anything save[0] = str(saven[0]); save[1] = str(saven[1]); // finally save the strings save to // save.txt saveStrings("save.txt",save); } // this says if screen touched void touchStarted(){ // then touching equals true. because // if you touched the screen then // you are touching it touching = true; } // if you release your finger off // the screen then void touchEnded(){ // touching equals false because // you arent touching it anymore touching = false; } // um.... I dont know what this does // i got it from another // forum. it checks if a file // exists. feel free to use it. // boolean fileExists(String qaz){ pfile = new File(sketchPath(qaz)); if(pfile.exists()){ DONT_DELETE = true; } if(!(pfile.exists())){ DONT_DELETE = false; } return DONT_DELETE; }
// I hope it works for you! | https://discourse.processing.org/t/sketch-resets-on-rotating/5522 | CC-MAIN-2022-27 | refinedweb | 1,782 | 83.46 |
CString In A Nutshell
Intro
I've heard several misconceptions about the use of CStrings and thought it would be beneficial to some of you to clear these up. In this document I will describe how CString works and address 3 key misconceptions:
- Passing CString by value is bad
- Using CString causes memory fragmentation
- CString is slow
Inside CString
class CString { ... LPTSTR m_pchData; // pointer to ref counted string data };
This is the "header" structure of every string:
struct CStringData { long nRefs; // reference count int nDataLength; // length of data int nAllocLength; // length of allocation // TCHAR data[nAllocLength+1] TCHAR* data() // TCHAR* to managed data { return (TCHAR*)(this+1); } // this+1 == ((void*)this)+12 };
Lets say you create a CString object like this:
CString str("hello");
First CString calls CString::AllocBuffer(5). This actually allocates 5 + 1 + 12 bytes (chunk + EOS + CStringData). nAllocLength will be set to 5 as will nDataLength. You might think that nDataLength should be 18, but since the extra 13 bytes are ALWAYS allocated, it's more efficient for CString to leave off those extra 13. In release builds, your strings are allocated in blocks of 64, 128, 256, or 512, this is where nDataLength comes in handy. In the case of our 5 character string, nDataLength would be 64. Using blocks reduces memory fragmentation and speeds up operations like adding. Reduction of memory fragmentation is achieved by the use of CFixedAlloc. This class never actually frees the memory allocated (until it is destroyed or explicitly told to), but returns free'd blocks to it's "free pool", so no memory fragmentation occurs. CFixedAlloc can be found in the MFC source directory in FIXEDALLOC.H and FIXEDALLOC.CPP if your curious. For strings larger than 512 characters, the memory is allocated and freed the same as in debug builds.
nRefs is set to 1
m_pchData is set like this: m_pchData = pData->data(); pData is the block of memory allocated by AllocBuffer and cast to CStringData. So what we get looks like this:
1 5 5 h e l l o \0 ---- ---- ---- - - - - - - <-bytes ^m_pchData
Of course to free the block of memory, CString cannot free m_pchData, but instead frees (BYTE*)GetData(); GetData() returns ((CStringData*)m_pchData)-1. Remember that it's casting the pointer to a 12-byte structure and subtracting one structure from it (or 12 bytes).
Reference Counting
So how does reference counting help speed things up? Whenever you use the copy constructor or the operator=(const CString& stringSrc), the only thing that happens is this:
m_pchData = stringSrc.m_pchData GetData()->nRefs++
If m_pchData had been == stringSrc.m_pchData, nothing at all happens.
So this bit of code is very fast:
void foo(CString strPassed) { } CString str("Hello"); foo(str);
No string copy occurs, and no memory is allocated. A 32-bit value is pushed on the stack, that value is set (strPassed.m_pchData = str.m_pchData), and an integer is incremented (strPassed.GetData()->nRefs++). That's only one operation more than passing an int by value where: A 32-bit value is pushed on the stack, and that value is set. Now granted, it's definetly quite a few more assembly instructions, but that's why we have 500Mhz CPUs, so don't sweat cycles. When it comes to user interfaces, there's no reason to sweat CPU cycles, the computer is capable of executing billions of instructions in a time frame perceivable by a human. Obviously if your doing intensive graphics animation or massive quantities of data manipulation you might wanna look at your inner loops and optimize there.
The reason reference counts are kept is so that CString knows that it's "sharing" a string buffer with another CString object. If foo were to modify strPassed, CString would first allocate a new buffer and copy the string into that buffer (setting it's ref count to 1). Of course if foo never modifies strPassed, the allocation and copy never occur.
Empty Strings
An empty or uninitialized string m_pchData is set to _afxPchNil which looks like this:
-1 0 0 \0 (EOS) ---- ---- ---- - (_afxInitData) ^_afxPchNil
Note that a -1 ref count means that the string is "locked" and so modifying and empty string always results in a new allocation.
Epilogue
Anyhow, that's CString in a nutshell. It's really a fun class to dig into. So if you've ever worried about passing CString objects all over the place, remember that your really essentially only passing a pointer around. It's quite efficient and if you have need to manage dynamic structured data, you might even consider this model.
Please note that this information is accurate as of VC++ 6.0. I've heard that not all of this is true for previous versions of MFC, but I have not personally verified this.
Date Posted: Feb 23,<< | https://www.developer.com/net/cplus/article.php/633441/CString-In-A-Nutshell.htm | CC-MAIN-2018-34 | refinedweb | 798 | 60.85 |
/* * Copyright (c) 1998> #if IOKITSTATS #include <IOKit/IOStatisticsPrivate.h> #endif _> The IOEventSource makes no attempt to maintain the consistency of its internal data across multi-threading. It is assumed that the user of these basic tools will protect the data that these objects represent in some sort of device wide instance lock. For example the IOWorkLoop maintains the event chain by using an IOCommandGate and thus single threading access to its state. <br><br> All subclasses of IOEventSource that wish to perform work on the work-loop thread are expected to implement the checkForWork() member function. As of Mac OS X, 10.7 (Darwin 11), checkForWork is no longer pure virtual, and should not be overridden if there is no work to be done. <br><br> checkForWork() is the key method in this class. It is called by some work-loop when convienient and is expected to evaluate its internal state and determine if an event has occurred. <br><br>; #if IOKITSTATS friend class IOStatistics; #endif, ...); /*! { #if IOKITSTATS struct IOEventSourceCounter *counter; #else void *iokitstatsReserved; #endif }; /*! @var reserved Reserved for future use. (Internal use only) */ ExpansionData *reserved; /*! ); virtual void free( void ) APPLE_KEXT_OVERRIDE; /*! @function checkForWork @abstract Virtual member function used by IOWorkLoop for work scheduling. @discussion This function will be called to request a subclass to check its internal state for any work to do and then to call out the owner/action. If this event source never performs any work (e.g. IOCommandGate), this method should not be overridden. NOTE: This method is no longer declared pure virtual. A default implementation is provided in IOEventSource. @result Return true if this function needs to be called again before all its outstanding events have been processed. */ virtual bool checkForWork(); /*! */ | https://opensource.apple.com/source/xnu/xnu-3247.10.11/iokit/IOKit/IOEventSource.h.auto.html | CC-MAIN-2021-43 | refinedweb | 287 | 58.79 |
BreizhCTF 2019 - Primera sangraCTF URL:
Solves: 2 / Points: 175 / Category: Web
Challenge description
The challenge description gives us a website URL. It says that the webmaster made a mistake and disclosed the password, a common one, and quickly fixed it directly in production.
Indeed, the website is access restricted and only shows a password form. We understand that we have to find the password.
Challenge resolution
Step one: discovery
We quickly try a few trivial passwords on the form, but they do not work. The goal of this kind of challenge is usually not to do an online brute-force (which have the side effect of ruining the challenge for other players… 🤨), so we try something different.
The challenge description gives the hint that the developer fixed it in production. What if he uses a version control system (VCS, such as git/SVN) directly in production to pull the source-code? This is actually very common…
So, we try to access the
/.git/ folder but it returns a 404 not-found error. However, we know that some web servers return this error when requesting folders, and if directory listing is disabled, even if the folder effectively exists! So, we try to access
/.git/conf too and it works! 💡
For your information, there are other similar files you can try accessing in this situation, such as
/.git/index,
/.git/HEAD,
/.git/logs/HEAD, where among binary data we see interesting file names:
Now we know that there is an exposed git repository with interesting files, and we want to obtain it.
Step two: exploitation
When a webserver has directory listing enabled, it is easy to fetch the repository with a recursive download but this is not the case here. Many tools are available for this. We try several here and obtain similar results, except one that goes on an infinite loop, and we will understand later why 😉. Here is a quick list:
For example:
We think that we are finished here, but the content of
secret_password.py is quite disappointing:
secret_password = "REDACTED"
And “REDACTED” is indeed not the correct password (we tried, you never know when you might be lucky 😉). So we read more about this technique of dumping git repositories without directory listing, and we are reminded that everything in git is well organized. There are trees, nodes, objects, and all of them pointing to others, and everything has a hash (like commit IDs, but for internal objects), etc.
Step three: dive into git internals
Some articles show the usage of the
git fsck command to check if a repository is valid: are all objects presents and with a valid integrity? In fact some git dumping tools are based on this command, where it is run repeatedly: it complains about a missing file, then the file is downloaded, and again.
Its result on our copy of the repository is interesting:
# git fsck error: sha1 mismatch for .git/objects/37/4e045ef2ea84be825ead668a69aac28ce7b53e (expected 374e045ef2ea84be825ead668a69aac28ce7b53e) error: 374e045ef2ea84be825ead668a69aac28ce7b53e: object corrupt or missing: .git/objects/37/4e045ef2ea84be825ead668a69aac28ce7b53e Checking object directories: 100% (256/256), done. missing blob 374e045ef2ea84be825ead668a69aac28ce7b53e
Ok, so one of the objects has a wrong hash 🤔 Now we understand why one of the tools goes to an infinite loop as it cannot download a correct version of the file and re-tries again and again. The article “Reading git objects” teaches us to read its compressed content, and obtain the hash, here with the following script:
from hashlib import sha1 import zlib decompressed = zlib.decompress(open('.git/objects/37/4e045ef2ea84be825ead668a69aac28ce7b53e','rb').read()) print decompressed print sha1(decompressed).hexdigest()
blob 49 secret_password = "REDACTED" 2d1873bdd1fcf3724385f3da4d1db117eba3883d
We have the confirmation that the hash differs from the expected one “374e045ef2ea84be825ead668a69aac28ce7b53e”. We guess that the developer changed the password in the git repository, but without a proper method which leads to a file integrity issue!
We note that in
.git/objects there are folders with 2 characters names, which are the first 2 characters of the hashes, then the remaining of the hash in the filename. So the file with hash “374e045ef2ea84be825ead668a69aac28ce7b53e” is stored in the “37/” folder and filename “4e045ef2ea84be825ead668a69aac28ce7b53e”. We also note that the hash is not based only on the actual content of the file, since there is a prefix added by git which is the type of the object (here “blob”), followed by its original size (49 octets, which is less than ‘secret_password = “REDACTED”’), ended with a NULL-byte separator “\x00” (caution as it was not visible in the output of the previous command). This also well explained in the “Deep dive into git: git Objects” article:
Here are a few other results from git commands:
# git log commit 21ff7f561f87c1a682d11dcb2572772e4e1872af (HEAD -> master) Author: ganapati <ganapati@ganapati.com> Date: Tue Sep 25 14:35:47 2018 +0200 First commit # git ls-tree 21ff7f561f87c1a682d11dcb2572772e4e1872af 100644 blob b9a9f0016edfa13722676a5a7764e5e90683bb6e bottle.py 100644 blob a8d36c721525b80c9cb29dac6f06b5acf8c60c2b challenge.py 100644 blob 374e045ef2ea84be825ead668a69aac28ce7b53e secret_password.py
Step four: brute-force and conclude
Now we know the expected hash, and the expected format of the file (we assume that only the password was redacted, and nothing else changed). Now we have to brute-force the redacted password. Let’s remind the challenge description that says that the password is common: we will probably not have to do a complicated brute-force. Our first idea is unsurprisingly to use the rockyou passwords dictionnary.
We cannot use a standard password cracking tool (or actually we do not know how), as the input to hash depends on the length of every tested password.
Therefore, we use the following Python script (totally unoptimized but good enough for the task here):
import hashlib import sys for word in open("/usr/share/wordlists/rockyou.txt").read().splitlines(): content = 'secret_password = "' + word + '"' content = 'blob %d\0%s' % (len(content), content) if hashlib.sha1(content).hexdigest() == "374e045ef2ea84be825ead668a69aac28ce7b53e": print content sys.exit(0)
After a few seconds, it gives us the original password and content of the git object (and therefore of the original file):
blob 49 secret_password = "mhonowa2248116553575515246859"
We confirm this result by using this password on the challenge website and it is accepted 👍
Thank you to @G4N4P4T1 for this interesting challenge!
Bonus
We initially thought that the brute-force would be more difficult and we thought we would need to make use of password rules. How to combine this with our custom Python script?
We can chain John-the-Ripper and the script. First, John is only used to generate password candidates using a dictionnary and rules and outputs them to
stdout, then our script formats it and computes the hash of the whole. We want to pipe both to prevent creating a huge file.
The script is modified as follows to iterate on
stdin candidate passwords:
import hashlib import fileinput import sys for word in fileinput.input(): word = word.strip() content = 'secret_password = "' + word + '"' content = 'blob %d\0%s' % (len(content), content) if hashlib.sha1(content).hexdigest() == "374e045ef2ea84be825ead668a69aac28ce7b53e": print word sys.exit(0)
And run the whole with these commands:
john --wordlist=/usr/share/wordlists/rockyou --rules=all --stdout | python crack2.py
Author:
Clément Notin | @cnotin
Post date: 2019-04-14 | https://tipi-hack.github.io/2019/04/14/breizhctf-19-primera-sangra.html | CC-MAIN-2019-39 | refinedweb | 1,178 | 61.36 |
There was a discussion of variable length argument lists from which I
still have to recover (more about that another time), but there were
also a whole lot of questions raised about other parts of Python's
design. Rather than repeat my responses over and over again each time
someone raises one such issues again, I've written a little "Socratic"
dialogue that tries to explain why things are the way they are.
The main subjects treated in this dialogue are, roughly:
- why Python has both (immutable) tuples and (mutable) lists
- the rationale behind singleton and empty tuples
- why the parentheses in a function call can't be made optional
It also explains why 'print' is a statement and not a function, and
gives some examples of what you can do with None.
<Q> Why does Python have both tuples and lists?
<A> They serve different purposes. Lists can get quite long, they are
generally built incrementally, and therefore have to be mutable.
Tuples on the other hand are generally short and created at once.
<Q> Then why can't tuples be mutable and serve both purposes?
<A> Imagine a graphics class that stores coordinates of graphical
objects as tuples. It may also store rectangles as a tuple of points,
etc. If tuples were mutable, the class would have to store copies of
all points and rectangles it receives, since otherwise a caller who
creates a point variable, passes its value to the graphics class, and
later changes the point for its own use (e.g., to create another
graphical object with slightly different coordinates) might violate
the internal consistency of the graphics class. Note that most
callers woouldn't modify the points, but the graphics class has no way
to tell, so it has to make the copies anyway. (And now imaging the
software is actually layered in such a way that coordinates are passed
down several levels deep...)
<Q> Then why can't lists be made immutable? There are algorithms and
data structures that guarantee O(log N) or O(N log N)
insert/delete/concatenate/access operations, e.g., B-trees.
<A> This was used in ABC for lists and tables, where I helped
implement it. My experiences with this were that the code was
incredibly complex and thus difficult to maintain; the overhead for
small lists (which are in the majority in most programs) was
considerable, and the access time for single elements was O(log N).
<Q> Why are there singleton and empty tuples at all in Python?
Wouldn't it be easier to forbid them? After all special syntax is
used to construct them; this could be removed from the language and
you would have no empty or singleton tuples (like in ABC).
<A> You can also create empty and singleton tuples with slicing
operations.
<Q> But why are slicing operations needed for tuples? ABC doesn't
have them.
Well, *sometimes* it is useful to treat tuples as sequences of
elements, and use subscripting or slice operations on them. E.g.,
posix.stat returns an 11-tuple. Some applications save the first
three elements of such a tuple (mode, inode, device) as a "unique
identifier" for a file. Slicing (e.g., s[0:3]) is a convenient way of
extracting this. But using a slice operation it is easy to construct
a singleton tuple (e.g., s[0:1]).
<Q> Then why can't tuple slices that produce singleton tuples be
forbidden or made to return the element instead, so that s[0:1] is the
same as s[0]?
There are many algorithms that can operate on all types of sequences
(tuples, lists and strings) using only subscripting, slicing and
concatenation, and in general these may construct singleton or empty
sequences, if only as intermediate results. E.g., here's a function
to compute a list of all permutations of a sequence:
def perm(l):
if len(l) <= 1: return [l]
r = []
for i in range(len(l)):
p = perm(l[:i] + l[i+1:])
for x in p: r.append(l[i:i+1] + x)
return r
For example, perm('abc') is ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']
and perm((1, 2)) is [(1, 2), (2, 1)]. The latter constructs the
singletons (1,) and (2,) internally several times.
<Q> But couldn't tuple concatenation be redefined so that if either
argument were a single value, it would be promoted to a tuple? Then
tuple slices of one element could return that element, avoiding a
sinleton tuple.
<A> Then perm((1, 2)) above would yield [3, 3]! Because the final
results are constructing by concatenating the singletons (1,) and
(2,), which would have been replaced by the numbers 1 and 2.
<Q> But what if the concatenation operator were a different symbol
than integer addition, e.g., '++'?
<A> The perm() function would then require the dubious rule that that
1++2 is defined as (1, 2). Now compare this to 'a'++'b' -- this
obviously means 'ab' (since strings can also be concatenated) but then
the type of a++b would be hard to predict by the reader -- will it be
of the same type as a and b, or will it be the type of (a, b)? I'm
sure this would cause lots of surprises.
<Q> What about doing it the other way around then? I.e., singleton
tuples are not automatically degraded to their only element when
created, but when a singleton tuple is the argument of an operation
that requires a plain value, the first and only element of the tuple
is used instead.
<A> Well, the singleton may itself contain a singleton, e.g., ((1,),).
If we degenerate this to its first element we've still got a
singleton, (1,).
<Q> OK, suppose we apply the rule recursively?
<A> It will only confuse for users who wonder whether singletons
actually exist or not. They conveniently vanish in so many places
that when first learning the language you believe they do not actually
exist. In other words, the user's model of what's happening will
likely be one of the models rejected earlier, where singletons are
discarded as soon as they are created. By the time they encounter a
counter-example it will be too late, and they may have written loads
of broken code, like this function to extract the middle element from
a tuple which returns a singleton tuple instead of an element from the
tuple:
def middle(t):
i = len(t) / 2
return t[i:i+1]
The code is broken, but will work for simple examples:
>>> a = 1, 10, 100, 1000, 10000
>>> print middle(a) / 2
50
Only when applied to a tuple containing lists the bug will show up:
>>> p = []
>>> q = [1, 10, 100]
>>> r = range(10)
>>> b = p, q, r
>>> for x in middle(b): print x
[1, 10, 100]
Expected was the same output as from:
>>> for x in q: print x
1
10
100
<Q> OK, singletons are useful, but why does there have to be this ugly
syntax like (1,) to create them? Can't you just scrap that and make
slicing the *only* way to create singletons?
<A> Well, tuples have a representation on output, and as long as the
values it contains are simple enough, you can read a tuple as written
back in (with input() or eval()) and get an object with the same
value. Since singleton tuples exist, they must have a representation
on output, and it is only fair that their representation can also be
read back in. Also the singleton notation makes it easy to play with
singletons to find out how they work...
<Q> OK, I give up on tuples, they are perfect :-) Since you mention
the way tuples are written on output, why are they *always* surrounded
by parentheses? I thought you said that it's not the parentheses but
the comma that makes the tuple...
<A> This makes it easier to write tuples containing tuples. If the
string representation of all object types is syntactically an atom,
the function for writing a tuple (or converting it to a string, which
uses the same rules) needn't know whether to surround the elements it
writes with parentheses or not. (ABC uses the latter strategy, which
leads to more code.)
<Q> Oh, and by the way, why isn't print a function?
<A> Because it's such a heavily-used function that it deserves special
treatment -- now that it is a statement, you don't need to type
parentheses around the argument list. Also, a statement it can use
special syntax to distinguish whether a newline should be appended or
not; if it were a function there would either have to be two functions
(like Pascal's write/writeln) or a special argument value (like "echo
-n" in the Bourne shell).
<Q> Let's shift attention to the function call syntax. Why can't the
parentheses be optional, like in ABC, so I can write sin x instead of
sin(x)?
<A> And how do I call a function without parameters then? Does x=f
have to call f if it happens to be a parameterless function, as is the
case in ABC [which has no function pointers] and also in Algol-68
[which does have function pointers, so it calls f except when x is of
type pointer to function]?
<Q> No, assume there are no parameterless functions, but you can
define a function of one argument that is discarded; you can call it
as either f None or f().
<A> Fair enough, although it's not particularly elegant -- I suppose
if I call it as f(1) the argument also gets discarded? Anyway, what
do you do about the following ambiguity: a[1]. Does this call the
function a with list argument [1], or does it take element 1 of
sequence (list, tuple or string) a? Surely requiring parentheses
there would only be more confusing:
>>> def f x: return len x
>>> a = [1]
>>> f a
1
>>> f [1]
*** TypeError: subscripting unsubscriptable object
<Q> Can't you resolve this ambiguity at run time, like you already do
for the '+' operator or for the call operator x() (which creates a
class instance if x happens to be a class object)?
<A> I think that would be ugly. Also I cannot think of a situation
where the user would ever use the ambiguity in a polymorphic function,
unlike for the other two:
# A function taking either strings or numbers
def f(a, b):
if a > b: return a
else: return a+b
# A function taking either a class or some other function that
# creates an instance
def g(creator):
for i in range(10):
instance = creator()
if instance.acceptable(): return instance
return None
<Q> What other uses are there for None, besides as return value from a
procedure?
<A> Almost all the same uses that a NULL pointer has in C. An
important case it the use of None as an error return value (if for
some reason the error doesn' warrant raising an exception), e.g., this
function:
def openprofile():
for name in ('.profile', '/etc/Profile', '/usr/etc/Profile'):
try:
return open(name, 'r')
except IOError:
pass
return None # No profile -- use default settings
which can be called like this:
f = openprofile()
if f:
<read the profile>
f.close()
Another use is for a class that may want to postpone creation and
initialization of a subcomponent to the first time it is needed.
E.g.:
class C:
def init(self):
self.sub = None
return self
def usesub(self):
if self.sub is None:
self.sub = makesub()
<use self.sub>
None is also useful if a value is required but you aren't interested
in it, e.g., when using the keys of a dictionary to implement a set of
strings:
class Set:
def init(self):
self.dict = {}
return self
def add(self, x):
self.dict[x] = None
def remove(self, x):
if self.dict.has_key(x):
del self.dict[x]
def ismember(self, x):
return self.dict.has_key(x)
# etc.
--Guido van Rossum, CWI, Amsterdam <guido@cwi.nl>
"What a senseless waste of human life" | http://www.python.org/search/hypermail/python-1992/0292.html | crawl-002 | refinedweb | 2,032 | 58.82 |
- sencha package command doing nothing
- Does ItemHighlight even do anything at all in ST 2.1 RC2?
- Issues using the ProxyCache plugin where it doesn't used cached data
- Ext.Loding loading order in 2.1rc2
- Malformed delta errors from Sencha Cmd v3
- Help need to deploy GS into my android device.
- Problem with List Event Listener
- iframe PDF preview goes outside the border
- Get data from remote source and serialized this data in mobile phone.
- Confused on loading panels MVC
- Getting hold of registered listeners in ST
- Leaflet Marker in Sencha 2 Architect or Touch
- Loading Mask set to false event problem in Sencha Touch 2.0.1
- Rounded panels : CSS newb question
- should learn Sencha?
- Inline Data does not load into store
- General Performance Feedback wanted
- best approach for code that's hard to separate into property MVC setup
- Sencha Touch Monthly calender
- Styling charts through CSS/SASS
- Sencha Cmd v3.0.0.230 build. app.js is empty ?
- Sencha Touch list store diable sorting.
- Sencha Code Signing failing for "iPhone Distribution" profile
- Ext.device.Push
- Uncaught TypeError: Cannot use 'in' operator to search for 'xtype'
- Packaging on android, senchaCmdv3
- Custom icon not showing up in device
- radioCls deprecated?!
- Sencha Touch 2 no generate .apk in release mode
- PullRefresh fix for enable/disable
- Rendering an ext checkbox into an itemTpl?
- Sencha Touch 2.1 Getting Started
- How to display sub categories
- No direct function specified for this proxy
- Timestamp at the end of CSS and JS url breaking cache in production?
- How to listen for rendering done?
- No charting included in sencha-touch-2.1.0-commercial.zip
- new to nestedlist
- unmask when the background of a container is loaded.
- iPhone homescreen shortcut
- Duplicate entries get filtered
- new to sencha touch
- Ext.data.Store #setPage
- Set a variable in session
- How to create a production build with Sencha Cmd (app with sdk 2.0.1) ?
- SASS Mixins: Picker requires layout
- UX context with Date Picker
- [FIX] Sencha touch apps not working on Vodaphone Australia
- Layout card with several items
- Ext.Img not loading in device
- Sencha Touch 2.1.0 not loading on iOS?
- Scatter chart with 3rd dimension
- Add Button to Horizontally-Scrollable Tab Bar
- Log All Events to console
- List of changes from 2.0 to 2.1
- initialize function in the class system
- Advanced class system: how does Evented, eventedConfig, and fireAction work
- Difference between app.json and packager.json
- Store doesn't load
- Accumulate functionality in sencha touch chart
- treeStore with recursive data
- is there a way to query an object for its supported events?
- Debugging a Sencha Native App on Mac
- Sencha Touch 2.1.0 in native app format,how to show a confirm dialog before app exits
- Panel item hide/show
- strange behavior with Ext.Img inside a container
- hidden true not working
- Architect Sencha Command V3 Plugin option. How To?
- Ho to debug a native app on iphone?
- Native Camera : User has canceled operation
- Overlay near mouse click
- Installing Sencha SDK and Generating the GS App
- Sencha Touch 2 app with SAP Gateway
- Unexpected number error
- Event after all the elements of a list were rendered
- Load panel after Logout with new content in View isn't working
- Native Android App generated by touch consumes too many memory!
- sencha app build production
- Sencha Touch 2.0.1.1 Local Store not storing values
- Can not create app on Mac OSX 10.7.5
- Documentation holes for Touch creating 2.x custom components
- Is this possible with Sencha 2.1
- Disappearing items in Ext.form.Panel in Chrome web browser
- resize element in panel
- Carousel - Problems to add items dinamically into a existing carousel
- How to deal with stores in another namespace
- Sencha Touch Charts integration with TabPanel error
- Sencha Touch Picker Slot Text Value
- Ext.List populate? event
- How to show default image in the carousel while image sliding in the same carousel?
- Upgrade from Sencha Touch 2.0.3 to Sencha Touch 2.1.0
- bested layouts or explicit css positioning
- How to add Markers with buttons in Sencha Touch 2.0
- Integrate Payment Gateway in Sencha Touch 2
- .js .css file blocked
- Store with type memory not loading the data
- Why doesn't Sencha use grunt as build tool in sencha cmd?
- Best way of Grid with Infinite Scroll ?
- Prevent navigationView.pop() to destroy current view instance
- Sencha Touch performance
- Programmatically Fire Element Events
- multi items carousel?
- Store same Model via a different response format
- Async paints in 2.1
- Sencha touch 2 Custom Charts components
- Native android packaging
- inline list and sencha touch 2.1
- Sencha Touch 2 and OpenLayer
- iOS 6 issue with Google Maps
- Call launch function
- Sencha packaging to native android problem (javac.exe compiler)
- iphone simulator error
- Sencha Touch 2.1 Charts crash on Memory Leak on iPad and Android
- Drag and Drop in list component
- Native android app shows white screen
- Button event opening default internet browser wrongly.
- zoom in/out with crtl + scroll
- Install and setup in Windows 7 -- ???
- Sencha touch image gallery
- sencha touch generating app issue
- How does Sencha gets rendered in the screen ??? What happens behind the screen ??????
- how to provide epub file link
- Sencha Touch 2.1 infinite loading screen if UserAgent is set to iOS
- action sheet without mask?
- Navigation Title can not be repleaced by the new value
- is one animation more performant than another?
- components extend beyond the viewport
- Listen to the event fired by non component object (not Ext.Component)
- IOS 6 Web Inspector
- Component query on buttons works with title but not with id or action
- swipe event properties - explanation
- swipe up/down - need clarification
- Virtual Keyboard plugin
- How to add new data dynamically to a Ext.List
- POST with remoteProxy and FormPanel doesn't work
- Fastbook is so awesome
- Screen navigation via short cut tiles
- Showing class names in the debugger (rather than Ext.apply.create.Class)
- get reference to 'parent object' from within a mixin
- websockets and android native app
- Component based Dataview and wrapping of items
- Alarm Clock example
- Dynamic Form Fields
- Tabpanel and Navigation view events slow down app performance
- Trying to use a Ext.Dataview to diplay inline data records on a Ext.Panel...
- what tools I can use to create mobile web with sencha touch
- mixin life cycle - need more docs
- CORS referring domain when packaged natively
- Problem with local storage when using sync()
- Drupal 7 as backend
- App not working after Sencha Cmd Build
- GUIDE: Deploying Sencha Touch 2.1 as a Blackberry native apps (Webworks)
- Basic Authentication
- Need help to migrate code from Sencha touch 1 to Sencha touch 2
- touchmove event: how often is it triggered while moving?
- carousel - need to follow the movement of items in the carousel closley
- Sencha Touch within Architect Layout
- thanks for 2.2
- Need help in Lazy loading of List
- Ext.data.Field and convert
- Ext.Img layout confusion
- CSS3 Transitions and ie10
- Ext.fx.layout.Card ChromeMobile
- Sencha Touch html content pinch and zoom
- Loading Json in Store to display it on List Sencha Touch 2
- Scrolling to an item in the new List component
- Jsonp request is failing for some services
- List component not scrolling with VoiceOver read
- #appLoadingIndicator css performance
- Windows 8 tile app, help needed
- How to hidden pie charts's label?
- Sencha packaging generate error "document is not defined"
- Fonts Hazy on Chrome on Nexus 4 (Android 4.2.1)
- Dynamic Creation of View Works In Development, Not In Test
- What is the new release date for Sencha Touch in Action?
- Check if button exists -- Hide/Show /\\/ Remove/Add
- after build app for production load doesnt work fine
- CollapsibleList in Sencha 2.1
- Is Apple Provisioning necessary even for testing the app in my iphone device?
- Expand and Collapse not working in Sencha Touch 2.1
- PIcker selection functionality
- Jasmine testing, how to set it up correctly
- how to access nested list leaf directly
- Acordian layout data from json
- Dynamic height list in Sencha Touch 2.1
- Scrollable config property is not working on dataview.list and poor performance.
- animating multiple items at once?
- Ext.device.Geolocation
- Sencha Command woes...
- Reset style of toolbar
- Vimeo/Youtube scroll whole app in Phonegap + Sencha iframe
- Can this App be done in Sencha or do I need to make it native?
- Optional and mandatory config properties - TypeScript related,
- Store not loaded
- download link for .epub files
- Sencha Touch 2 provisioning on iPhone
- Export data from list into PDF file and Save in local in senchatouch 2.1
- How to use refs from a controller in Architect and Touch 2.1
- Does sencha touch support blackberry curve?
- ST 2.2alpha - Uploaded for playing with
- How to dynamically set the of itemTpl and store for a list?
- Sencha Touch Charts View Refresh(Axis)
- DataList rendering error: webworks & bb10
- New Sencha 2.2 : new features ?
- Nested slide navigation menu?
- Sencha Touch & ExtJS profiles for desktop and mobile in one web project/vhost?
- Load json without web server
- class loading suggestions written to console
- Could we do this with Ext.Carousel?
- Is PullRefresh plugin not really usable?
- CORS Request - HTTP OPTIONS Command fails in Chrome with 'Load cancelled' status
- PhantomJS selenium tests?
- New to Sencha Touch 2.1
- Question for change the Store params?
- List and dataview scrollig issues
- Video Auto play
- reversing the affect of a 'fade' animation
- Video media event handling
- Docs Preview shows Callback preview error
- How to set selectfield option dynamically from url
- String which has telugu characters are not getting displayed properly in sencha touch
- How to make use of black berry track pad for sencha touch application?
- Tap panel jump
- App don't start on low end phone (Android 2.3)
- ST for mobile-friendly website
- change ajax proxy url dynamically
- Generate application for android
- file config and android
- Sencha Cmd v3.0.0.250
- Building Windows 8 Native Applications with Sencha Touch 2.2
- Sencha Touch can run more faster in most Android devices
- Layout Accordion
- how to debug the aplication
- Infinite list or collection list and scrolling on both directions
- Is there anyway to make cross-site request using XML messages?
- What is current status of Sencha 2.1 for BB10 ?
- Sencha Touch 2.1.1? Where to download?
- List : Scrolling to Selected Item
- Please implement this.getApplication or Ext.getApplication on a global level
- Card Layout Navigation
- Google Maps Marker not Centered
- How to consume SOAP web service in Sencha Touch 2
- Encryption in Sencha Touch 2.1
- content disposition in sencha touch
- Workaround for picker selection bugs
- 2.2 alpha does not support inline-block list?
- Setup listeners for dynamically loaded form.
- NavigationView doesn't navigate to other view
- sencha touch 2.1; stuck on the loading screen in iOS
- no reset() method on form panels???? | https://www.sencha.com/forum/archive/index.php/f-91-p-8.html | CC-MAIN-2015-27 | refinedweb | 1,797 | 56.25 |
2018-03), Rick Waldron (RW)
10.ii.c Hashbang Grammar for Stage 2
(Bradley Farias)
BFS: Basically you can have the first line being single line comment, it's not available in function body. We could allow
eval but i'm not sure about that, I just wanted to bring it up because people could have opinions on that?
KG: If we went for it with the class field proposal as it currently stands, it wouldn't actually be a problem to have
#.a in
eval it might conflict but i'm not sure if this is an issue
BFS: Are we allowing private access state within
eval? We don't have a valid production plan within hash.
DE: In Joshua's smart pipeline proposal he uses hash as a placeholder. And place holder means partial application in this case. His proposal doesn't specifically permit this, but if we allow the placeholder to be used in the
eval. You could have
x |> eval("#!"), it would be a different interpretation of the same syntax. (Note: DE got this backwards during the meeting, #! vs !#)
BFS: to my knowledge all placeholder syntax should be static.
DE: we should treat the placeholder like yield.
DE: I'm not arguing that we should allow the placeholder in
eval. It doesn't inherit the same placeholder syntax, I also argued the same thing for private field's that they are not accessible from
eval. However, we ended up permitting private field access in
eval.
WH: There's enough noise (side conversations) that I can't hear you
DE: let me write the example on the board one moment
BFS: there's some concern with the other proposal we're going to talk about that. If we allow #! in
eval there might be some confusing with ???, or also I tried to prevent private state, also using the hash, from being used in
eval. because
eval is a string my arguement is that we can't use it there because we cannot statically determine if it exists.
DE: Even though this is my intuition also, we should consider the applications here, since it falls outside of the applications we've seen before.
MM: We have some parameterized productions in the grammer and then other things we get the context by saying that its allowed in the context .. ? does direct
eval (we are only talking about direct
eval here) does it ever inherit parser parameters from its context? Does direct
eval ever (permit?) a context with this grammar?
DE: There's just a lot of ways we can specify this kind of things
MM: I'm asking about clarifying the precedent
DE: The answer to that narrow question is no, not that I know of. The spec formalism is pretty flexible, but we can be pretty formal about what we want the language to be.
BT: I don't think we have a parser parameter that assigns context to
eval but (?)
MM: I knew about ???... As Dan says, we have flexibility about how we can specify it . but that flexibility should be coordinated with regard to which static context we want.
MM: I dont think that we should use the parsing parameter in direct
eval.
MM: It gives us guidance but it doesn't gives us a decision
YK: I might have missed this before I got here: Why do we need to support this in
eval? I understood the usecase for this as being that node supports it. since node supports it right now, so we wouldn't break it anyway. This would be a valid use case.
WH: That's a good reason
YK: What I was thinking is that since the only reason to do this is to communication with a shell, and since
eval is not used inside of a shell, so. the anwser I guess it you read the file and you
eval it..
MM: All of those evaluations are indirect However it would be extremely weird.. I think the hashbang, having that be part of script or not based on having parse ?? semantics would be good. Parse node(?) production.
YK: I guess it goes back to the first part that I said, the issue we possibly have is place-holder semantics. Given that this is already a defacto standard in node,
MM: After this discussion im strongly in favour of this proposal be phrased as script, direct-
eval, and indirect-
eval continue to have the same start production, and this start production would now accept an optional initial hashbang.
BFS: I'm asking for stage 2 on this, is there consensus for that?
WH:
#! would be a lexical token?
BFS: currently it is as a single token but we can specify it as two if you desire.
WH: No, I do not desire for it to be two tokens. It should be one token. It was unclear because the proposal doesn't properly distinguish between the lexical and syntactic grammars.
JG: You should ask for reviewers.
WH: (will do it)
BT: sure
Conclusion/Resolution
- Stage 2, WH & BT to review
10.ii.b Richer Keys for stage 1
(Bradley Farias)
BFS: I want to discuss a proposal pair to improve creating keys for maps and composite keys. I tried to point them under the term of richer keys. We have some problems that I encounter somewhat frequently with arbitrary nested maps. Multiple object that i'm using as a composer keys. We also don't have any way for nested maps when they use synthetic keys. You have to manually key if you extend map, and if you extend map you have to crawl-up the prototype chain and use the incorrect one either by accident or by design. First thing that I want to talk about is creating a composite key, not value types. you can think of this as a variant of symbol.for. If you pass it any sort of references, however many there are, it creates an identmpotent cache symbol, and it will return the same symbol every ttime, it will return the same symbol back every time. its something that can be passed back between realms. I do think that the order of arguments matters, basically allows us to have duplicate objects within our value list. This, importantly, should not be treated as a solution for value types. it does not have any way to get values back from your symbol. So once you get your symbol, you're essentially weakly holding on to those objects. It has no way to get your value back from the symbol. The global symbol map is weekly holding onto the symbol via this value change. So what does this look like? Given two object a and b, we can create a composite key. Called Symbol.compositeKey, so pretty simple. and then we can use it as a key on an object or pass it into a regular map (not a
WeakMap) . we can discuss if this should be a reference type or (..?). We can use it as a reference type or as a symbol, so that's fine. Whenever I go and I set O key, I create a new key with the same compositeKey of A and B. I have access to both their objects, but I can't reproduce the key, but we can still get access to this when using ??? and it can be GC'ed at that point.
AK: previously you said something about realm can you say what you mean?
BFS: I want to skip over that for now
MM: the symbol corresponds to the identity of a, so the symbol composite key of AB will be different from BA
BFS: Correct. The order is important and duplicates are allowed
??: The symbol corresponds to object-identity at that point.
BFS: Correct it's identity
YK: It's always an object ???
BFS: So we can't put a symbol in a
WeakMap as a key, so lets talk about Map?
YK: That was a clarifying question, so at this point I will wait and say. TL;DR I am unsure if this is the right mechanism, but I will wait. We can't use symbol.for because it does stringify it's arguments.
BFS: It's sane to me to have single argument string to be equal to Symbol.for. Once we talk about using primitives in this composite key we have to be careful about GC because we essentially are keeping an idempotent thing around. The global symbol table isn't specified
BFS: The companion proposal I am suggesting to this, "companion key" its a mapping function for when you are having a custom keyed collection. it returns any value and could be a reference to an object. We can bike-shed the names forever, which I don't want to doBFS: , Its a generic API, any sort of keyed collection sets I'm saying are keyed based on the identity. You can return any value, could be a reference to an object, I could have A map to a different object such as B
BFS: S: I could be argued that you want to have the original key passed to set for entry for keys, but I think that would just keep them alive when it's not needed. I can't think of any use cases where you need the original key (but I could imagine the opposite direction). If we return the original key, we can reuse the entry keys to reuse set and get. but if we return the synthetic keys that may not be true, so im not sure how to tackle that at this time, this is something for future investigation. So right now I don't think we need this on weak collections, in use cases we could probably figure it out. If it is a weak collection, we have to return something like adam said with a lifetime. doesn't have to inherit from the object protoype. ...? basically it is a mapping function, here we have a create user function. im using the email as a primary key.
YK: I understand your proposal, that allows symbol to be collected
BFS: Any sort of itempotency is going to prevent collection.
YK: I think that the use case that makes sense for me is to create e a weak key that can be used in a
WeakMap. so you have a composite key and you want to put the composite key in a
WeakMap allows the composite key to be collected. I think there was some aspect that I didn't fully understand about collecting semantics. can you explain that?
BFS: So to reiterate, the symbol.compositeKey idea does not allow the symbols to be collected, until A's lifetime (??) has been collected. Since generally you're going to be putting them in a map or something, so they'll be held and you'll manually have to dispose of it you'll have to manually dispose of it
YK: so you haeva symboe that contains several object. once those objects are collected there is no way to get that back. so the symbol cannot be collected.
BFS: Correct. It can be collected, in the reference implementation and the spec ??? are collected.
YK: but in this case that symbols is strongly held by some other map.
BFS: I'm not sure of that because you're storing it in a Map.
YK: but it sounds like the pattern that you have in mind is that you create a symbol in a map, and then someone else comes a long and makes that same symbol and uses it again. From my perspective that use case is just better described as a
WeakMap. The use case that make sense to me it to turn the ??? into a
WeakMap.
MM: It took me back to why symbols themselves can not be keys in
WeakMaps. I'd like to make a third point. Why symbols themselves can't be weak-map keys. I remembered symbol.for and its the same issue here. If the symbol has a unique identity, then when the symbol is lost, the association in the
WeakMap is lost. However, if the symbol can be recreated from data with
Symbol.for, then we the
WeakMap can never drop the association because it might still be looked up. Likewise, in compositeKey on an object identity, when the object identity is lost the composite is lost. The problem is that this proposal (for very good reason) also accepts values as arguments to compositeKeys, in which case the same compositeKey can be recreated.
YK: I think the composite keys have the same relationship as the original keys
MM: But, are you suggesting–I'll let Bradley speak to the details of the proposal. but My assumption is that the purpose of this proposal are strongly against having a big semantic difference between object and value arguments
YK: Right, that's not what i'm proposing.
MM: I think I can reduce your question down to the symbols themselves, we could have allowed symbols that are not created by
symbol.for
YK: I dont think symbols are a good mechanism for this.
DE: I wanted to thank you for iterating on this proposal based on the feedback. I really liked some of the recent changes like reKey used to be called hash, and that was really confusing. It seems like compositeKeys are a really core feature. I don't quite understand the memory leak issue, but it seems like something we should continue to look into. But to me the composite key seems to solve a use case that comes up in situations. To me the composite key seems to solve a pretty common use case.
BFS: I only have a couple more slides, I expect this to be some kind of composite key destructuring or ?? With this, we need to be careful a little bit, if we have all these things we've been talking about, GC and all that. We need to be sure that when people are doing these, that we can have good intuition of how things CAN be garbage collected. In particular, if a single life time object is collected. the entire key - the path to it, the composite key is destroyed. for example I have this example with x and y. I only need one of them collected to make the entire symbol to go away. Likewise if X is some disposable object ??. Once a single object is destroyed we can destroy the entire path to it. Since order is important, you really want the object references to be invalidated. That's actually a bit tricky to do. its not in my naive implementation but it is doable I think.
MM: What do you mean by destroyed in JavaScript?
BFS: It's allowed to be collected but its no longer idempotent you can't get access to it.
AK: Its unreachable?
MM: Ok, I am very confused
AK: I'm not sure this is important. I think it just falls out of how its specified. I think you are reiterating something that is logically implied by how it is specified.
BFS: Let me try to rephrase, so once a lifetime associate with this symbol we can remove this symbol from the global symbol table.
AK: oh I see
MM: only if no one is holding the symbol right?
BFS: no we can remove it eagerly
MM: I see but nothing can return to it. I got it
BFS: you can't reproduce the symbol
JRL: the rekey function is being called with the key of map.set by default is that correct?
BFS: It does nothing by default
JRL: How is the rekey called?
BFS: I haven't specified that. Calling
map.set(key) calls rekey with key to get the finally used key
JRL: the internally used key is externally visible through entries?
BFS: Yes
JRL: why isn't the code who calls map.set creating the composite key before calling set (eliminating rekey)?
BFS: Because I've tried to get people to use that multiple times, also even if you do something that is also a solution for extending map which has these problems. Lets say email map. then I have to reimplement all the functionnality of map and doing it manually to all these places. it seems a little grunt-worky.
JRL: My second point: are the keys only objects? I'm trying to do this in Babel, and my first thought is to implement this on
WeakMap (that points to another
WeakMap, that points to another
WeakMap...).
MM: so it is a "weak trie"?
BFS: It is not a "weak trie" because it has to have reference to value types
YK: ???
AK: I wanna go back to what you were saying about subclassing. it seems like subclassing seems like a .. approach instead of using rekey.
AK: That's wasn't it
AK: subclassing gives you the ability to do that
BFS: This is very tailored just for changing the keys though. If they want more behaviour they can do more.
AK: thats exactly my point. this is tailored to something very specific that can be done in user space. in comparison to composite key, its easy to make a composite key implementation that is leaky.
BFS: Mine is not (Laughs...).
AK: rekey seems more... specific and not generally applicable while this seems more significant. I would prefer not to have rekey.
BFS: can you say why you would prefer not have rekey.
AK: because it's narrow, it doesn't do all the things.
BFS: If there are more things that we wish to add, I'm perfectly fine with that.
MM: composite key has a compelling case for standardization, because its doing something thats very hard to do for yourself. reKey is modifying the built-in map and inserting an extension mechanism by extending the API when the built in map is already specified to be extensible by subclassing, now, I'm not a big fan of subclassing but I agree in this case that there is nothing hard about the user using the subclassing mechanism, to get the extension that expresses this feature as a subclass of map so there's no reason for reKey. There's no reason for rekey to complexify the Map class itself, rather than come in as part of a separately complex subclass. Leave the superclass simple.
JHD: The reason is, ill use an existing class, is it possible to ... positive and negative zero because map.prototype.set.call bypasses ??, it so in fact, it is impossible to create a subclass that can specify this behavior. I'm not specifically talking about reKey im saying in general subclassing Map and Set can avoid modifying the built-in...
AK: thats a missuse of my concept of subclass.
JHD: None the less its still something that's impossible for me to provide
AK: I can also hide the implementation detail and hide the map...
JHD: You can then wrap and you can provide it then it because very difficult to get right.
AK: Your point doesn't apply to rekey...
JHD: you can subclass it as a reason not to modify the built in. Then the subclass has to be adding capabilities, and when the built in does not provide that restriction between negative and positive—its not something you can do
BFS: as a counter point it could be seen as desirable to make subclasses as the standard way of making this available. sure if its rekeyable map and
WeakMap. I would still like to see this feature possible.
BFS: Asking for Stage 1, so if we need to add features or change it to a subclass that's fine
Conclusion/Resolution
- Stage 1 acceptance
10.iv.b JavaScript Classes 1.1
(Kevin Smith, Brendan Eich, Allen Wirfs-Brock)
RJE: A couple of things we want to discuss today. The classes 1.1. ?? The decorators to stage 3 ?? Rather than get into technical details about these, we might start off with a philosophical discussion. Given that there are multiple specifications, we should discuss whether one supersedes another, etc.??: Can you clarify what you mean
BT: Can you clarify what you mean by a philosophical discussion?
RJE: The idea that some of these things have been floating around for some time, so we should decide whether they can be reconciled...
AK: I would like to hear the presentation of the new thing
BT: we can discuss the actual proposal but there might be higher level things that we can discuss in advance to that. such as in what ways is the process broken in some ways?
AK: I worry that might take all our time
BT: but if we don't then we might discuss it interleved with the technical discussion
YK: I have a smaller version of Brian's question: what is the proposed resolution of the to this spec.
BE: The proposed resolutions address both the concrete and the abstract...
BE: Take this thought experiment; think about and reject it. That's fine, I think we should just present it and talk about it first. OK, I'm a late comer to this. I think Allen pinged me about this, I think others are aware of this. The history goes back very far, the first TC39 meeting ever in November 1996, Netscape took the lead, and we had a proposal at that time for classes. We had lots and lots of false starts. We had a problem with the baggage that classes have from related languages. Adding them with prototype inheritance is problematic. We almost dropped them from es6, that would have been a disaster—we did it by what's called maximally-minimal design. since then we have classes as a building block. I think what might be said is we have gone in an independent path lead by Babel and other transpilers. And maybe we should back up. I'm presenting this on behalf of Kevin and Allen. we have non modular properties, cross cutting concerns. its treacherously easy to lose some of them as you add to the language. Making sure as we go we keep track of these concerns. We want to minimize the kernel, but its all easy to say rather than to do. One of the suspicions I think some people have, is that we've lost that with classes. We probably need to try and minimize global complexity. We should probably need to take another trip around that. and take a look at what we should maybe look at what should be taken out. We have a habit in the committee, which is quite large now, that makes it hard to make more minimal proposals. I don't have a strong position on how this proposal relates to minimalism. If we have any problem in getting down to any dependent path that doesn't optimize for complexity. We should look at alternative proposals even if they're too modest, too small, too simple. It can be a useful thought experiment. So I hope this framing helps—we should be conscious of what is going on. Reluctance to break consensus is a great thing. This is going to take us back to some goals and some anti goals. This is word salad—I'm not going to read it out. Allen and Kevin, in particular, want to find a new way forward for, the essential parts of the class which are missing now, so lets get into it. This is a little bit preliminary. while proposals that are early implemented for getting feedback, ultimately the committee is also going to dispose of things in a way that the implementations have looked into and not just going to move a proposal that is popular only for that reason. There's been controversy on twitter about this, like "how can you pull back on a stage 3 proposal?" its happened before. We can always pull back from a stage-3 proposal, so I want to make sure that is not controversial. This proposal form k and a has hidden names, per instance encapsulated state .... (see slides)
(presenting slide showing
class Point implementation example)
BE: This example is helpful to show pretty much everything that is in this proposal. it also shows the integrity of this design—encapsulation, for example.
WH: (Asking about Brendan's slide) Did you mean to say
another.brandCheck() instead of
another->brandCheck()?
BE: Yes, that's a bug in the slide. If you use dot then you will get a reference error or call the wrong brandCheck function.
WH: This is a very telling bug.
(discussion about contents of slide, correction noticed)
BE: There's similar issue with hash names.
BE: this looks a lot safer than the hash. if you leave out the hash you have problem. Waldemar what do you think?
WH: I have lots to say about this but I am waiting for you to finish the presentation first.
(presenting slide "a simple example: add a hidden method")
BE: This is just showing a minimal case of an instance variable. You go along showing a hidden method, and a static initializer. I suppose the idea is the outer frame bound. I think one of the most appealing parts of this is not the syntax, but the lexical bodies being hidden. you could have a computed property name, ie
["foo"] and have
this.foo The meaning of this is radically different. I was saying that there is a missing illustration of how computed properties are ... ? Polyfilling allows this.foo as a static initializer, seems like a problem. What this proposal tries to do is get away from this. and use lexical names where possible. When referring hidden names is where you have to make a choice—could be .# could be -> .
BE: we realized any kind of shorthanding doesn't work because of what people call the "ASI" problem. you would need a semicolon before a line that starts with an instance variable access.
BE:
# has the advantage of using it without a prefix. The disadvantage is aesthetics .
BE: There are a lot of issues on the github issue like "why var", though the choice of token is not really core to this proposal; we could use another name.
BE: public fields are left out of the proposal.
YK: clarifying question. I read their documents carefully and my instinct is that they have to reject public fields.
BE: I think Kevin or Allen might want to reject them, but I don't want to make that case here. We don't need to get into that. You can get around that the long way. We are not adding new kernel semantics. There's nothing novel about the syntax proposal. I think this is a fruitless debate. I'm going to advocate against all appeals here. Public fields desugar. This is probably not a surprise given Allen's small talk background. Why use var? Short, love it or hate it, its already there.
BE: I think hidden is too long but I think it works as an intentional keyword.
(more examples)
BE: This proposal does want instance variable to be imperative, not declarative. This proposal does want its variable to be per class, not inherited. This is something I think implementors have asked for.
BE: Again, we already have properties, I think the question for the committee is do we have private variables with the hash syntax or do we do something different. Is there anything else here I want to emphasize? Apologies for the word salad again. The earlier example had a branch x, and the idea is that engines can hoist these to optimize. this is up to the implementers.
MM: With regard to what you're saying now, is there any difference between this proposal and the private state proposal?
BE: Dan you should help me out here...
AK: The answer is "no", there's a slight difference but the methods are the same.
BE: Hidden methods are statically determinable.
AK: the only concrete difference here is the absence of brand check for private/hidden methods.
BE: I'll work in this proposal
DE: Allan initially proposed this semantics, and this is what I started out on the private methods proposal with. However, there was a lot of convincing committee feedback (e.g., from MM, KG) that we should treat them as non-writable own private fields, so I changed the proposal to that.
YK: Are the methods different in any way other than the brand-checking?
DE: If we go with this strategy, this method static branches do not encounter the issue that we saw with static private methods. but we had the objection that this would not be "methody" enough if they were missing a brand check.
BE: this proposal does not have a branch check.
DE: Methods have a brand check in the current private methods proposal, on the other hand
WH: In this proposal calling a method ignores the left side of the
->, so that you can call an instance method on anything and still get your enclosing class's definition of the method rather than the instance's definition of the method.
MM: It doesn't ignore it, it passes the this, and other than that it ignores it.
WH: That's very strange.
BE: Dan, you defended that position.
DE: Which position?
BE: This proposal, these semantics.
DE: Personally im not sure that the brand checking is that important but it also seems more ok. If this is more intuitive for a chunk of the committee of private instance fields that does the checking I'm ok with going that way. From an implementation standpoint, (from what I could tell from talking to implementers) either one would be ok. The brand-checking proposal will be pretty analogous for what you gave to do with optimizing for inlining methods.
BE: Runtime optomization burden is the same.
YK: It fits as the standard static private property. a problem that we have been dealing with lately is that static private methods do not work well with subclassing.
DE: Because the subclass doesn't actually have the static/private method
BE: I think that's also a motivation
AK: I doesn't help that much because it doesn't access static private state.
DE: There's been a variety of intuitions about what hazards are hazardous. Some people view static private methods not working with subclass receivers is worse than ... ? So for that intuition switching to these (no-brandcheck) semantics would address the concern. Other people view them as both being significant.
BE: This is a different design choose, thanks for bribing up. Class initialization blocks: I think this is uncontroversial.
DE: We will discuss it later, in the static public fields presentation.
BE: This slide is worth calling attention to the unbulleted items at the bottom (behinds the scenes slide) -- you can resolve hidden method references statically.
DE: So to answer this question, Without decorators, the methods are already statically resolvable.
BE: You need to partially evaluate a brand check or ..?
DE: What I mean is, without decorators, when you call a private method it's already lexically resolvable; the method is a non-writable part of the field. but you still have to check when reading the method of the object that you are reading the right brand.
BE: Interesting...
DE: Without decorators, it's statically determinable, though we lose that property for private methods in decorated classes.
BE: We should talk about decorators because this is a composition issue that I thought would come up. we should discuss that. ok. in spite of the backwards presentation. but I think people get the idea. this isn't as far a long as other proposals but I think its interesting as a maximally minimal solution. This is not a mature proposal to the degree for the stage 2 or 3. I think if we keep blazing this path of taking (talking?) compiler implementations as the way forward it is more likely that we will make a mistake.
WH: What happens when you nest a class within a class. would you see the outer ones as well as the inner ones?
BE: My understanding is that you can see all the outer ones.
WH: There is no way to get the shadowed ones? ok
DT: In your example, where you had a dot vs arrow confusion. That seems like a pretty plausible bug to show up pretty often. I'm wondering whether you can't really tell if its a bug or if its what the programmer detected. Do you have any thoughts on preventing this class of bugs?
BE: There is a benefit, if you ever use the arrow to get a hidden name that doesn't exist, you get a static error. In this case you'll get a runtime error or something similar. I think this is true of hash as well the latter error of using dot without the hash. And that is analogous
AK: There's the debate whether it's significant
BE: I have an intuition that if you use arrow for hidden names and instance variables. especially people who have a background in hidden and .. ??
AK: you talked about the this.foo and the this.['foo'] initializer. Another hazard, that particular hazard has been discussed by the committee many times. The combination of those discussions and community feedback and use of that feedback in the wild, got the committee to consensus that it was OK. I just wanted to say it's not a new example.
BE: Agreed.
YK: I read the proposal in great detail, and I wrote a summary. one thing that I did was make a table that showed the syntactic difference. It's now clear to me I really like the fact that in the current proposal definitions and uses are analogous. (and I agree that we can change any of these names) but
-> is how we declare them here, previously,
# is how we declare them and
# is how we get them. here in this proposal we have arrow for read and write and invoking, and var or hidden is for declaring. and in the hash proposal it uses a single syntax for all of these.
MM: The same syntax is used at the declaration side and the use side
YK: and at the invocation site.
Yes thats a difference for sure, this claims to be more orthogonal in concepts.
BFS: One thing that strikes me about this proposal is thatI have trouble with the mental model consistency when comparing it to both public and private fields in this proposal. You said these are instance variables basically, its a difference design but I think one of the advantage of the current field proposal, they have no new kernel behavior, they do use define instead of assign and you need to be a bit careful about this. The current staged proposal is more like a minimal proposal, that also draws an analogy between public and private.
BFS: I have some concerns about scoping when you nest things and In particular, you delegate up to whatever is not shadowed, I think this essentially may solve some use cases, but it isn't addressing all use cases. And the semantics are different enough that I don't really like comparing the proposals I would prefer to treat them as different paths we can take but have 1 supercede the other
BE: Can we try to break that down? I agree with you, but having some concerns is not actionable. I agree each scope can be nested? You're shadowing that's your problem. None of these proposals claims to solve all problems. I agree.
MM: I want to just clarify a pedantic point that because people keep using
WeakMaps. Its a
WeakMap-like collection.
MM: It has to differentiate assignment from invocation, otherwise you get a confused delegate problem
BE: that was BFS last point, MM is pointing out that what we have in the hash private field is that its an extension of kernel semantics. we need to be crystal clear about that. Neither one is preserving ES6 level of semantics as such.
MM: You could create a
WeakMap-like collection in the user space. I would still say that the right characterisation of is a desugaring to that feature of ES6.
BFS: the point about define vs assign though is critical.
BE: One of the lesser arguments here is consistency with the use of equal, assignment uses equal as well... This proposal uses a separate selector, and avoids the define vs assign by construction,
MM: the only difference that I see is the lack of initializer. and then syntactically there are of course a number of difference. The fact that there is a leading keyword makes the equals less confusing. Declaring keyword variable name equals expression is never thought to have assignment.
(discussion of the leading keyword)
BE: The lack of equal there kinda takes away ...
MM: I actually like the lack of equal
WH: We're digressing from Bradley's point.
DE: maybe you could put yourself on the queue
BE: I wanted to pick up on that because theres a syntactic controversy on using equals.
YK: the little aside that we had is why we should reject certain aspects of features. .
BE: Public fields?
YK yes, you don't have to reject them, but if you do add them in the future what makes this nice would be less nice
BE: Allen or Kevin does argue that it's better to put the initializer in the constructor body.
YK: I think that whole perspective hangs together and we should consider it as a cohesive thing.
BE: The example that came out in one of the issues I mentioned.
BE: the example that I'm thinking of:
class { [this.foo]() {...} x = this.foo; }
BE: Semicolon! (Laughs). Those two
this are not the same. but there are some particulars. there is no curly body, and just the square brackets. and being on the right in the public field. and this is a smell. Seems like mistakes were made, maybe its acceptable. I know JS has mistakes from long ago that we can't go back. Maybe this is something we should talk about today. In the GitHub issue people bring up with we could use this.
??: since you mentioned my name, we can make changes like that without worrying about breaking --
YK: I think we shouldn't worry too much about... well, I'm not worried about breaking users? more that we should not ignore this issue
i think that we should not ignore this issue.
BFS: Why can't we use curly braces in the initializer to symbolize that the
this binding is changing?
BE: Nowhere else do curly braces change the meaning of this. If we adopt computed property names and public initializer and we will have to make a very convincing story.
(more back and forth)
BE: No where else in the language does square brackets change the meaning of "this" (refers to example above)
BE: Only this juxtaposition arrises in public fields and not in computed properties. no where do square brackets change the meaning of this
BE: Do square brackets change the meaning of this? We should debate that do square brackets change the meaning of this?
BFS: thats not exactly what im trying to state. im asking why a functional form ? I'm asking why do you think a functional form would you be any different. This seems very strange to me as an argument, and I haven't been able to articulate it because ive only seen this proposal for a very short amount of time.
BE: Forget this proposal, YK already tried to explain that
DH: I think the claim is that this proposal that brendan is presenting doesn't strongly claim but it sort of weakly claims that, it has the intention of perhaps never having public field syntax at all. What Brendan is saying, the reason is that's different from the proposal you're proposing today is that public field syntax is gone.
BE: It sort of gestures in the direction that we shouldn't have it
BE: wanted to avoid that controversy. I think the softest form of this is that in this case you won't need public fields.
BE: I'm not here to condem public fields based on this proposal but this example came out and it has nothing to do with ??? proposal
DE: I wanted to give a little more historical context to what we're discussing. As Adam mentioned, both the syntax and the equal sign. People were suggesting maybe we should put in some curly braces, we ended up coming to some consensus on this that = should be explicitly OK as a syntax. Its a cost benefit thing. There are other options than not having public fields. For example not having computed public fields. The other thing was, the use of equals and the keyword at the start of it. We were considering whether a keyword before the field declaration would clarify "define" rather than "set". An educator in the room. Ashley Williams, gave an interesting perspective that a keyword doesn't add that much. Rather people will just have to learn this regardless. The keyword just doesn't give explanatory power, for the issue that you raised about when things are evaluated, we discussed in the Munich meeting that this integrated idea for when things are evaluated, for example you might expect that static public fields would be evaluated in a strictly top-down, left-to-right way way. This really doesn't work for a bunch a reasons: first, the static public field maybe decorated and you need to coalesce getters/setters. The other issue was the static public fields had to have the class no longer in TDZ. So we really had to do all the other things to build a class. If we want to put expressions in these places, that's what we're buying into.
BE: If we have this reordering and staging, and decorators actually up the ante. We should look at the possible misorder,
BE: I dont think computed property names are something we can pull back, thats already in.
YK: clarification -- when we did the Munich ordering proposal We observed that there was only 1 place where there was any ordering at all. As a result there's no compatibility issue right? Today they run top down, so there is no compatibility problem.
BE: I don't want to get into public fields too much because its not my bag.
YK: We should really figure out the ordering before we start bringing up any other proposals
BE: Does anyone here have more questions? I guess we should get to this later...
(refers to the class example above)
KG: It seems plausible to me that we can make that first line an error. That this is the outer this and has nothing to do with the class. This is super confusing and does not seem like something that would be useful. (
[this.foo] as a public method name)
BE: You want to have some ad-hoc incompatible changes. I think that's hard to do because you could also have something like this (refers to example)
YK: I think the current implementation, which in strict mode in modules is undefined.
JRL: It's not always undefined. It's the outer this, which can be defined by
.calling an enclosing function
MM:
a reasonable semantic change would be to ..?
KG: Which seems confusing, that first thing doesn't seems problematic, the only thing which seems bad here, is this having a different meaning depending on whether this is on the left or right of the equals sign. We can just ban it on the left and have no problem.
BE: Its a breaking change and its an irregularity.
KG: My claim is that the current behaviour is confusing. In the class body you're in the class, you enter strict mode.
BE: We have something that seems like it was regularly composed, but then there's this incongruity with scopes
DH: We're in the space of talking about a massive incompatible change
BE: What's the incompatible change?
DH: The whole proposal it's not actually obvious to me if a change to the scopes of computed property names is doable.
DH: We are discussing again topics that have come up multiple times. I just want to reiterate that there is some discomfort. The public fields are you know we have different context that have not the same as computed property names. We have sublty different contexts that do not have visual nesting. I've argued it before and I will argue it again, this is the nature of class syntax. Class syntax is describing a pretty compound construct with a pretty flat context structure.
DH: That's just sort of the nature of a syntactically more sparse syntax for describing a compound protocol. To begin with classes are already combing the se layers of static properteies and ... (properties). So you have to learn the different things that shift your context. Based on some of those signifiers based on prefix and that I think there is some budget in classes where you have to learn this.
YK: Its really a different point that I would like to make
BE: You can teach all this. Classes are like recipes with module functions, and there's a complexity there.
BT: I suspect that it will be hard to not wanting to reply if they're not in the queue. My concern is that the queue accounts for more than 15 minutes of discussion.
(discussion about what to do right now, 5 minutes before lunch)
YK: We've been discussing that we can make this more of an error. The natural form is
(demonstrating the following class on the whiteboard)
class { [this.x]() {} static x = this.x }
YK:
this in this (:drums:) example is the same value. I think this is a justification for making this change...
BE: we had an example on the GitHub issue?
BE: If I understand right, with (?) property names in (?)
YK: this is the outer ??? It's small breaking change
oh I see you are talking about another breaking change other than the one we were discussing before
WH: I agree with YK's point but I think we're ratholing on square brackets rather than going into the main issues here.
JRL: IRC we trying to figuoure out the brand checking, you can literally call the function arrow branch-check, the arrow function doesn't automatically use the branch-check. I just wanted to point it out because it's on IRC
"hello"->brandCheck()
JRL: This calls brandCheck with "hello" as the
this. Everything "works", but it's weird.
BE: The arrow does not do a branch check. Basically you can do a string "hello" and the arrow will follow a brand check with ->"hello"
BE: It's unwrapped because of strict mode
KG: It just seems weird to me that you can lookup that method on random things
WH: My main concern here is that this is too complex.
BE: (joking) which this?
WH: Is this proposal meant instead of the decorators proposal or in addition to the decorators proposal?
??: will this be combined with the decorators proposal?
BE: I actually don't know
YK: The only thing I can say is that the authors of this proposal strongly disagree with decorators?
BE: There's a risk that this proposal is taken as such a clean slate that this blows up decorators as well as public field.
BT Its probably true
WH: We've had maximally minimal classes before. They were insufficient, so this is another attempt at that. However it's clear to me that this does not cover many of the use cases, so, extrapolating a little bit, I don't think this will prevent people from proposing more class extensions later on. Maybe someone is idealist enough to think that this will end future proposals for classes, but I don't believe it. What concerns me is what position we'll be in if we adopted this and then want to address the other use cases. Unfortunately it looks like we will back ourselves into a corner. One way I look at that is refactorings—the kinds of code transformations that people are likely want to do. There is a desire to use public fields, not just private ones. Dynamically creating public fields doesn't work if you want to attach decorators to them. The obvious syntax for public properties is taken and they won't be able to be made consistent with the private ones.
BE: Because
-> vs.
.?
WH: Yes. And also because the syntax for private fields doesn't visibly mark them as private.
??: I dont like arrow because its buggy.
BE: I was asking why you think this proposal, other than the attitude of its creators precludes that...
WH: Public members would have radically different behavior from private ones. One kind of fields you must do declaratively, the other kind of fields you must not do declaratively. One kind of methods always do dispatch, the other kind of methods never do dispatch.
BE: yes
WH: It rubs me the wrong way.
BE: OK, let's put that after because I still don't understand the distinction you're making about public vs. private methods
BT: They can, certainly I think they will undermine what this proposal is trying to do.
WH: You are declaring private fields with var. so now it's--
BE: (continuing WH's sentence) ... "star wars". It feels like a mismatch. I agree with that sentiment.
WH: Looking at the refactoring cost of changing a method between public/private -- there's just so many pitfalls. You have to change all the dots to arrows. One you have to do declaratively, the other you cannot do declaratively.
BE: ???
BE: You still have to do the work for it(?). This is controversial and it came up in the GitHub issues too. Using the hash there is still a hazard, if you leave it out then it is a .name.
WH: I think of the
# as part of the name of a private member.
-> doesn't have the same connotation and leads to more confusion.
DH: I like Waldemar's point, especially regarding backing ourselves into a corner. The proposal is that the slides said at the beginning and Allen had said on Twitter that we need to make sure we're considering cross-cutting concerns and looking holistically.
DH: So we've done all this work to consider cross-cutting concerns, "lets remove those from the discussion". That's the opposite of cost-cutting concerns, that's not even thinking about them!
DH: If we only do a design that refuses to engage with these things than we can back ourselves into a corner (paraphrasing, possibly).
DH: It's still removing them from the discussion, and the whole claim is to consider them in our design
DH: You building a broader and broader understanding.
BE: Then you just get max.
DH: I'm not claiming that there's any process that gets you perfect. I am claiming incremental work can get you further along. I don't think this process has been a greater upper bound process.
BE: Yeah, This isn't like a kitchen sink process, I agree with that.
DH; I really like the way that you frame this as a thought exercise, while i'm not comfortable with the proposal ??
BE: Considering this under a Smalltalk halo, I think it's interesting to discuss, the arrow and the static resolution is interesting. I agree with brad
DE: So I really like this phrasing of it, I wanted to go back to what we were talking about before with the intuition of arrow vs dot hash. The way I've been explaining this proposal is that "hash is the new underscore. Basically, like
basically its like a
WeakMap". Private fields can be thought of as internal slots or as properties of a
WeakMap, and the semantics in the private field case is a subset of the public field case (plus more exceptions). We talk about public and private "fields" because they are in correspondence with each other. We should make it clear to programmers that private is different from public so people don't get confused. I want to make the opposite point. So that developers can migrate between these things, they should be parallel. Developers who are currently using public can easily migrate their code to private. JavaScript programmers understand these patterns of leading things with underscores with intent to not make things visible outside of the class. This sort of intent in creating this analogy lead in a way that it took .... ? and copied it over and I agree with what ??? was saying that hash should be part of the name. is really sort of core to our model that we hope programmers won't mix it up.
??: If we only we could use underscore
DE: Thats a frequently asked question that I get about this proposal. It's clear that there's community discomfort with the hash. Multiple uses of the dot--one of them referring to public and one of them referring to private. This is why we need ?? site to make this less ambiguous. Usually the question after that is well why not underscore and the answer is that there is a compatibility issue. To the extent that this proposal is trying to address community feedback and intuition about what the interface should be, I feel like the arrow misses the mark. I think the intention from a lot of programmer is to use dot. Once you realize that the dot doesn't work, arrow seems like a good idea superficially but it doesn't do well with refactoring and clarifying this correspondence
BFS: He brings up the ? concern, when you refactor from a . to an arrow, a
. is a reference lookup vs. an
-> which is a lexical lookup. We are not just changing how things are looked up. we are also changing what I fundamentally understand intuitively about dispatch. that brings up a new concern which I had a horrible time describing earlier. It make it more complex for me to explain. This makes it much harder to understand, it may be like this in other languages, but it seems much harder to understand this way.
TST: I would like to say something about how the proposal ??? One of my main concerns that I would like to make explicit is, to me but at the ?? box? I'm sympathetic to some of the changes proposed, and we arrivesd at the proposal as it is right now as we were influenced by many factors. A lot of it has been discussed earlier. It might be worth revisiting them, its also presented as a counter to some proposals that I feel are completely orthogonal to this one. and that feels like a slight of hand. I feel like the proposal should in this case go and argue against the specific proposal, rather than being one about the private state. and you mentioned this that this doesnt preclude these other proposals. you said that this should also be a reset and I dont agree with that.
BE: I didn't do that. I said it was a thought experiment. I'm not Allen or Kevin. Smalltalk design point ... I think they have sympathy for decorators, and I think that bringing up cases where we have allowed this mean things that it shouldnt might be worth the time that we spent this morning. I hope it was worth considering alternative design. I don't want to drag this out too long...
YK: I agree with what brendand just said which is there are somethings. I agree with there are are things worth considering: first we consider whether we want rprivate methods to have the branch-check. we consider arrow syntax. Private methods to have the branch check, we could consider arrow syntax but I think that is pretty unlikely, given dan's very well private syntax faq. (NOTE FROM DAN: This FAQ was written by Kevin Gibbons. Great job, Kevin! )We could consider making the
.# a single token, not sure why this was so important to Alan. He did repeat it a lot in the issue thread. Finally we could consider ??? I think with the exception of the arrow syntax, all those other things I don't object to. From that perspective i'm happy that this proposal was presented
AK: I will spend 30 se. In this proposal from Allen and Kevin,
this changes meaning in these places (points to whiteboard and explains the different lexical scoping of various
thises)
BE: We've already got "this" meaning different things
AK: but it adds another one
BE: which we might want, I believe static locks might be desired? The static initliazer we could talk about it separately. We have problems where this changes meaning, it could be good if it was as small as possible
MM: I want to discuss an answer to Brian with regard to Process. One of the dynamics that has repeatedly come up, the suggestion here is calling into question decision X and has decision X already some into consensus. and that happening again for decision y and decision z... Our process is not broken as long as we don't take our process too seriously. There are reasons why sticking with the process literally would be broken, but the right fix to that is not to try to design and write down an amended process. the right way to deal with that is to deal with it when it comes up. The particular thing that's triggering this for me, is that (and the reason I pulled my item from the queue) is that the consensus of decisions is perfectly sensible, for Y by itself and Z by itself. Is that the reason is that I called my item in the queue " the atom of consensus" But as stewards as the language as a whole, we are to put all of those things together, that have consensus back on the table to see if they still have concesnsus when they are combined together.
??: people said repeatedly in the repo that it was our process that lead us to that issue.
MM: I think that there is a bug in the process, but I don't think we should fix it by writing more process documents. The bug in the process is that it is all forward oriented. where as the visible pogress is adding stuff. The focus of the process are the individual proposals considered individually
MM: Doesn't leave us discussion time for the overall complexity of the language. It doesnt allow us to consider cross cutting issues between proposals and makes us focus on proposals individually.
BE: There is definitely some fear that the committee will add too much. Never say never use namespaces—except for regards to Common Lisp there's name packages
MM: There is a very particular kind of "never". There are safety properties that people come to depend on, and adding features can break existing safety properties. I really feel like it's a distraction from the classes that we're discussing. There's concern with focusing too much on safety features that it ends up as destroying an added feature.
the way we talk about safety properties is really important. I would love to formalize it so that we do not lose it over time.
BE: I don't think anybody should fear that we'll break your code, stage 3 doesn't mean stage 4.
DT: This is a follow up on the process item. A lot of current proposals are small incremental features and proposals, then we can see the need for something more general. Dean
BT: Is this a clarifying question?
DT: A lot of the current proposals are small incremental feature improvements and when we have enough of those we can see a need for something more general, so we can see that the process is actually right, and we can back out of those smaller things to do a more general change.
YK: I do want to say something about the process I think its important to think that the stages in the process represents the work that people are doing. So if by the time something gets to stage 3 there's been a lot of work not just by those in the committee, but by everyone in the community, just last month we had four different hour long calls with people in the community. I think its also important to remember that by the time something gets to stage 3, a lot of feedback has been gathered and we should remember that if we reboot at that state we have to repeat the process of getting all that feedback again. and then we might be exactly where we were to begin with. We need to be able to say "the process is on the rails". We have to repeat the for example at the very beginning of the process we did the munich ??? and a lot of the discussion since then has been having the champions work together on cross cutting proposals. In fact, I guess I would say that a complexity really comes from how much we have explored the considerations. A lot of what seems complex about fields comes form looking forward to decorators and vice versa. I don't think it's either a problem ??? to reject them ???
I think its just a consequence, the opposite of those things not what everyone wants.
DT: you already addressed one of my comments. Allan bringing in Smalltalk things, so this thing is kind of important, but This isn't really smalltalky at all, this is a lot of new stuff that seems very thoughtful about JavaScript.
BE: I didn't say grinding the axe... what I find small talky here is not the variables, but the instance varibales being private in a lexical way. Not the details but the instance variable being private and not in a flexible way. That also feel smalltakly. Its more about philosophy of doing things in an integrety way and doing things through methods. I think its fair to say that thats smalltalky, I think its a useful point of view
Conclusion/Resolution
- Good to have these cross-cutting concerns thought processes; let's keep thinking things through carefully
- The committee was critical of several aspects of the JS classes 1.1 proposal, but there was some support for a couple aspects
- No known champion to follow up, so this proposal is not at a stage
- Public and private instance fields and private methods remain at Stage 3
- Will follow up on this later in the meeting.
10.iv.c Static public fields for Stage 3
(Daniel Ehrenberg)
DE: I wanted to propose the public static field to stage 3. It is syntactic sugar to make data properties on the constructor. In this example it's sort of equivalent.
class MyClass { static myStaticProp = 42; constructor() { console.log(MyClass.myStaticProp); // prints 42 } }
DE: Beginng a class body allows visually grouping, this static public field with other elements. Nailing down common semantics follow the pattern that we have followed so far. A big part of the reason why public static field are justified, they are heavily used in big frameworks like React. They expected properties to be set like PropTypes. Its not just react, its many many things in the ecosystem using babel or typescript to cross compile or transpile down to es5. The syntax here in this proposal almost entirely matches what's deployed and used of this ecosystem of transpiled JavaScript. Here on the right, I have this tweet by Kent C. Dodds: (see slide) For background kent was a TC39 delegate for a while. He was in the committee when we was discussion this proposal. That shows how deeply in the community this proposal is.
(showing slides regarding implementation "Semantic details proposed")
DE: In the top left you can see that the initializers are run one by one, interspersed with adding the field to the class. In the bottom left, you can see that the class is actually in-scope in the initializer expression, which means that you can instantiate the class from static public fields. this is something we agreed on tin the Munich meeting. In the top right you see early syntax error. In general the scope of the thing on the right side is just like if you had a public static method, except that the arguments cause a syntax error. We create a syntax error if you access
arguments. but aside from this it lines up with methods.
DE: In the bottom right you see what we were talking at the committee regarding set vs define semantics, instead of being something like static y = hello. You can see in this example, when we have a super class that has a static setter with the same name as the public static field in the subclass, the setter is not trigger, and this like all the other semantic details is just a logical consiquence of matching the pattern that we established in the stage 3 public instances proposal. On consequence of the fact that we're creating own fields, basically, if you have a class that extends another class it inherits as a result this data property. You can observe some things about some JavaScript prototype inheritance model.
The subclass can read the super class' property, but if you write to it later it creates its own data property.
As an example here we have class D extends C,
class C { count = 0 inc() { return this.count++ } } D extends C { } C.inc() D.inc() alert(C.count) // 1 alert(D.count) // 2
D prototype's is C. The count of C becomes 1 and D doesn't have a count property, because there's not reason it has a this property because it's on C. And this is exactly how it works with objects. if you use object literals and object.create which are all idoams that are well known from JavaScript programmers.
KG: this isn't the surprising bit.
DE: This showing is the surprising bit
WH: You are giving a rationale for why you proposed it this way, but I don't think the rationale makes it "acceptable".
DE: Maybe would it be fine to discuss questions and objections at the end?
DE: I'm making sure to say to the committee why I think this is acceptable. We examined several other possible semantics which you can find in the static classes proposal repository. and none of them really seem feasible for how this can works, and none of them seems really feasible. At the same time this is in the context of a proposal that is very well motivated. Again this is more things that people need to learn. In this case I think it's justified, none only because it's following ??? semantics, and not because of the cost of learning it. We previously discussed this behavior in the community, and we decided to advance the proposal based on this discussion. Right now the status of the proposal is that it is at stage 2 and it was retracted for reason that I'll discuss later. One possible mitigation is to make a possible syntactic
class.x meta property which is more ergonomic for accessing properties of the class.So one reason why someone might want to write code such as this, is to have a more terse way to refer to the class that they are inside of so when you are within a class:
class C { static count = 0; static inc() { return this.count++; } } class D extends C { } C.inc(); D.inc(); alert(C.count); // 1 alert(D.count); // 2
DE: If you call
C.inc() you can cont of the fact that's the class C, you don't even need to think about what's inside of the subclass, personally I would just refer to C if thats what I want to do, but maybe if you have a very long class them, then you'll want to do something different. To mitigate for the duplication for a very long class name, if you add some syntactic sugar which ???
class.count is always refering to c.count, this sort of hazard case would not occurs, the semantics that I would suggest for this is to always refer to the inner most class. there is som concern about how this would behave with static private. This proposal is not proposing static private, but if we were, we would suggest it refers to the class of which it's private name was. The case where this comes up is where you have nested classes and there may be multiple different classes you may be thinking about with this proposal you could get these Java-like static semantics.
class C { static count = 0; static inc() { return class.count++; } } class D extends C { } C.inc(); D.inc(); alert(C.count); // 2 alert(D.count); // 2
DE: where when accessing from this method inc() with the receiver being D, you're still ?? If the communitee is intersted, this would be a really short proposal, not really bigger than Bradley's proposal and I would be happy to pair spectext and explainer for this. As part of the investigation into this feature a number of delegates and I looked into several alternative proposals. So for history, static public fields were initially part of this bigger package that included static private fields and static private methods which were all part of TC39? There were some issues discovered—or more broadly commnunicated to be a specific problem with static private field and static public field, so in an attempt to find a unified solution to .the problem the idea was to retract the idea of static private fields and static public fields to stage 2 In this proposal i'm just proposing that static public field should go back to stage 2 we want to think things through and identify road blocks so this is not just about moving this proposal forward. So one of these is static blocks. Brendaan talks about that in his proposal (calss 1.1), It seems useful, what static blocks give you is a way to execute some code in a scope which contains access to private fields and methods. One thing that static private fields would give you is the ability to execute things in such a scope.
let get; export class A { #x; static { get = a => a.#x; } } export class B { constructor(a) { const x = get(a); // ... } }
DE: Ron has a proposal that I've linked to in the slides (tc39/proposal-static-class-features#23 ). I like that proposal, you know there's a lot of different alternatives, Kevin and Allen proposed a different alternative for static blocks as part of JS classes 1.1. There is an explainer that I put up a while ago and that's another alternative. The only few things to decide on, if we want to go this way we can work through them.
DE: Static private fields and/or methods, we discussed it previously and we retracted because of the subclassing hazard. We've considered a few other options how to make this work. One option proposed by KG was to re-initialise private and public static fields on subclasses. Basically when you declare a static field, and you subclass, you'll reinitialize the field it on the subclass. There were some downsides to this, Personaly I'm not really excited about public static methods to be used as a state factory. There was also an idea from JRL about using accessors rather than own data properties. One downside to this that MM brought up was the Object. freeze would not freeze them. This could lead to failing to protect communication channels. We discuss them with more member of the community, so even we went through the process with the hazards, we would end up not regretting the choice to use own data properties for static public fields. There's no feasible alternatives that applies to change the public/private field syntax. In the slides is an example for why you might want static private fields.
DE: Another possible follow-on proposal: private names declared outside of classes, continuing with the theme of what we saw in this example. This is a proposal that was, I don't know when It has started but at least was considered during the ES6 cycle, back when it was based on private symbols. This little code sample here, is kind of an annotation for the current proposal. Mainly, if you make such a declaration of a private name outside of the class, and you make a function outside a class, you can refer to the private name from within that function. This way, you have a family of functions that are declared lexically that use the same name. A lot of JS programmers having a familly of functions declared that operates on object literals, they call it part of "functional programming". You can see on this slide that this outer #x, if you just have #x, it will just create a new private name in the class that shadows the outer lexically declared. It would seem like a weird deviation of lexical scope to modify the inner scope. There's some more thins to work through, how it exports to modules importing/exporting some of the scoping, and understanding whether its learnable. Understanding how it's learnable, this is why I pushed this in the past because I'm concerned that it could be difficult to understanding this concept of the name being a separate thing. In the gist (gist.github.com/littledan/d3030534cf96075d47228955828f932e ) you'll find more complete list of syntax. But now I'm thinking about breaking this into separate proposals.
DE: The final option is lexically declared elements in class bodies. Allen and Kevin were big advocates of this concept, which I guess evolved into their proposal to replace public field declarations with static blocks. In this example we use a token (
local) to indicate a function which is lexically nested in a class
const registry = new JSDOMRegistry(); export class JSDOM { #createdBy; #registerWithRegistry(registry) { // ... elided ... } static async fromURL(url, options) { url = normalizeFromURLOptions(url, options); const body = await getBodyFromURL(url); return finalizeFactoryCreated(body, options, "fromURL"); } static async fromFile(filename, options) { const body = await getBodyFromFilename(filename); return finalizeFactoryCreated(body, options, "fromFile"); } local function finalizeFactoryCreated(body, options, factoryName) { normalizeOptions(options); let jsdom = new JSDOM(body, options): jsdom.#createdBy = factoryName; jsdom.#registerWithRegistry(registry); return jsdom; } }
DE: There are a bunch of details to work out for this too, ordering of execution if we include things besides functions. And then there's syntax, whether we should use the local token, seems like we're dammed if we do damned if we don't thats so far the discussed we've had. Function declarations don't have observable behavior when they execute, so they don't run into this ordering issue.
DE: Anyway, why we should advance static public fields to stage-3: there's a lot of community feedback lpositve feedback for this proposal. its rare that we gt this kind of strong and actionable feedback. For example: When SGN tweeted about the implementation of instance fields in V8, a response was expressing excitement about adding static public fields as a follow-on.
DE: Valerie Young from Bocoup wrote the spec on Test262 which V8 passes. Previously this proposal reach stage-3 as part of the class fields proposal. This is in the context from Kevin proposal to re-initliaze static public fields. Aside from that, the accessor possiblity, I just don't see any interaction with other proposals that we need to look through and docuement. You can see this document here about which interaction and other ideas were considered. Stage three review will be done by Sathya Gunasekaran. There's no change in the semantics, or in the way they are organized, if there are any changes, they're just bugs that we'll figure out how to revert. Should we go to stage-3?
YK: I should just conflate the first two of my comments. The first thing is I want to express strong support for this proposal. Just last week we added a protocol to ember that is a static symbol protocol. I think there's a lot of use-cases for static fields. The second thing (and I'll just read what I wrote in the queue) is are we willing to have static public fields even if we might never have static private fields? Or is accepting this proposal tantamount to accepting static private fields in a big enough time horizon? If we accept this proposal are we ok to reject other proposal? Some people don't want to reject others proposal
DE: I don't know what you're suggesting we discuss. But I will answer the question of whether it makes sense to add this proposal without static private. I think it does make sense to add this proposal without static private. It seems like there's some cases where people ran into the subclassing issue; some people said that , so theres a difference between having a runtime error when the ??? and having a early error when you type the code not having static private, you will get a syntax error that says that static private does not exist. Potentially, it could even link to documentation containing idioms where you can get the same thing.
YK: I'm trying to find out if there are any people that feel strongly that if we accept this feature we have to accept private static. I want to make sure if people want to have that argument, that we have that argument now, rather than awkwardly in the future which will make it easier for us to get stuck.
DE: Ok, so does anybody want to make that case?
YK: Everyone would be ok to reject private static, even if we accept public static?
(general saying of no one making the case for private static being rejected)
Yk: I'm putting this across in an aggressive way on purpose because we're gonna have this debate later anyway
DE: We already have a queue of people objecting to things, so I think we should just go to those things first.
WH: This proposal makes me very uncomfortable, because it feels like we're offering support for even numbers and delaying support for odd numbers to the future. The thing that we should be defining (and the thing that we were defining until November) is static fields. If we are going to do static fields they should work analogously for public and private. There were some concerns regarding private fields with the
this problem—it's the same
this problem as in public fields. I'm (grudgingly) ok with it, if we have matching private fields. I don't like the behavior of how it interacts with subclassing and
this, but the alternatives are worse. I'm not willing to put up with different approaches for public and private. We should keep the language simple and not special-case every intersection of features to behave unorthogonally.
DE: WH was saying the hazard here is the same between static public and static private.
WH: Yes
DE: I disagree, the hazard is a bit different, to encounter the hazard on static public, you have to write to the field. To encounter the hazard on private static, you only have to read the field.
WH: I know that, but that doesn't change the fact that that hazard is present on both public and private.
DE: OK, so we can just go to BFS
BFS: There was some talk about cross-cutting design and having a cohesive story for all these class features, I just want some clarity on Waldemar's side if the concern is a maximal design for all these features?
WH: Are you asking me a question?
BFS: yeah
WH: I'm not looking for a maximal design. I am looking for not splitting the public and private static fields into different proposals that go in different directions.
DE: I want to mention that we're only talking about fields here... We're currently in a state where at stage 3, we have instance private fields and instance private methods. Even if we added static private fields and static private methods, I don't really see why this proposal in particular is in a special place, also, I just wanna clarify about this hazard to make sure the committee is on the same page (points to this example):
class C { static count = 0; static inc() { return this.count++; } } class D extends C { } C.inc(); D.inc(); alert(C.count); // 1 alert(D.count); // 2
The hazard here is when you write to the count, and the hazard with private static methods if when you read from the private method and you get a type error. One thing that Kevin has made in the past, a TypeError in such a case would make it kind of un-method like from this point of view. If you cant call a method with subclasses and receivers then it isn't really a method. im trying to find some common point in this area.
KG: This is on the same topic, a two-part reply to WH, because I have the same concern about cross-cutting details and not designing features ??? But I think there is sort of an important part to this which it seems to me that we have considered a lot of path foward, like just not do it or static blocks. The important part here, is that it seems to me that every alternative that we consider acceptable is that we still want to do static fields this way. We rejected a couple of things including my ??? proposal, which would involve of change the behavior of static and private fields. If it really is the case that we haven't made a decision on what to do with static/private. I think we can just do static public. This is my argument, for being ok in this particular case.
AK: you mentioned static blocks, and in Brendan's proposal this was present to get nearly the same behavior. I'm wondering if you research into the use-cases of static blocks and whether static blocks would suffice in those use-cases.
DE: one is that it is not as terse as the current syntax. I think people like the current syntax, I don't know if people wants to reply in the queue. There's a lot of JavaScript programmer in the room.
AK I dont know how we could do a really objective study on that. It just strikes me and I'm interested in WH would be OK with static blocks and NO static fields. I'm just trying to figure out some middle-ground here.
YK: I would personally be OK with that largely because the use cases that I care about are protocol cases where someone told you you gotta put that static thing in this. I also think that it addresses the issue of static / private ?
YK: I think static blocks would be fine, I also think it addresses the issue. That's my opinion, I think people who aren't for some reason thinking about complying with this protocol but are thinking "oh I just want to put this thing on this object" would probably find it annoying
MM: ... once you have a static block, then the overhead per field, is you know, classname dot thing equals something. Also on the ??? issue, I want to apologize for orthogonal classes, because public and private are not orthogonal semanticly . I agree with the all WH but if the semantics that comes from ???. Private and public are just too different from each other because private is not inherited. Orthogonality was a false hope. We should avoid a pretend orthogonality if we can't have a real one.
DE: Would you be interested in adding a reply topic to the queue?
WH: Public doesn't inherit either (not even for reading) after doing a write.
DE: please add your item to the queue
JRL: We already have staticaly owned fields with static methods. I could assign a static property that is a function. Disallowing static fields is only disallowing a very common pattern.
DE: ... for example we don't allow proptype field, there's no syntax for doing it. Whatever we do as programmer we always are opinionted.
PZE: It should match ??? which If you assign
C.x = 5 that should be the same as the static x = 5
I wouldn't expect it for sure, I would like it to represent something we already do right now. On the other hand the private property is a bike-shed, can we preserve that for something that has iteration? for now there are a number of reserved characters like tilde and others. This is my point of view, and we can reserve the hash for circumstances where it's absolutely necessary to disambiguate between private static fields.
DE: I'm not sure if I understand the question, in this topic im trying to understand public private fields. The syntax for private field, i'm not proposing that for discussion right now. We're not proposing this right now, but we continue talking about this offline.
YK: As MM pointed out, we don't have inheriting on static/private. So for me when I think about this problem I avoid the syntax with the issue of inheritance.
JHD: I just wanted to talk about static and private field, when we talk about footgun and user confusing. Actual usage tends to ??? a lot. Public class fields probably have more usage than any other new feature in JavaScript in years. It has been many years of very wide usage, so what I was seen the current semantics are expected by everyone—very intuitive. And the foot-guns were are talking about are no different from foot-guns in JavaScript. deviating from that would be a mistake. it's a good thing to improve JavaScript when we can but it's not possible to redesign JavaScript from the first time. Discussing public and static field is fine, whatever we include them or not. The community pushback on us doing anything different will be very large so it means we should think very carefully about... we were unwilling @ and # because of years of documentation changes that would be required. I don't see how we should be willing what to override that since it means going against conventions in very popular frameworks like React, etc. I don't think it's a strong ??? to overide that.
DE: Jordan, what do you think at this point of removing this feature and just using static blocks?
JHD: Decidedly not. that would be terrible, users have been using this for many years,. Changing the public properties in static blocks is not a viable option. Both public instance field and static public field are supported by the transpilers. How much actual, do you have a sense, how much static public as opposed to the public instance In the react ecosystem in 90%, It's very massive. Every class should theoretically define propTypes and default props. Dan right here is in the React team.
DAV: I just ran a search on static/nonstatic proptypes. I see about 4000 matches across 50,000 components
YK: What was the query?
DAV: Its for "static defaultProps =". if I search for propTypes .... thats 8000
MM: Thanks that was very informative.
YK: I searched Ember's code base and I found 5 cases in my case which is much much less than public. It really depends on if someone tells you to do it. And it does seem like react tells you to do it. Ember's doesn't, and we have this usage anyway.
AK: we just heard some strong arguement about static and private but they doesn't address the concern about private static. I am wondering if Jordan—does that argument say anything about what that says to you about static private?
DAV: We don't have a use case for it.
AK: It's nice, it's very useful to hear that static public is very important.
Jordan: So for static private, of all the use cases I can think of most of them are covered by closed over ??? where the class in defined. In the presence of instance private if I want to be able to access (restrict?) private data outside the class, I have no way of doing that.
off of instances of my class. and in that case it would be useful to have some kind of function declaration within that ??? im not saying that hrer isn't a use case for that but static private has not been transpiled the use cases for that havent really come around. and It has not been shiped in browsers yet
DE: The exercise the committee members have done with me over the last couple of months is to have several proposals, either as semantics or as follow on proposals. None of these potential follow-on proposals leads to wanting a change in public static fields. The hazard doesn't seem that bad to me, or from a lot of view points. I'm not sure if it address your point.
AK: It doesn't really, I'll try to explain what I'm trying to say... The usefulness of static public, doesn't really speak to that question at all... What I'm trying to say is, static public is useful, static private is so not useful that we can defer it.
DE: At this point at the stage of stage 3, we already have instance private methods and field, we don't have static private fields. We already made the decision, that we're ok with saying that not all the gray area is filled out. Maybe we can reconsider that decision if its not intuitive. We can talk to educators—those who have experience teaching public static fields to JavaScript developers. I don't know if anybody in the room is involved in interested in leading such a discussion, or has education experience to give here. . AK: I think what im saying is that you and WH are talking past each other. You are saying that its not getting in the way. WH is saying "we should figure out what we are doing"
DE: I've been trying to get this moving for a long time. I got in touch with a number of groups of people to try and get feedback, as well as recruiting champion groups for each of these four follow-on proposals I mentioned. I'm really hopefull that in follow on meetings we will have more detailed discussions about them individually. The fact that we don't have static private field in the language;fora big chunk of JS devs they're not part of the mental model; so if theere's part of the transpilers ecosystem, they are already part of the teaching.
AK: I appreciate your work on this. I'm diagnosing the disconnect that happened in the room.
YK: I am removing myself from the queue, but I still have a point to make. I think from the fact that people feel strongly about public/private static. even though WH and I have the opposite perspective. Due to the fact that static private does not inherit we do not have this yet. people talk about filling out the grid. For me filling out the grid is not a good solution
DE: This proposal attempts to follow Allen and Mark's orthogonal classes framework, which was a "grid with holes". By doing this, the runtime "hazard" is converted to an early error.
YK: It's not what people are worried about. When we'll add public, people will be very confused why private is missing. An explanation that the committee just didn't fill out this grid box in 2018 won't be very satisfying.
DE: I think it's the case either ways, because transpilers will continue to support static/private fields.
YK: I think you are overly aggressively rejecting static block, multiple people presented it as an alternative
JRL: back on the static block init, it's literally no better than declaring the prop after the class declaration. All of done now it's increase my identantions twice
AK: Except that it lets the private fields and methods be in scope
JRL: That's not what it's being used for today
DE: Do we have consensus for this to go stage 3?
YK: I don't want to be the only person that objects
WH: I also object. I want the cross-cutting issues addressed.
Conclusion/Resolution
-- No concensus, does not proceed to stage 3
Decorators towards Stage 3
(Brian Terlson, Yehuda Katz, Daniel Ehrenberg)
DE: This is a status update of the decorators proposal which is at stage two. I think we're getting into the last stretch and my hope is to present this at the next meeting for stage 3. What are decorators: they are a mechanism for meta-programming in classes. Decorators let you modify a class declaration. There is an earlier version which is significantly different from the current proposal which is implemented in transpilers. So as an example, here is an example of using field decorators (example). That comes up in multiple frameworks such as Polymer and Salesforce. To modify a field declaration or to modify a calls or method. We heard more about use-cases in the previous meetings.The core semantics, classes methods and fields are reified into special descriptor objects. You can see a full description of the API and ??. So in the past couple of months, YK and I arranged meetings with various stakeholders to go over the API and get feedback.
DE: So, first we met with native implementors, in this meeting there would be JSC, chakracore. There were some scheduling issues. We went over the proposal, we went over the
metaprograming.md file. The biggest concern from the implementors meeting was that private name type was a primitive, which leads to additional implementation complexity. The private name type was a reification of private ??? types. In particular they don't refer to the syntactic constructs so
#x but
#x indirects to another underlying name, which is a different name each time the class is evaluated. Private name is currently specified as a primitive type. the stake holders identified this as the biggest overhead and in the end there isn't really much of a reason for it to be a primitive, another alternative would be for it to be a frozen object. And I have a draft of what the semantics would be, we just need to write it up in specification text.
DE: We also met with framework authors, Polymer, Ember, Mobx, view. We went through the decorators proposal with them. Some interesting feed back here is that the additional features that were added in this iteration of the proposal is that ???. Adding decorators or first class decorator support for ?? fields, would be directly useful with ?? there is also an issue with parenthesis, which I have more description of in a future slide. One action from this is to continue with the full proposed feature set, we thought about moving/adding certain things. But this meeting seemed to solidify we're settling or more on this feature set.
DE: We also met with transpiler authors, Google closure compiler, typescript and babel, which all implemented es6-> es5 and are at various stages of implementing various language features that we at TC39 are currently specifying. We got some feedback about some things would be a little bit verbose to compile or might have layering issues in compiling. Some feedback from Google closure compiler which is also used as a JavaScript static analysis tool. For example, dead code elimination, which closures are especially good. Decorated classes do not lend themselves so well to dead code elimination. We in the champions group think that's OK, because decorators are expected to be for more dynamic features. The hope isn't to take some static code and add a bunch of decorators and make everything worse but to fill use cases that are used in more dynamic language use cases. So, when talking about the compiler output there's sort of a trade of that cross-compilers can use. They will have to think about what they want to do in some of these cases. Another piece of feedback was again asking the question "should we allow decorated private field methods at all". This is something we discussed in the previous meeting and I recorded the champion groups decision that we should because they're important use cases and there's and important privacy model is that only things within the class and decorators are only within the class. It's well defined, if you call out to a decorator, the decorator for this particular field or method then this decorator can only see this particular field or method. If its for the class as a whole, then it can see the entire contents of the class. That's the model we are going with for now. Spec updates: there was feedback in the Jan meeting that element descriptors should be more ?? so we tweaked that. There was something about coalecing the getters and setters so we implemented that. We reverted that and element ordering change ??? The other thing that we added is ...? to string property. which should be easier for branch-x spec. There are also a number of spec type and documentation improvements from new contributors. There is also an increase in new contributors which I am really happy to see that.
MM: I heard brand check and I don't see anything on the list about it?
DE: @@toStringTag is used. The last time we discussed adding actual brand checks in JS, some committee delegates stated strongly that we must do something similar to @@toStringTag, if anything, so that's what this proposal does
MM: So, its not a brand?
DE: It's not a brand but it should be usable in practice. When you want to have a function that can both be used as a decorator or as something else. You can check the @@toString to see whether you were given a decorator descriptor.
DE: Specification questions: as I was just mentioning, this issue with PrivateName. This is really the top concern for implementations that I've heard. Some parts of this proposal sit in the front end of the implementation but it also adds complexity to the back end implementation to add a new primitive type. A new primitive type would require the need to think about how property access, well have to specially update ToPropertyKey. That's what the current specification text does. and all of this is complicated because of how heavily optimized property access is. There are just many different implementations of it, there are a number of different usage patterns. By making it a frozen object rather than a primitive, all this complexity is avoided and decorators don't need as many cross-cutting changes. Spec text is not written yet, but there is a plan in a linked issue.
DE: Parenthesis? So this is a long running issue from years ago. One issue was that back when decorators were passed to the class as an argument. Then it was hard to overload, a function which could be used as a decorator and a function which could be passed as an argument. To determine whether it was being alled in the decorator sense, or if it is called as a function within a function. That particular case is actually fine now, because its easy to tell whether we have a function or decorator. We added
toString tag to make it even easier. maybe you have a function that has a certain property and you want to overload that. When we discussed this at the framework meeting, there were mixed opinions. some people are strongly in favor of... I should step back. when I say adding parenthesis. With this change in the calling convention, when you have @ decorator and class we would make it so first you call that function and undefined?? So the champions' recommendation is to not do that and stick with the simpler model. Where we just evaluate that as an expression whether its a function call or not, and then call that resulting function with the class decorator or the element decorator as an argument. So, for both of these questions I would be really interested in more feedback.
DE: So for next steps, this is currently a stage two proposal. We don't have Test262 tests yet. A babel implementation is currently in progress (Nicolò). Someone (Nicolò) is iterating on the babel version; he was very helpfully reporting specification issues and making fixes. I don't know of any other draft implementations apart from that one. There were five stage 3 reviewers. I haven't heard back from any of them. I would be interested in any feedback from these reviewers for this proposal. The plan going forward is apply the settlement that was noted here and any advice that we get back there. To continue to work with Babel on the implementation and to propose decorators for stage 3.
MM: Specifically about the PrivateName object, which is unsurprisingly my concern. First of all, clarifying question and then I will say what I want to say: It is distinguishes initialization so the assignment that was not present in the
WeakMap throws.
DE: Yes
MM: I somehow had missed that it had ever been proposed as a primitive type. I always assumed that it was an object. I want to take everybody time to emphasize why we must reject it as a primitive type; which is, where is the mutable state? Primitive values are immutable. Values go through membranes without modification. You cannot proxy them. And a private name object you can use to associate a value with an immutable object. How can you do it if the object is immutable? Clearly the state is not in the object. The mutable state of the collection has to be semantically in the
WeakMap-like collection. Therefore it cannot be of a type that implies that it itself is an immutable value.
DE: I see. There is something else about the PrivateName. I don't know if it will imply anything here. We're proposing PrivateNames to be a frozen object, and the reason is because BFS has raised the concern that any modifications to the PrivateName prototype shouldn't affect decorators, as this would be a way to intercept certain reads and writes to private fields. So the way that the interface would work is that rather than having PrivateName be a property of the global object it is passed as the second argument to every decorator.
MM: Thats a PrivateName instance correct?
DE: No, the parameter is the PrivateName constructor. PrivateName instances are passed in the key property of the class elements. We need to represent private names that are there created syntactically, but we also need to be allowed to create ?? that ?? If you want to add an @observed or @tracked decorator which will do some sort of action on the set. You need to be able to store the underlying state similar while replacing the name of that field with a getter or setter, so for this sort of case, We heard from framework authors who were like "oh wow" but immediately pulled out some mangled code which having private names be this thing that can be used that way is important
MM: I see the need for access to the constructor, I see the need for instances to be defensive, with regard to somebody does prototype poisoning. In some sense that is the entire rationale for frozen realms, lock down all the primordials. I'm worried about trying to do piecemeal ad-hoc locking of things down in a way that create an appearance of safety but has none of the safety. If you poison Object.prototype, everythings going to be confused anyway.
DE: Yeah this is to protect from a very specific attack that BFS described, BFS do you want to talk about?
BFS: So my concern was coming from Node and Node core in order to make a robust core. A lot of Node code is written within an unsafe realm so we can safely preserve a lot of these. We can safely preserve a lot of these safely assign them to a variable at a time. I'm having trouble figuring out how to do that safely here an issue thread to have the getter and setter separate from the key itself. So not a member just a function where you pass in an identity for the key and then if its the set pass in the value on the right, of the second one. Basically the whole point of this is to ensure that decorators can be used within Node in a safe way, when we're already in that realm which you have stated that some of them are safe.
MM: So you mentioned how you make things safe when you run first in a realm that might get corrupted. If private name is defined as a
WeakMap-like collection in a really symmetric way to how map is. The way you do that safety is for each you pick out the methods from the prototype every method you want to use safely you save it off on the side. And then you apply it to.. and you save the constructor on the side. Ideally in an uncurry'd form and then you apply it, you save the constructor on the side to create instances which are genuine instances.... All this is very painful, but if you're programming in a corruptable realm this is the pattern. I don't see why this case is different form any other case in a corruptable realm.
BFS: I think in particular at the time I originally voiced this complaint, PrivateName was being propposed as a primitive.
DE: I don't actually see how that changes anything. but there were methods on PrivateName.prototype, as originally proposed. If you can run code when a Realm starts up, you can copy off the methods. I probably wasn't very clear when I tried to make MM's case to you when we discussed this several months ago.
BFS: I can look it up on GitHub but I think I understood it. This is a very difficult topic for me.
MM: Clearly the PrivateName needs its own very detailed careful discussion. That should not happen right now because its a detailed discussion that was focused on private name and I think that as long as PrivateName becomes something. and as long as it acts as the right kind of
WeakMap collection
JRL: (Context: tc39/proposal-decorators#43) I tried to argue for the same proposal that you just mention so that you create a PrivateName object instance? The solution I came on was using syntax to construct the private name securely. The private name is a constructor but you can't new it, it will just throw. The only way to properly make a private name is to use the private key syntax so you would say
private x and
x would be a private name and would have own properties, get/set and others that would allow you to do
WeakMap properties. That would be the only way at this point to secure a private name instance object. In that case you don't even need an instance object at that point, because you can just use access syntax much like you have privates in classes ? You can just define a private property on the class or you define a private property on the object (using the lexical private) and use it as though its a regularly property on the object (like a private instance field on a class). YK has added a gist to the chat a couple of times that fully describes this.
DE: I'm all for continuing to investigate that path, which is different from what I'm proposing here. At the same time when we have such a proposal I'm not sure if we could jump to a conclusion right now, because that syntax has also been proposed for the shorthand, while I'm not really sure what that is trying to solve.
JRL: I didn't quite understand that from the GitHub issue maybe you can clarify that
JRL: My issue is how do we give private name to objects as well as classes. so my idea was to make it a
WeakMap that you can't monkey patch. Doing that exploration private name, lexical declaration instead of declaring a private name instance would be very unusual in the current spec. It would be better if it was just a primitive that we could work with.
MM: I think a lot of these things were directed at me, can I respond? The key thing in what you said is security, and I want to ask you the same question I asked BFS. If you are programming with these thing s in a normal manner, inside a corruptable realm. Then the monkey patching of Object.prototype and Array.prototype then it will be corrupted anyway. So whats the safety that you are concerned about.
JRL: I think the concern is the node concern even if you do the monkey patching.
MM: The internals are in a special realm that can't be monkey patched and what I heard you say is that the code is monkeypatched but it can be used awkwardly but safely.
BFS: Humm, yes but to an extent if we allow anything like user-land decorators we might have issues, so that's a long conversation. You could create an uncorruptable realm and create the primitives you need in there and use those in an uncorruptable manner.
MM: In which case te safety that you are seeking by transitively freezing the ??
BFS: That is a performance bottle neck for us
DE: Sounds like there's a lot to talk about for private names, maybe we could have an offline discussion between this meeting and next meeting about the details
DE: yeah I don't think that we will be able to get a resolution here. Lets make a separate meeting
YK: I just want to thank DE for his work on this. it may not be obvious to people that there has been so much reaching out to stake holders. I think he has done a great job navigating what was a very complicated process
Yk: I was certainly out of my depth when I tried to champion it with BT and I just want to thank DE for the difference he made
WH: I'm curious about one of the things you raised in the presentation, which is the behavior of decorator argument parentheses. Some decorators take arguments; some don't. Is there a case where the same decorator either takes arguments or not? Currently
@foo and
@foo() are specified to do very different things, but there is existing precedent in the language to make them behave identically—
new expressions work that way. You can omit the parentheses if you don't want to specify arguments.
DE: Err, some decorators take arguments and some don't and some are overloaded between taking arguments and not taking argument. To your question on the queue regarding operator new grammar. I think we would re-usue the grammar. some people have also raised concern about this. So there was a thread where Alan suggested we create an object for this case and that we call this decorate method on it, and that would be the semantics. Alternatively it wouldn't be clear on the benefits.
WH: I'm not proposing creating an object. I'm pointing out there there is a precedent for taking an object and omitting the parenthesis and have it mean the same as if you called it without arguments.
DE: The concern is whether we'll be coming up with more and more cases where the parenthesis will be optional. There are other languages that you don't need parenthesis. in JavaScript we cant work like that and we. won't work like that. We can only add specific cases where there's explicitly a function call. The default that i'm leading towards is not adding this additional case, decorator authors could if they choose to overload their decorator. It's basically a one liner. in the first line of your decorator checks for arguments, and that is how you can return this function. On the thread so far people have talked about a few different overloading cases, no one has brought up that particular case.
DE: You can also use arguments.length
CPl: We did talk about this. I think Ron has some position on this
WH: Things like that seem very hacky and brittle. The way the
new operator behaves seems to be much better.
DE: I'm not really sure what you mean by "things like that". We're just applying the decorator as an expression.
AK: It would help me to understand why not allow a call if there are no parenthesis because it would be confusing to have more and more calls that are included without parenthesis (implicitly).
DE: The other point is that it increases stylistic variance. If you have a decorate like @nonwritable which doesn't take any arguments there would be 2 ways to invoke it and creating another choice point and that could add more friction for developers.
WH:
new does that.
CP: the strongest point from Ron was that you might use a decorator like @foo.bar, with and without parenthesis, and what will be the context when calling
bar?
DE: Here is an example decorator library that has this @ key. Which has this @t.expose and the only way to make this work is expose would sort of close over the key because you would lose the receiver if you call the function and just use. the results. But maybe its not very important, since you can make it a getter that returns a function bound to the right receiver.
YK: I don't have a very strong opinion on this question. It has never occurred to me that having no parenthesis could be an option. So now I don't in the world where you could differentiate. In that mind though I don't mind adding auto parentheses. TLDR; from my perspective its fine. There is one thing. It should be clear that decorator and function use should be clear since that's addressed I don't know what solution we could provide here.
WH: I am strongly in favor of what you call "auto-inserting
()".
RB: My main cases and concerns were around , we tried to use new semantics what the receiver would look like, I had a couple of soft concerns because of it basically requires that every decorator is written as a decorator factory. Seems like overkill for scenarios like @readOnly or @ennumerable and simple decorators where you are making a small change to the descriptors. It feels like overkill in those cases and again my concern was mainly about the receiver. Beyond that I would need to pick through a couple of issues I commented on and get back to you
DE: It seems like you could handle that by having it instead of a property but have a method that is a getter and could bind the receiver, it's still possible to create sufficiently expressive decorators in this case, does that address your case?
RB: my concerns are not really strong concerns, my conerns are.... the big thing here is that we could have call semantics that are not exactly the same as new semantics. We don't have to instantiate a new instance of key.Exposed, so exposed doesn't have to be bound but rather its a call that if you have parenthesis that are added for you. Parenthesis vs non-parenthsesised new has had problems in the past. Where if you want to use something you have to add parentheses nad dot off of it. it's one of those things where....can possible be confusing to people, where the current semantics are, i'm calling the thing that you wrote with a descriptor. that's the function i'm calling. Those semantics are clear. it's just the underside of if I'm writing the decorator or using the decorator in my class do I need to add parenthesis or not which are mostly documentation based.
WH: The issue you raise of following a
new expression with a dot that is purely a thing that arises in expressions. This cannot arise here. The decorator grammar is a very limited subset of expressions: essentially either parenthesized full expressions or limited member expression. I raised an issue a few meetings ago about what happens with a decorator such as
@foo().bar, and it was closed as won't fix: syntax error.
DE: So we do permit arbitary expressions if yo just put parenthesis around the whole thing.
WH: If it's not a parenthesized expression, the call must be the last thing in it.
DE: I agree about the particular issue that Ron is raising but there is a broader question of if we want to add more cases of calls without parentheses.
WH: For decorators I think the same rationale applies making parentheses optional. With
new, the 0-argument case is common on enough that you don't want to require the parentheses. You also do not want to distinguish the empty parentheses from not having them.
DE: Ok so how would the committee feel if we moved forward or back on this. With the grammar based on the new grammar?
YK: Any objections?
AK: that sounds like its a big piece of feedback.
DE: Really the champion group doesn't consider either option to be fatal. This is an aesthetic preference as far as we are concerned. If anybody wants to argue in the other direction that would also be useful.
AK: As a novice in this area, this was one of the things I came upon when reading the spec, it seemed unfortunate to have this difference. I'm mildly in favor, if you say "ill make this change based on a 5 minutes conversation" but that sounds not the case, sounds like this has been discussed quite a lot of times
YK: I basically wanted to reiterate what AK just said. We don't really care. I think from a user perspective it actually is very common to not have the parenthesis in practice the syntactic use is going to be sometimes yes sometimes not, we're either gonna do branding as a way to detect it or we are going to insert the parenthesis if we can't just find the problem and I think at this point we just want to come back to stage 3, if the committee doesn't care we will pick something. this is an issue that we have discussed at length.
DE: Members of the champions group have discussed both options. Thanks for all of your feedback and please, if you want to review the proposal in more detail you go can go onto the proposal repository and discuss on issues.
Conclusion/Resolution
- Decorators remains at Stage 2
- we're thinking of going toward adding the parenthesis
- follow up more on private name semantics details
12.iii.b. What does 1JS mean in a world of transpilers?
(Daniel Ehrenberg)
DE: When we talk about evidence from the ecosystem, there's two different examples within the ecosystem. This comes up in the Classes 1.1 proposal, and in the decorators proposal. One is powerful and minimalist: adding fundamental capabilities, refrain from superfluous syntactic sugar. For example the decorators to transpiler authors. We could focus on things that we could fundamentally not construct instance vairables, strong encapsulation. And we could come up with a wide variety of superset languages such as JSX. Not on our mental agenda as things we want to do. Another vision is to have a common language, including more usuable things: syntactic sugar addit l ergonomics, opinionated things here. We can look to the ecosystem of compile to JavaScript languages for inspiration and what works well such as Babel plugins. When something is widely supported in this ecosystem, we take it as a strong datapoint that something is needed for the language. For compiled languages to work together in the ecosystem. People wouldn't want to change their code from this to that, "but this is irrelevant because they're using language X". These are the two views we're seeing together. I want to Talk about ecosystem alignment within TC39. The word of those JavaScript transpilers is going in the direction of ?? Based on the fact that a lot of them have these policies to only add standards on a TC39 standards track. This isn't for all of them, but for example for TypeScript. In order for it to ship, it needs to be in stage 3+ and babel only ships things that are at least Stage 0. It corresponds to at least discussed in TC39 (or by a delegate). There are other parsers such as Acorn, which only parses Stage 4. Other language authors. applying processes as language innovators, which are creating new languages with these innovations.
(NOTE: we will fix this at the end of the session, Sven: I fixed it)
BT: I wanted to add additional context here. The users of transpilers, developers are, this is the result of research by TypeScript, developers are extremely skeptical of new features that are not on the standards track. On the typescript team, we've heard endless feedback that a handful of features in advance of standards (just features TS users wanted) that didn't make sense to go on the standards track, and those all came back to bite us in the end. The biggest problem I believe is that if you're not on the standards track. Anything you do that isn't on a standards track is possibly problematic for your users. TypeScript is said to be a separate language from JS for this point.
DE: One thing that's been a factor, a vote of confidence for us, if this ecosystem saw TC39 as stagnant, they wouldn't have adopted these "must be on standards track" policies. Dave Herman proposed the One JS. We don't split out JS into many more mode than we have now, and that we harmonize among these modes as much as possible. I want to ask, should we have two JavaScript? I think the ecosystem of tools will have a language that will evolve. Such as public fields, private fields, decorators. So I think, ecosystem-wide there should be some sort of standard, if we don't have 1js, 1js here is seen as the standards track policies, So, an issue with the current status if we don't continue pushing forward with decorators, people may have trouble (due to policies, recognition of the development of the feature) flipping these flags on. Ultimately, if TC39 doesn't want to be the body for these types of things, or to defer to transpilers, they should be coupled with a different standard (not EcmaScript?). I wanna say, 1JS including transpilers ** (big ast) JSX for a typed language, we won't put these things in JavaScript. (this doesn't make sense, missed something) Such as these class features and decorators. We can talk about advantages/disadvantages of 1vs2JS. Advantages: minimal powerful version of JS, if a team chooses they can stick to minimal form of JavaScript, It may be simpler to implement, maybe with respect to optimizablility this feature should, some of the features proposed don't give a lot of optimization anyway, but just add complexity. There's some cost to going through with this divergence. We could view these compile to js languages as out of scope. There's a lot of JavaScript programmers who consider the complexity, because long term they need to see the divergence (?). At the same time, even if we do do this, we stil need to take into account these other languages if our users care about interop between standard JS and the other language. If our users end up caring about the interaction. This happens in other languages as well, for example C++ syntax in Objective-C. How many users use Objective-C++? Not many but they can still be considered a stake holder. We also must take into account, Node and other developer tools, which have certain sets of mismatches —Maybe those could be solve in other ways. When we figure out a language feature, how do we decide whether it goes into real JS or this other language? What are your thoughts on this language?
BT: no I think you covered it, lets go to the queue
YK: so I want to stand in favor of 1JS. For a few reasons, compile to JavaScript languages support features with a long life. For example if we treat private state as something that can be punted off to the ecosystem, You can imagine different transpiliers and the constraints they have. There's also a runtime requirement that requires coordination. For example a library that wants to use a decorator, we need to share a declaration between those states. But I might not want to force people to use a particular transpiler to use the feature. Increasingly transpilers don't want to have that role. People do perceive TypeScript extensions to good features, People want the TypeScript team to standardize certain features, but specifically DON'T want the JavaScript team to implement them as well.
AR: We've been in the HTML world trying to solve some of these problems. With all of the other pieces of the platform, the difference between high level and low level feature implementation, we perceive that JavaScript has been at the bottom of the stack. There is a natural thing for JS to do, to achieve the expressivity that people want. Some folks have argued that transpilers should go off and do their own thing. We think of this as admission of defeat. This matters in the real world. (Shows phone). This is a phone that'll be sold for less than $100, and it'll be with us for the next 5 years. The chip in the phone is a handmedown from 2014 (bad chip then, bad now), This is what's currently drowning in your JavaScript. Any code run server-side as opposed to client-side on this very low powered phone is a much better experience for the user. And anything that we can reasonably build into the platform will decrease download size and improve performance on low-powered devices and networks
DE: one case where this comes up is transpilers will make the output for decorators will be relatively large.
AR: I'm currently reviewing 1MB of output code, To try to understand what is happening in that product, once again, looking at this device, Where we have diverged from API that's diverged from the standard features. This is the natural process of moving off the evolutionary path; the fact that you. That fact that you can deliver custom non-standard code, is not a vote that you should. To deliver more of that value to more people more of the time
MM: An issue regarding one of the position. There's the 1JS position, trying to improve the language through syntactical features, that we've talked about. There's the separate issue, there's the compile to JavaScript languages, then there's things like TypeScript and JSX, which are trying to be standards track, addtional syntax, widely used, but not standard. The issue that I want us to pay attention to, is that while we're participating in that structure, We need to pay attention as to what grammatical constructs to keep reserved for those extended languages. Such that we don't create syntax that conflicts with widespread extensions. We've been doing that, but we haven't been doing so in any systematic way. We know not to conflict with TypeScript or JSX syntax, But for other people, there's no systematic guidance on what syntax they can use (because we won't evolve and collide with them).
DE: Interesting idea, how is it related to the idea that these compiled languages WANT to align with TC39, rather than innovate themselves.
MM: Take a look at TypeScript and JSX, it is conceivable that at some point we will bring into the Standards process some sort of TypeScript like typesystem. But we've got several type systems that use the same syntax. Flow and TypeScript collide on syntax, so there's some sort of reconcilliation process needed. And it's our job to do that, when the state of these things is such that these major players would rather complete their own standards than integrate in some other system. We have not been hearing that from type script and flow. likewise with jsx I haven't heard anyone proposing that we reencorporate that. This kind of brings us back to the E4X use case. Even if we included those things, here's a particular syntactic space that we reserve for them. And at some later time we engulf and devour. That's a tricky coordination with people have reserved that syntatic space for their extensions. If the dominant desire in those ecosystems is to continue to develop on top of JavaScript rather than coordinate with the JavaScript standard. Then I think we should respect that.
Pedram: Are you suggesting to maintain two specifications, one for regular JavaScript and one for compile to JavaScript?
DE: we disuccsed that in the case of deocrators, where YK expressed that this should just be Transpiler feature and not standard. We've heard several negative responses from community members.
Pedram: My other question is there's not one specific compile to JS.
De: I'm talking about other features like Decorators, that are not specific to ???
MM: I'm going to take that as a clarifying question to me, so I can answer. What I have in mind is a proposal to standardize the TypeScript syntax as an extension, but not the semantics. We rejected that because what does it mean to have the syntax without the semenatics? What I am now suggesting is realizing tha thtey are converging on multiple syntactic spaces as a single syntactic space, with the goal to mutually decide to converge on the lessons learned. We should standardize some of these syntatic spaces, as reserved for expierimentation (lol) for these authors that we won't take over.
DAV: In case of TypeScript or Flow, and JSX. What about something like public class fields?
MM: So public class fields would not be in this catagory. That's the kind of thing that people who are prime to introduce it would run into the same problems as BT (context?). This is the kind of thing that we won't use unless it's standardized, so there'd be pressure to standardize this at the language level.
BT: My queue entry is about this.
BT: The difference between JSX and TypeScript, and something like public fields, is how easily separable it is in developer's minds. It turns out that developers are asking, "hey this should be standardized? help...". There is an expectation in TypeScript users that we should work forward to this kind of future. There isn't much issue separating TypeScript from JS. JSX falls under the same thing, it's very contained, no side effects in other parts of the language. As a result. Some people saying it would be great for JSX to be standardized, We support JSX and TypeScript, for people that wanted it, so I think that the syntax reservation is an interesting idea, but it's not a syntatic thing. I don't know how to describe what that reservation is.
YK: I think that the claimed thing that we reserve things after the colon. That is not a valid way to reserve a syntax in a grammar. That's not the technical reason, but the stuff after the colon is not where you stop. It's a non-expression in a syntax specific to the language. There already have been cases where if we really want to preserve it, we have no way of doing so.
DH: You have just expressed a very big concern I've had with this idea.
WH: I agree
(general agreement)
YK: We have not done it and what we're saying doesn't make sense. We are trying really hard not to collide.
DE: that's besides the point.
API: From the user point of view, we gave TypeScript to thousands of peoples, one of the decisions that came into play was jumping into the compiled language what's the safety valve? If you have to escape, because a problem happens, what's the out? The decision that sealed the deal was that the output is just JavaScript. That was good enough. I can't speak for everyone,, but people don't speak of it as a different language— Generally, people don't think of it as a different language. People want to program in JS.
DH: Not a question, I missed the first couple minutes, I don't really understand what the problem statement is. I definitely I would be very sad to see the 1JS sentament to lose mind share. It's not a precise concept, but We're here to do standardization, so we are the central point, naturally that's a broadining and widening thing over time. That means that we'll scale the way we work, but if we fragment the development space (???), some of these working groups Ideally, there's a nice evolution where we can keep evolving where we work and coordinating how we work so there's some cohesion in a single spec. Last couple of related poitns: It feels to me a lot of what we're describing as stuff we can't standardize as either a failure or a success of the process. It's relevant to the work we do, so it's not finalized and we have to allow for that ambiguous middle state, Just because something's not standardized doesn't mean we have to split it off in a separate space. We tend to split out from the main trunk—but They often don't end well, There's stuff like Annex-B, there's E4X, and the whole thing that lead to 1JS in the first place. There's an alternate universe, where I don't know what's meant by 2JS, but whatever it is I don't like it.
DE: This has been interesting, but we don't really want to splinter JS. Maybe this will feed into other proposals.
Conclusion/Resolution
- "this has been an interesting discussion;" no consensus needed.
We discussed this morning if we wanted any clarification about Class related conversations.
DE: Brendan made a proposal this morning, do we have a conclusion on it? Do we want multiple competing proposals, do we want to return feedback to a single proposal?
-- Return to the class 1.1 feature update discussion. --
YK: So I enumerated a bunch of things, I'll narrow it down. I think we should reconsider whether we want a brand-check. Whether we want private methods, Should we reconsider initializers? I think we should reconsider having a mandatory leading key word
BFS: I would like to explore them both as staged proposals, we can learn by comparing them. We shouldn't consider this as a conclusion of one or the other.
YK: What does it concretely mean to say that both are active?
BFS: We can continue discussing both propsals, and move forward with the existing proposal.
YK: What is the difference between that and continuing with the proposal that we proposed this morning. I'm not trying to troll, but genuinely curious/confused.
BFS: I think that I am ok with advancing class 1.1 proposal separately than the exising proposal.
YK: So, you're suggesting it would be a big offense to keep the 1.1 proposal over the existing one.
BFS: I think it's fine.
DE: I wanted to take the time to analyze the 1.1 proposal, as soon as possible. Before this meeting, I don't think we considered the -> syntax in the committee, just on GitHub. But other things we did discuss. We arrived we arrived at this particular case. I don't think switching to a different position will be easy and if we do want to do that, that I hope we do so soon.
BE: I'm not going to champion, Allen probably won't either. It may just be a strawman. My intention wasn't to advance a proposal.
DE: I don't see any new cross cutting concerns. Reading the explainer, I don't understand if there's any synergy -- to see If we do want to make some of these changes. If not, maybe we come with a conclusion. We have very carefully considered various cost cutting concerns, and I'd like to discuss those.
AK: I would like to focus less on the fact that we have made decisions on this and that point. The thing that's most important, is that the presentation this morning, is that the classes 1.1., seems to be about a desire to address concerns moving forward. The champions that have been doing this have the thing in there heads. As you get out of the whole commitee that's less true. I believe the discussion helped clarify some of that stuff. I don't think it makes sense to treat Classes 1.1 as a separate proposal. I take it as a, OK, this gives us a chance to see the direction we're going. I don't think it's a, it's more a gut-check for the committee, to focus on a series of details, not a separate proposal.
BE: I don't think we're missing concerns, we're just weighing them differently than we would doing things piecemeal
MM: So, I think that very concretely, what has been said about Classes 1.1., We need to consider some of the issues on which we already reached consensus on, to be still open for reconsideration. That means, when discussion to consider them arises, we don't shut them down as reason to not discuss them again. In particular, the leading keyword, I would like more evidence about how normal users see syntax. How they form understanding between declrations and assignments. But I find the leading keyword compelling, prohibiting
this initializers compelling, the issue about the brand check, I like the brand check, but I think it should be reexamined, until we declare consensus.
DE: I want to clarify what I meant. I agree that we should be able to reconsider things. If we think about it harder and come to a different conclusion, we should do that. My response to this has been in part making response tin the air, and in part examining committee history. Maybe it's hard to follow the arguments in the air that the original reasoning is still valid. The arguments that were made that have led us to where we are need to be recognized and considered.
MM: We need to consider the stated arguments, just consider some of these being not fully at consensus. Maybe the consensus will change.
DE: I'm not sure if that's a productive way for us to progress... I'm not sure how we should structure this process-wise. If at any point, people make a particular case, then we say we have consensus, then we make the same case again that doesn't convince people anymore. I'm not sure if we should retract proposal stages because of that situation.
MM: Depends on what you mean by progress? I think that the reconsideration is that the lesson that we should take from classes 1.1
MM: I think that the things that some of feel should be reconsidered, should be reconsidered and not be in a state of consensus.
AK: What does it mean to not be in a state of consensus for things we have consensus for?
MM: We historically have had consensus, the issue is do we have consensus now? We've generally hold consensus. That the disagreement after the state of consensus counts for less. But we should be re-asking, do we have consensus now?
YK: From IRC, initiailzers being rare, forgot that pepole acutally use it frequently in some sub-ecosystems to create a bound method. We should consider that usecase strongly before dropping the feature. I.e. fields initialized by arrow functions. We should consider that before dropping support for that. I don't use it but it's popular. Fields who's initiailzer is an arrow function. The arrow closes over
this, so it creates a bound method.
BE: People really like that.
YK: There's no literal this context, it's implicit.
MM: oh my god, If there's no this, you can't be sensitive to what it is...
BE: Someone tweeted and example of this at me, if I may, consensus is fuzzy at the frontier. if it's really fragile, then why even have it? If consensus is retraced so quickly, you really didn't have it.
MLS: I have a softer view of MM's consensus. It causes me to want to move slower on the existing proposals going forward. In my mind, we need to consider the existing proposals in light of what was presented in the classes 1.1 proposal.
AK: To respond again to MM, I think that the presentation is a nice way to rethink the cross-cutting concerns. For me to go back to the state that I was in regarding the Stage 3 proposals. Given, that stage 3 represents some level of vetting and a lot of thought It represents a lot of details on the grounds. Just because someone asked a question doesn't mean we've lost consensus.
MM: I agree with that
AK: It's a burden to do that research in a round just to re-do it in another proposal. It's hard to discuss whether
# or
-> is more intuitive. OK, we've got people in one meeting to agree, but then ignore that.
DAV: I heard that the arrow functions use case, this one is important in React, because there are performance issues, we need to memoize things, so we often use arrow functions. We've enabled this about a year ago in Facebook, I've searched in our React repo and we have 16000 declarations like this in Facebook, in our experience people did not encounter any difficulties about this. so this is an empirical point regarding this
DE: Even though we don't have another topic on the agenda, maybe we can add another one tomorrow, based on what KG laid out the differences between the 1.1 proposal and the current one. We could go through and see if we want to adopt any of them. Or we could say that we leave this topic to a particular champion to go forward on that direction. It's been a good excercise to reconsider this. Ultimately, I don't see any changes to make to the proposals I'm championing.
DE: Is there any support for someone to champion that?
AK: Can we narrow down the list of things? Some interest in the leading keyword, some interest in getting rid of the brand check. Interested people should talk tonight, so the list isn't every single difference between the two proposals. I think we should list the topics left and discuss which ones to discuss.
YK: An uncontriversal one is the static block. Dan introduced it, and it seems to be relevant. I would like to do a presentation on lexical private declarations, in light of the fact that it became an important debate today. (NOTE FROM DAN: I did not introduce the idea of a static block; it's an idea from many other programming languages such as Java that was previously discussed in the ES6 timeframe)
Conclusion/Resolution
- Will not be multiple competing proposals
- Open to continuing to discuss issues that people may raise
- Will continue to discuss tomorrow | https://esdiscuss.org/notes/2018-03-21 | CC-MAIN-2019-18 | refinedweb | 24,450 | 72.05 |
#include "usb-private.h"
#include "usb-mouse.h"
Create a private structure for use within this file.
Destroy a private structure.
Initialized pDev as a usbMouse, usbKeyboard, or usbOther device.
Turn pDev off (i.e., stop taking input from pDev).
Read an event from the pDev device. If the event is a motion event, enqueue it with the motion function. Otherwise, enqueue the event with the enqueue function. The block type is passed to the functions so that they may block SIGIO handling as appropriate to the caller of this function.
Since USB devices return EV_KEY events for buttons and keys, minButton is used to decide if a Button or Key event should be queued. | http://dmx.sourceforge.net/html/usb-common_8c.html | CC-MAIN-2017-17 | refinedweb | 115 | 78.25 |
import "github.com/jmhodges/levigo"
Package levigo provides the ability to create and access LevelDB databases.
levigo.Open opens and creates databases.
opts := levigo.NewOptions() opts.SetCache(levigo.NewLRUCache(3<<30)) opts.SetCreateIfMissing(true) db, err := levigo.Open("/path/to/db", opts)
The DB struct returned by Open provides DB.Get, DB.Put and DB.Delete to modify and query the database.
ro := levigo.NewReadOptions() wo := levigo.NewWriteOptions() // if ro and wo are not used again, be sure to Close them. data, err := db.Get(ro, []byte("key")) ... err = db.Put(wo, []byte("anotherkey"), data) ... err = db.Delete(wo, []byte("key"))
For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator.
ro := levigo.NewReadOptions() ro.SetFillCache(false) it := db.NewIterator(ro) defer it.Close() for it.Seek(mykey); it.Valid(); it.Next() { munge(it.Key(), it.Value()) } if err := it.GetError(); err != nil { ... }
Batched, atomic writes can be performed with a WriteBatch and DB.Write.
wb := levigo.NewWriteBatch() // defer wb.Close or use wb.Clear and reuse. wb.Delete([]byte("removed")) wb.Put([]byte("added"), []byte("data")) wb.Put([]byte("anotheradded"), []byte("more")) err := db.Write(wo, wb)
If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and Options.SetFilterPolicy is what you want. NewBloomFilter is amount of bits in the filter to use per key in your database.
filter := levigo.NewBloomFilter(10) opts.SetFilterPolicy(filter) db, err := levigo.Open("/path/to/db", opts)
If you're using a custom comparator in your code, be aware you may have to make your own filter policy object.
This documentation is not a complete discussion of LevelDB. Please read the LevelDB documentation <> for information on its operation. You'll find lots of goodies there.
batch.go cache.go comparator.go conv.go db.go doc.go env.go filterpolicy.go iterator.go options.go version.go
const ( NoCompression = CompressionOpt(0) SnappyCompression = CompressionOpt(1) )
Known compression arguments for Options.SetCompression.
ErrDBClosed is returned by DB.Close when its been called previously.
DestroyComparator deallocates a *C.leveldb_comparator_t.
This is provided as a convienience to advanced users that have implemented their own comparators in C in their own code.
DestroyDatabase removes a database entirely, removing everything from the filesystem.
GetLevelDBMajorVersion returns the underlying LevelDB implementation's major version.
GetLevelDBMinorVersion returns the underlying LevelDB implementation's minor version.
RepairDatabase attempts to repair a database.
If the database is unrepairable, an error is returned.
Cache is a cache used to store data read from data in memory.
Typically, NewLRUCache is all you will need, but advanced users may implement their own *C.leveldb_cache_t and create a Cache.
To prevent memory leaks, a Cache must have Close called on it when it is no longer needed by the program. Note: if the process is shutting down, this may not be necessary and could be avoided to shorten shutdown time.
NewLRUCache creates a new Cache object with the capacity given.
To prevent memory leaks, Close should be called on the Cache when the program no longer needs it. Note: if the process is shutting down, this may not be necessary and could be avoided to shorten shutdown time.
Close deallocates the underlying memory of the Cache object.
CompressionOpt is a value for Options.SetCompression.
DB is a reusable handle to a LevelDB database on disk, created by Open.
To avoid memory and file descriptor leaks, call Close when the process no longer needs the handle. Calls to any DB method made after Close will panic.
The DB instance may be shared between goroutines. The usual data race conditions will occur if the same key is written to from more than one, of course.
Open opens a database.
Creating a new database is done by calling SetCreateIfMissing(true) on the Options passed to Open.
It is usually wise to set a Cache object on the Options with SetCache to keep recently used data from that database in memory.
Close closes the database, rendering it unusable for I/O, by deallocating the underlying handle.
Any attempts to use the DB after Close is called will panic.
CompactRange runs a manual compaction on the Range of keys given. This is not likely to be needed for typical usage.
func (db *DB) Delete(wo *WriteOptions, key []byte) error
Delete removes the data associated with the key from the database.
The key byte slice may be reused safely. Delete takes a copy of them before returning. The WriteOptions passed in can be reused by multiple calls to this and if the WriteOptions is left unchanged.
Get returns the data associated with the key from the database.
If the key does not exist in the database, a nil []byte is returned. If the key does exist, but the data is zero-length in the database, a zero-length []byte will be returned.
The key byte slice may be reused safely. Get takes a copy of them before returning.
GetApproximateSizes returns the approximate number of bytes of file system space used by one or more key ranges.
The keys counted will begin at Range.Start and end on the key before Range.Limit.
func (db *DB) NewIterator(ro *ReadOptions) *Iterator
NewIterator returns an Iterator over the the database that uses the ReadOptions given.
Often, this is used for large, offline bulk reads while serving live traffic. In that case, it may be wise to disable caching so that the data processed by the returned Iterator does not displace the already cached data. This can be done by calling SetFillCache(false) on the ReadOptions before passing it here.
Similarly, ReadOptions.SetSnapshot is also useful.
The ReadOptions passed in can be reused by multiple calls to this and other methods if the ReadOptions is left unchanged.
NewSnapshot creates a new snapshot of the database.
The Snapshot, when used in a ReadOptions, provides a consistent view of state of the database at the the snapshot was created.
To prevent memory leaks and resource strain in the database, the snapshot returned must be released with DB.ReleaseSnapshot method on the DB that created it.
See the LevelDB documentation for details.
PropertyValue returns the value of a database property.
Examples of properties include "leveldb.stats", "leveldb.sstables", and "leveldb.num-files-at-level0".
func (db *DB) Put(wo *WriteOptions, key, value []byte) error
Put writes data associated with a key to the database.
If a nil []byte is passed in as value, it will be returned by Get as an zero-length slice. The WriteOptions passed in can be reused by multiple calls to this and if the WriteOptions is left unchanged.
The key and value byte slices may be reused safely. Put takes a copy of them before returning.
ReleaseSnapshot removes the snapshot from the database's list of snapshots, and deallocates it.
func (db *DB) Write(wo *WriteOptions, w *WriteBatch) error
Write atomically writes a WriteBatch to disk. The WriteOptions passed in can be reused by multiple calls to this and other methods.
DatabaseError wraps general internal LevelDB errors for user consumption.
func (e DatabaseError) Error() string
Env is a system call environment used by a database.
Typically, NewDefaultEnv is all you need. Advanced users may create their own Env with a *C.leveldb_env_t of their own creation.
To prevent memory leaks, an Env must have Close called on it when it is no longer needed by the program.
NewDefaultEnv creates a default environment for use in an Options.
To prevent memory leaks, the Env returned should be deallocated with Close.
Close deallocates the Env, freeing the underlying struct.
FilterPolicy is a factory type that allows the LevelDB database to create a filter, such as a bloom filter, that is stored in the sstables and used by DB.Get to reduce reads.
An instance of this struct may be supplied to Options when opening a DB. Typical usage is to call NewBloomFilter to get an instance.
To prevent memory leaks, a FilterPolicy must have Close called on it when it is no longer needed by the program.
func NewBloomFilter(bitsPerKey int) *FilterPolicy
NewBloomFilter creates a filter policy that will create a bloom filter when necessary with the given number of bits per key.
See the FilterPolicy documentation for more.
func (fp *FilterPolicy) Close()
Close reaps the resources associated with this FilterPolicy.
Iterator is a read-only iterator through a LevelDB database. It provides a way to seek to specific keys and iterate through the keyspace from that point, as well as access the values of those keys.
Care must be taken when using an Iterator. If the method Valid returns false, calls to Key, Value, Next, and Prev will result in panics. However, Seek, SeekToFirst, SeekToLast, GetError, Valid, and Close will still be safe to call.
GetError will only return an error in the event of a LevelDB error. It will return a nil on iterators that are simply invalid. Given that behavior, GetError is not a replacement for a Valid.
A typical use looks like:
db := levigo.Open(...) it := db.NewIterator(readOpts) defer it.Close() for it.Seek(mykey); it.Valid(); it.Next() { useKeyAndValue(it.Key(), it.Value()) } if err := it.GetError() { ... }
To prevent memory leaks, an Iterator must have Close called on it when it is no longer needed by the program.
Close deallocates the given Iterator, freeing the underlying C struct.
GetError returns an IteratorError from LevelDB if it had one during iteration.
This method is safe to call when Valid returns false.
Key returns a copy the key in the database the iterator currently holds.
If Valid returns false, this method will panic.
Next moves the iterator to the next sequential key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.
If Valid returns false, this method will panic.
Prev moves the iterator to the previous sequential key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.
If Valid returns false, this method will panic.
Seek moves the iterator the position of the key given or, if the key doesn't exist, the next key that does exist in the database. If the key doesn't exist, and there is no next key, the Iterator becomes invalid.
This method is safe to call when Valid returns false.
SeekToFirst moves the iterator to the first key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.
This method is safe to call when Valid returns false.
SeekToLast moves the iterator to the last key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.
This method is safe to call when Valid returns false.
Valid returns false only when an Iterator has iterated past either the first or the last key in the database.
Value returns a copy of the value in the database the iterator currently holds.
If Valid returns false, this method will panic.
IteratorError wraps general internal LevelDB iterator errors for user consumption.
func (e IteratorError) Error() string
Options represent all of the available options when opening a database with Open. Options should be created with NewOptions.
It is usually with to call SetCache with a cache object. Otherwise, all data will be read off disk.
To prevent memory leaks, Close must be called on an Options when the program no longer needs it.
NewOptions allocates a new Options object.
Close deallocates the Options, freeing its underlying C struct.
SetBlockRestartInterval is the number of keys between restarts points for delta encoding keys.
Most clients should leave this parameter alone. See the LevelDB documentation for details.
SetBlockSize sets the approximate size of user data packed per block.
The default is roughly 4096 uncompressed bytes. A better setting depends on your use case. See the LevelDB documentation for details.
SetCache places a cache object in the database when a database is opened.
This is usually wise to use. See also ReadOptions.SetFillCache.
SetComparator sets the comparator to be used for all read and write operations.
The comparator that created a database must be the same one (technically, one with the same name string) that is used to perform read and write operations.
The default comparator is usually sufficient.
func (o *Options) SetCompression(t CompressionOpt)
SetCompression sets whether to compress blocks using the specified compresssion algorithm.
The default value is SnappyCompression and it is fast enough that it is unlikely you want to turn it off. The other option is NoCompression.
If the LevelDB library was built without Snappy compression enabled, the SnappyCompression setting will be ignored.
SetCreateIfMissing causes Open to create a new database on disk if it does not already exist.
SetEnv sets the Env object for the new database handle.
SetErrorIfExists causes the opening of a database that already exists to throw an error if true.
func (o *Options) SetFilterPolicy(fp *FilterPolicy)
SetFilterPolicy causes Open to create a new database that will uses filter created from the filter policy passed in.
SetInfoLog sets a *C.leveldb_logger_t object as the informational logger for the database.
SetMaxOpenFiles sets the number of files than can be used at once by the database.
See the LevelDB documentation for details.
SetParanoidChecks causes the database to do aggressive checking of the data it is processing and will stop early if it detects errors if true.
See the LevelDB documentation docs for details.
SetWriteBufferSize sets the number of bytes the database will build up in memory (backed by an unsorted log on disk) before converting to a sorted on-disk file.
Range is a range of keys in the database. GetApproximateSizes calls with it begin at the key Start and end right before the key Limit.
ReadOptions represent all of the available options when reading from a database.
To prevent memory leaks, Close must called on a ReadOptions when the program no longer needs it.
func NewReadOptions() *ReadOptions
NewReadOptions allocates a new ReadOptions object.
func (ro *ReadOptions) Close()
Close deallocates the ReadOptions, freeing its underlying C struct.
func (ro *ReadOptions) SetFillCache(b bool)
SetFillCache controls whether reads performed with this ReadOptions will fill the Cache of the server. It defaults to true.
It is useful to turn this off on ReadOptions for DB.Iterator (and DB.Get) calls used in offline threads to prevent bulk scans from flushing out live user data in the cache.
See also Options.SetCache
func (ro *ReadOptions) SetSnapshot(snap *Snapshot)
SetSnapshot causes reads to provided as they were when the passed in Snapshot was created by DB.NewSnapshot. This is useful for getting consistent reads during a bulk operation.
See the LevelDB documentation for details.
func (ro *ReadOptions) SetVerifyChecksums(b bool)
SetVerifyChecksums controls whether all data read with this ReadOptions will be verified against corresponding checksums.
It defaults to false. See the LevelDB documentation for details.
Snapshot provides a consistent view of read operations in a DB.
Snapshot is used in read operations by setting it on a ReadOptions. Snapshots are created by calling DB.NewSnapshot.
To prevent memory leaks and resource strain in the database, the snapshot returned must be released with DB.ReleaseSnapshot method on the DB that created it.
WriteBatch is a batching of Puts, and Deletes to be written atomically to a database. A WriteBatch is written when passed to DB.Write.
To prevent memory leaks, call Close when the program no longer needs the WriteBatch object.
func NewWriteBatch() *WriteBatch
NewWriteBatch creates a fully allocated WriteBatch.
func (w *WriteBatch) Clear()
Clear removes all the enqueued Put and Deletes in the WriteBatch.
func (w *WriteBatch) Close()
Close releases the underlying memory of a WriteBatch.
func (w *WriteBatch) Delete(key []byte)
Delete queues a deletion of the data at key to be deleted later.
The key byte slice may be reused safely. Delete takes a copy of them before returning.
func (w *WriteBatch) Put(key, value []byte)
Put places a key-value pair into the WriteBatch for writing later.
Both the key and value byte slices may be reused as WriteBatch takes a copy of them before returning.
WriteOptions represent all of the available options when writing from a database.
To prevent memory leaks, Close must called on a WriteOptions when the program no longer needs it.
func NewWriteOptions() *WriteOptions
NewWriteOptions allocates a new WriteOptions object.
func (wo *WriteOptions) Close()
Close deallocates the WriteOptions, freeing its underlying C struct.
func (wo *WriteOptions) SetSync(b bool)
SetSync controls whether each write performed with this WriteOptions will be flushed from the operating system buffer cache before the write is considered complete.
If called with true, this will significantly slow down writes. If called with false, and the host machine crashes, some recent writes may be lost. The default is false.
See the LevelDB documentation for details.
Package levigo imports 3 packages (graph) and is imported by 131 packages. Updated 2019-12-17. Refresh now. Tools for package owners. | https://godoc.org/github.com/jmhodges/levigo | CC-MAIN-2020-34 | refinedweb | 2,836 | 59.5 |
Although javascript is truly not an object oriented languange, microsoft with the release of client side framework for ajax, had really made working with javascript much easier. Today i will explore the concepts for javascript intellisense, notifying asp.ne ajax framework of any external client side libraries and how to create classes and use inheritance to extend those classes.
To start off i will create a simple class called person in the javascript file called Person.js. Person class will contain a property called Name and a methods called Print.
At the top of the file you will notice a comment which includes MicrosoftAjax.js. This directive is used by visual studio 2008 to provide you intellisense for the microsoft ajax library. Intellisense can simply be provided by adding the comment directive at top of your JavaScript file. Notice that we using the name attribute to reference a JavaScript file that is embedded in an assembly called System.Web.Extensions.dll.
We create a person class by first initializing the name private variable in the constructor which is typically a function. The naming convention is irrelevant in terms of javascript because javascript has no notion of private variable. However by attributing private variables with underscore, intellisence recognizes it as private variable and ensures that its not visible outside the scope of the class.
Next down we go ahead and create getter and setter for the name property and create a method called Print which simply prints the name variable using alert box.
After creating the class we go ahead and register the class with the asp.net ajax client side libraray by passing the name of the class in the registerClass method. registerClass happens to be a method defined on the Type object.
We then finally notify the ajax client side framework that we have finished loading this javascript file. It is recommended practice to call notifyScriptLoaded on every javascript file to notify asp.net ajax.
We will move forward by creating another class by extending Person class with class Student. Student will have one extra property called SSN and will override the Print method. Lets drill through the Student javascript class.
We wont be covering all the details about the class except for few noticeable items worth mentioning. First notice the comment directive for intellisence uses path variable instead of name and that is because the javascript file we are trying to reference is not embedded in the assembly and belongs to our project. We are also making use of a concept introduced by asp.net ajax client side library called namespace registration. Concept of namespaces only exists in C# and vb.net or other programming langauges. Namespaces helps in preventing naming conflicts by keeping related classes together. The concept was great enough for Microsoft to introduce the registernamespace method in the client side library. It allows you add a particular class to the namespace registered above.
After registering we go ahead and add a property called ssn and override the print method defined in the person class. In order to access a property defined in the base class, we make use of callBasemethod passing in the name of the method whose value we like to get and then concatenating the results with ssn variable declared inside the Student class. We then go ahead and regsiter the class with ajax client side library but this time we use the second overload which lets u specify the class you are inheriting from which happens to be People.Person class.
Now that we have those two classes ready we can go ahead and create instances of Student class and call Print method to display the result. Here is how the code looks like.
There is not much to talk about in this code except that we are registering the two JavaScript files with scriptmanager and then inside the pageload function we create an an instance of Student class. We then assign the values to Name and ssn and call print to get an alert dialog on the screen that looks like the screen shot below
In this blog posting, i walked you through how to create simple class in javascript that extends other class. We also discussed how namespaces can be used to resolve naming conflicts. I hope this walk through was informative.
Very helpful! Thanks.
Pingback from Dew Drop - March 12, 2008 | Alvin Ashcraft's Morning Dew
Actually, Javascript IS an Object-Oriented language. Its just prototype based instead of class based. | http://weblogs.asp.net/zeeshanhirani/archive/2008/03/09/exploring-asp-net-ajax-client-side-library.aspx | crawl-001 | refinedweb | 752 | 61.67 |
This is no doubt a very stupid question, but how do I _run_ qbzr once it's installed? All I see is an __init__.py and running that leads to an error:
<blockquote>
Traceback (most recent call last):
File "C:\win_
from bzrlib import registry
ImportError: No module named bzrlib)
</blockquote>
My problem might be related to a bad install. I installed the dependencies I could but I might have missed something. Python 2.5 was already installed before I started.
Question information
- Language:
- English Edit question
- Status:
- Answered
- For:
- QBzr Edit question
- Assignee:
- No assignee Edit question
- Last query:
-
- Last reply:
- | https://answers.launchpad.net/qbzr/+question/53820 | CC-MAIN-2021-25 | refinedweb | 102 | 63.39 |
Lean 0.2.4
Generic interface to multiple Python template engines - Tilt for Python
Lean is intended to provide a consistent interface to various python templating languages. Code wise it is a port of Tilt for Ruby.
At the moment it just has support for CoffeeScript and Scss as those are what I need but I will be adding support for as many other python templating languages as I can. I will also be trying to add support for the compiled template functionality that Tilt has, just as soon as I can understand how it works and how to do it in Python.
If you want to get involved and help add support for other templating languages then please, get stuck in!
Installation
$ pip install lean
Basic Usage
from lean import Lean tmpl = Lean.load('blah.coffee') tmpl.render()
License
Lean is licensed under the MIT License, please see the LICENSE file for more details.
- Downloads (All Versions):
- 26 downloads in the last day
- 128 downloads in the last week
- 477 downloads in the last month
- Author: Will McKenzie
- Download URL:
- License: MIT License
- Categories
- Package Index Owner: OiNutter
- DOAP record: Lean-0.2.4.xml | https://pypi.python.org/pypi/Lean/0.2.4 | CC-MAIN-2015-22 | refinedweb | 196 | 56.29 |
Read/parse/write an NNTP config file of subscribed newsgroups. More...
#include "config.h"
#include <dirent.h>
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
#include "private.h"
#include "mutt/lib.h"
#include "config/lib.h"
#include "email/lib.h"
#include "core/lib.h"
#include "conn/lib.h"
#include "mutt.h"
#include "lib.h"
#include "bcache/lib.h"
#include "adata.h"
#include "edata.h"
#include "format_flags.h"
#include "mdata.h"
#include "mutt_account.h"
#include "mutt_logging.h"
#include "mutt_socket.h"
#include "muttlib.h"
#include "protos.h"
#include "hcache/lib.h"
Go to the source code of this file.
Read/parse/write an NNTP config file of subscribed newsrc.c.
Find NntpMboxData for given newsgroup or add it.
Definition at line 72 of file newsrc.c.
Remove all temporarily cache files.
Definition at line 102 of file newsrc.c.
Unlock and close .newsrc file.
Definition at line 118 of file newsrc.c.
Count number of unread articles using .newsrc data.
Definition at line 132 of file newsrc.c.
Parse .newsrc file.
Definition at line 162 of file newsrc.c.
Generate array of .newsrc entries.
Definition at line 296 of file newsrc.c.
Update file with new contents.
Definition at line 389 of file newsrc.c.
Update .newsrc file.
Definition at line 439 of file newsrc.c.
Make fully qualified cache file name.
Definition at line 518 of file newsrc.c.
Make fully qualified url from newsgroup name.
Definition at line 559 of file newsrc.c.
Parse newsgroup.
Definition at line 575 of file newsrc.c.
Load list of all newsgroups from cache.
Definition at line 616 of file newsrc.c.
Save list of all newsgroups to cache.
Definition at line 650 of file newsrc.c.
Compose hcache file names - Implements hcache_namer_t.
Definition at line 690 of file newsrc.c.
Open newsgroup hcache.
Definition at line 709 of file newsrc.c.
Remove stale cached headers.
Definition at line 735 of file newsrc.c.
Remove bcache file - Implements bcache_list_t.
Definition at line 783 of file newsrc.c.
Remove stale cached messages.
Definition at line 803 of file newsrc.c.
Remove hcache and bcache of newsgroup.
Definition at line 812 of file newsrc.c.
Clear the NNTP cache.
Remove hcache and bcache of all unexistent and unsubscribed newsgroups
Definition at line 846 of file newsrc.c.
Get connection login credentials - Implements ConnAccount::get_field()
Definition at line 988 of file newsrc.c.
Open a connection to an NNTP server.
Automatically loads a newsrc into memory, if necessary. Checks the size/mtime of a newsrc file, if it doesn't match, load again. Hmm, if a system has broken mtimes, this might mean the file is reloaded every time, which we'd have to fix.
Definition at line 1017 of file newsrc.c.
Get status of articles from .newsrc.
Full status flags are not supported by nntp, but we can fake some of them: Read = a read message number is in the .newsrc New = not read and not cached Old = not read but cached
Definition at line 1219 of file newsrc.c.
Subscribe newsgroup.
Definition at line 1256 of file newsrc.c.
Unsubscribe newsgroup.
Definition at line 1280 of file newsrc.c.
Catchup newsgroup.
Definition at line 1308 of file newsrc.c.
Uncatchup newsgroup.
Definition at line 1347 of file newsrc.c.
Get first newsgroup with new messages.
Definition at line 1390 of file newsrc.c. | https://neomutt.org/code/newsrc_8c.html | CC-MAIN-2021-39 | refinedweb | 576 | 65.39 |
Easy Template
Simple scripting for Plone pages
Current release
collective.easytemplate 0.7.1
Released Apr 29, 2009
Get Easy Template for all platforms (59.0 kB)
- collective.easytemplate-0.7.1.tar.gz
Project Description:
Installation
Add to your buildout:
eggs = collective.templateengines collective.easytemplate zcml = collective.easytemplate
Run Add-on product installer for Easy Template product.
collective.easytemplate depends on collective.templateengines and Jinja2 template engine.
Running unit tests
Python eggs Cheetah, Jinja2 and Products.LinguaPlone must be installed in order to run all unit tests.
Author
Mikko Ohtamaa
Twinapex Research, Oulu, Finland - High quality Python hackers for hire
This product is bruteware. Please go to mybrute.com and start your career as a gladitor. (This computer game takes just 3 minutes from your everyday work time and is überfun)..
Portlets
Use Templated Portlet portlet to add scripts to your portlets. Templated Portlet is based on `
Enter the template code in the visual editor. Raw HTML editing is not yet supported authoring guide
This document describes available variables and functions in Easy Template elements. Some example template snippets are included.
The default Jinja backend exposes tags as functions. Since Jinja makes clear distinction between variables and functions, you need to add () after tags to render them.() }}
Print document body text (HTML):
{{ context.getBody() }}
Get the URL of the current object:
{{ context.absolute_url() }}
If you have a write access to the object you can even set values in the template, though this is not very useful:
{{ context.setTitle('Moo the novel') }} 'sister':
{{>
Tags are custom functions you are able to use in your templates. They provide an easy way to extend templates with your own Python functions. Tags are registered in tagconfig.py file in the collective.easytemplate package.
Easy Template comes with several useful tags out of the box and they are explained below.
explore
Dump object methods and variables for developer consumption.
Warning. This tag is not multiuser safe. You want to disable this tag on production site, since it is a read priviledge escalation.
Explore tag helps you to build scripts by exposing the variables and methods insde the objects. It prints a tabular output of available methods and variables.
Parameters:
object: Object to explore
Show the guts of current Templated Document object:
{{ explore(context) }}
Show what we have available in the portal root:
{{ explore(portal) }}
Show what was returned by a function which returns a list - take the first element:
{{ explore(query({"portal_type":"Folder})[0]) }}
query
Return site objects based on search criteria.
Query returns the list of site objects as returned by portal_catalog search. The objects are catalog brains: dictionaries containing metadata ids as key.
See ZMI portal_catalog tool for avaiable query index and returned metadata fields.
Key-value pairs are taken as the parameters and they are directly passed to the portal_catalog search.
The output is limited by the current user permissions.
Parameters:
- searchParameters: Python dictionary of portal_catalog query parameters. index->query mappings. Bad index id does not seem to raise any kind of an error.
Return value:
- List of ZCatalog brain objects. Brain objects have methods getURL, getPath and getObject and dictionary look up for catalog metadata columns.
Examples
Return the three most fresh News Item objects sorted by date:
{{ query({"portal_type":"News Item","sort_on":"Date","sort_order":"reverse","sort_limit":3,"review_state":"published"}) }}
Return items in a particular folder:
{{ query({path={"query" : "/folder", depth: 0}}) }}
For more information about possible query formats see this old document.
view
Render a browser:page based view. If there is no registered view for id, return a placeholder string.
Parameter name: View id, as it appears in browser/configure.zcml.
Parameter function: Optional. View instance method name to be called. If omitted, __call__() is used.
Example (render sitemap):
{{ view("sitemap_view", "createSiteMap") }}
viewlet
Render a viewlet.
Parameter name: Viewlet id as it appears on portal_view_customizations ZMI page.
Example:
{{ viewlet("portal.logo") }}
rss_feed
The function reads RSS feed. You can iterate manually through entries and format the output. This is mostly suitable when dealing with HTML source code.
Parameters
- url: URL to RSS or RSS
- cache_timeout: Optional, default value 60. Seconds how often the HTTP GET request should be performed.
Return
- List of dictionaries with following keys: title, summary, url, updated and friendly_date.
Example (raw HTML edit):
{% for entry rss_feed("") %} <p> <b>Title:</b> <span>{{ entry.title }} </p> <p> <b>Summary:</b> <span>{{ entry.summary }} </p> {% endfor %}
plone.app.portlets.rss.RSSFeed is used as the RSS reader backend.
list_folder
List folder content and return the output as <ul> tag. Mostly suitable for simple folder output generating.
The formatting options offered here are not particular powerful. You might want to use query() tag for more powerful output formatting.
Parameters
- folder: The path to the listed folder. Same as the URI path on the site.
- title: Render this title for the listing as a subheading
- filters: portal_catalog query parameters to be applied for the output. See query() below for examples.
- exclude_self: If True do not render context Templated Document in the outpput
- extra_items: String of comma separated entries of URIs which are outside the target folder, but should be appended to the listing.
Example (create a course module listing from a course folder):
{{ list_folder("courses/marketing/cim-professional-certificate-in-marketing", title="Other modules in this course:", filters={ "portal_type" : "Module"}) }}
latest_news
Render list of latest published news from the site. Uses collective.easytemplate.tags/latest_news.pt template.
latest_news also serves as an example how to drop a custom view into the visual editor.
Parameters
- count: How many items are rendered
Example:
{{ latest_news(3) }}
translate
Translation catalog look up with an message id.
Translates the message to another language. The function assumes the translation is available in gettext po files.
Parameters
- message: gettext msgid to translate
- domain: gettext domain where the message belongs, optional, defaults to "plone"
- language: target language code, e.g. fi, optional, defaults to the currently selected language
- default: The default value to be displayed if the msgid is missing for the selected language
** Return **
- translated string
Examples:
{{ translate("missing_id", default="Foobar") }} {{ translate("box_more_news_link", "plone", "fi") }}
For available default Plone msgids, see PloneTranslations product source
current_language
Get the current language for the user.
This enables conditional showing based on the language.
** Parameters **
- No parameters
** Return **
- The current language code as a string, e.g. "fi"
Example:
{% if current_language() == "fi" %} Päivää {% else %} Hello {% endif %}
Registering new tags
If you want to add your template functions you must add them to collective.easytemplate.tagconfig module (note: in the future Zope configuration directives and ZCML can be used to register the tags).
All tags implement collective.templateengine.interfaces.ITag interface.
For example code, see the existing tags in collective.easytemplate.tags package.
Example how to register a custom tag (run in your product's initialize() method):
from collective.easytemplate import tagconfig from myproducts import MyTag tagconfig.tags.append(MyTag()) | http://plone.org/products/easy-template | crawl-002 | refinedweb | 1,138 | 50.63 |
>>:)
Re:Indentation syntax has its problems too (Score:1)
Well, since indentation level dictates grouping
if (x == 4) { x = 10;}
y = 6;
must be:
if x == 4:
x = 10
y = 6
whereas
if (x == 4) {
x = 10; y = 6;
}
must be
if x == 4:
x = 10
y = 6
I'm sorry, but to me it's as clear as day and completely unambiguous. You seem to have missed the point - that indentation level expresses the intent.
Ruby (Score:1)
"this string".length
and it returns 11.
Or you can make a loop such as:
4.times { |i|
puts i
}
and it prints: 1
2
3
4
To iterate through a hash in Ruby, you do:
someHash.each { |key,value|
#do stuff with key and value
}
That's an example of using a Ruby iterator, one of the nicer language features.
Defining a class is straightforward:
class MyClass
@@numInst #class vars start with '@@'
attr_accessor
def initialize(x,y,z="something")
#constructor, gets called by 'new' #note: instance vars start with '@'
@x = x
@y = y
@z = z
@@numInst += 1
end
end
inst1 = MyClass.new(1,2)
puts "x = " inst2 = MyClass.new(3,4,"something else")
puts "total number of instances"
Note: the attr_accessor method above creates accessor methods for the variables in the list following it
Anyway, Ruby is very cool check it out at:
or:
Re:Braces vs Whitespace )
Good God, no! I just have to program in it. Occassionally.
One of MUMPS' problems is the blocking via indentation. It got so bad, they added the "." for explicit blocking-- as in:
I $L(NDC)=11 D Q
.S X=$$FINDNDC(NDC) .I X D ..N F S F=$P(^APSAMDF(X,2),"^",3) ..W:OUTPUT $$FMTNDC(NDC,5,4,2)," Format=",F,! .E W NDC," Not found",!
W:OUTPUT "Test ",NDC,"..."
You are right-- it's a holy war issue. My experience with indentation-as-blocking has not been good; doesn't mean it doesn't work for other people. Python is a great language... for other people.
Re:Braces vs Whitespace (Score:1)
The bugs occur when editing code, when adding an outer loop and the code isn't properly re-formatted. Visually, it is harder to deal with a "running context" (that is, the leading white space means "include me in the previous block") instead of an SOB/EOB-marker context ("everything between the braces is a block").
In communication, a statement is assumed to be related to the previous statement. In fact, we have a special phrase for statements that *don't* relate to the previous statement-- a "non-sequitor." When communicating, we have special markers for "start of topic," and "end-of-topic." Each sentence does *not* start with a marker that says, "I'm part of the same topic as the previous sentence."
As long as your program chunks are short and to-the-point, whitespace blocking is okay. But not everything lends itself to short program blocks.
Braces vs Whitespace (Score:2)
But, I do have one quibble, the same quibble I've had with Python from the outset. Using whitespace blocking to mandate code structure forces the programmer to the language, and not the other way around. I like my code to fit my style.
I program in MUMPS, a terse database/language written in the late sixties. It's a decent language, as far as that goes, but it also uses whitespace for blocking. I have seen more bugs due to stray spaces than misplaced braces (in C, Perl, etc). Plus, it makes it a pain in the ass when re-formatting huge blocks of code.
Plus, it really *doesn't* make the code more readable. It merely forces the program to a particular style. And Mr. van Rossum's style is not mine. (Arguably, he does have better style than me.)
Re:Braces vs Whitespace )
Maybe not in your dream world, but out here in the real world I have code to write, and it there is no editor that will skip to the end of a block in Python, but there is in every other language that I use, then I know what I'm going to choose.
If Joe Schmoe puts in 8 spaces, followed by a tab, it's not Python's fault.
Once again, the real world isn't that simple. Code gets lots and lots of different programmers working on it, over the course of years. And in every other language I use, tabs versus spaces are not an issue. So once again, I look at what has really happened in my 20 years of professional programming experience rather than what should happen in an idea world, see that it would cause major problems in Python, and decide not to choose Python for my next project.
Re:Braces vs Whitespace
Re:Licence issues (Score:2)
The FSF obsesses over these things because that is the way that the law works. Writing legal documents is a lot like writing complex C memory management code. One off by one error and the entire application segfaults. It's the same thing with the law. One minor detail could cost the case, and when you are talking about something as important to the FSF as the continued "freeness" of the software they have developed you can see why this would make them a little paranoid. Because of this the FSF has worked very hard to make sure that everything that they do is as legal and aboveboard as possible.
That's why they require pen and ink signatures on a legal document assigning them as copyright holder before you can work on GNU software. They know that only the legal copyright holder can press charges in the US, and they want to be sure that they have the power to enforce their license.
Many of the other open source projects (like Python, for instance) have been much more haphazard about the licensing of their product. Guido, for example, failed to make sure before continuing work on Python that it would continue under the same X style license as it always had. His employer got nervous, and their lawyers came up with a license that isn't GPL compatible (at least according to the FSF lawyers).
It is convenient to blame the FSF lawyers, but they didn't change the original Python license. They just pointed out that they don't feel that the new license is GPL compatible. If these details weren't important, then perhaps the people who changed the original license should change it back. The fact of the matter is that the details probably are important enough that neither side is going to bend. The FSF doesn't want to threaten the GPL, and the lawyers at CNRI and Digital Creations don't want to be liable for problems someone might have with Python.
The FSF should be commended for taking care of these details before it starts developing software. If Guido would have done the same, there wouldn't be any problem.)!
The main reason I chose to to GPL my latest open source project--the MaraDNS server [maradns.org]--was because I knew that there were some incompatibilities between the GPL license and the Python license. As long as the GPL may make it impossible to make a python module out of my code, I am not going to GPL it.
Instead, I made MaraDNS public domain. BTW, I use Python-style syntax for the mararc [maradns.org] file MaraDNS uses.
BTW, isn't it against the license for Python to have a gdbm module, since gdbm is GPL and not LGPL? And, is it not inappropriate to have Python KDE bindings or use Python in KDE programs?
- Sam
Re:Braces vs Whitespace )
Remember the Golden Rule of Programming:
There is no language in which it is the least bit hard to write incorrect programs.
I use Python a lot. I use C a lot. As far as I'm concerned, the indentation issue is a non-starter --- you're going to indent your code *anyway*, otherwise you're not worthy of that paycheque. In C you have a bit of extra effort involved placing braces correctly. In Python you have a bit of extra effort involved getting the indentation right. When all's said and done, there's equal effort involved in each approach.
Remember: you cannot enforce good programming. You can only help. Python's pervasive dynamicity, data structures, class structures, module library all help good programming *far* more than the indentation style.
Re:Argh. We need license compatibility. )
Beyond that, at least braces are printable characters--I think it is important to focus the language on what can be seen rather than what is omitted or "invisible" in some sense, and I like the fact that most of the languages I program in are free form about white space (C, perl, shell).
Whitespace does doesn't seem to me to lend itself to ease of use as an active formatting element--what about environments that map tabs to 4 spaces instead of 8, what about (hypothetical) environments where tab doesn't even do what you'd expect, or requires tab settings to be made before use, etc. It seems to easy to get an extra space or tab in there where it could actively hurt you without being obvious to observe.
Re:Braces vs Whitespace (Score:5)
Re:Braces vs Whitespace (Score:2)
And that's good for when you want to read your code. But style is bad for the general case of $programmer_y wanting to read $programmer_x's code. You address the first problem at the expense of the second. Python addresses the second problem at the expense of the first.
Interestingly, he specifically mentioned Knuth's condition: "once program units were small enough." Thus Python doesn't just make your code look a certain way, but also uh
.. "encourages" .. you to structure your code a certain way. (e.g. Lots of of little subroutines instead of long blocks with ifs nested 20 levels deep.) Now individual style is really going out the window...
---
Re:the GPL is not a contract (Score:2)
If someone violates the GPL, they can still be sued -- but they would be sued for copyright infringement, not contract violation.
--.)
--
Re:Braces vs Whitespace )
So you see, in practice the use of indentation to delimit blocks is not impractical at all. It simply comes down to a matter of tastes, training, and preference.
I write a lot of C++ and Python code. I like both; the static vs. dynamic typing issues are HUGELY more relevant in determining which is better for a certain task, than block delimeters.
Re:To all you whiners )
Try this.
from sys import*;from string import*;a=argv;[s,p,q]=filter(lambda x:x[:1]!=
'-',a);d='-d'in a;e,n=atol(p,16),atol(q,16);l=(len(q)+1)/2;o,inb=
while s:s=stdin.read(inb);s and map(stdout.write,map(lambda i,b=pow(reduce(
lambda x,y:(x>8*i&255),range(o-1,-1,-1)))
It's Andrew Kutchlings, from
Sumner)
In other words: "NO, I don't have another answer".
Sorry, but I also don't care for the indentation for the exact reason that he gives.
if (x == 4)
x = 10;
y = 6;
Now how did I know that the writers intent was
if (x == 4) { x = 10; }
y = 6;
or
if (x == 4) {
x = 10; y = 6;
}
the if without braces should also be avoided in C but I do fall for that too.
Now if you use the braces, you know what the programmers intent was. But with Python, you don't. It could have been an indentation mistake, and that is harder to debug.
So if they ment
if (x == 4) { x = 10;}
y = 6;
How will you know with just an indentation syntex.
I used Python for about a month, and gave up and went back to Perl for scripting. This is probably because of my long C experience.
I believe in the More than one way of doing it. That's also probably why I hate MS Windows!
Steven Rostedt
R and Python (Score:2)
I have been studying up on the R language lately, an open source version of S, the statistical language of John Chambers, and I've noticed that R and python are awfully similar in their basic, and novel, language concepts. The R homepage is at
The omegahat project, at, has developed interfaces between R and python, as well as packges to interface between R and Perl, and R and Java.
Anyway, I would have liked to hear Guido's thoughts on R or S and how they compare to python. The correspondence of concepts in the two languages in amazing to me, given how different their origins were.)
I agree completely. Guido says "Don't you hate code that's not properly indented?", and I do. I also don't like reading code that is poorly documented, doesn't use descriptive identifier names, or uses 300 line methods that should be broken up. However I would be very annoyed if a compiler were to refuse to compile code without comments, or forbid variable names of less than 6 characters, or limit methods to 50 lines. Maybe this is a result of my libertarian views; I don't want a nanny state, and I don't want a nanny compiler.
And more importantly, there's no way (that I know of) to put an RSA Python script in a 4-line sig block...
Re:Indentation syntax has its problems too (Score:2)
FSF cares but goes too far (Score:2)
As far as the State thing, the Python license has a good point, not all states honor the general disclaimer law. This could cause problems for GPL and FSF would be smart to take that into consideration.
"One World, one Web, one Program" - Microsoft promotional ad
Re:Braces vs Whitespace (Score:2)
When you try to run a program with inconsistent spacing, python will complain about it - the simple program:
print "First Line"
print "Second Line"
will cause an error when you try to run it, because the second line isn't indented properly.
There's also tabnanny, a standard module that's designed to check for inconsistent indentation. From its doc string: "The Tab Nanny despises ambiguous indentation. She knows no mercy."
-- fencepost)
Put in the light that once it is compatible they have to live with it FOREVER just like Eben said.
That is not a decision they should take lightly and it is a good thing to take caution when you are talking about a Lic which thousands and thousands of projects depend on. Something such as this can undermine the integrity of everything.
The GPL needs to stand up in court but declaring it legal and then watching the Python lic go down in flames because someone abused it on their own software doesnt help GPL any at all since it was declared "compatible".
The reasons and scenarios are legally to far fetched for me to properly illustrate but unless you have been a lizard under a rock you absolutely know how extreme and far fetched software lic and the law can be.
It is not something, even if I dislike the GPL on principle, that should be taken lightly if FSF wants to see that the GPL keep as much integrity as possible until it is outright challenged in court or some such. Play it safe.
BTW: I think posting something like that obviously written only for a small and closed audience is not cool... Not that it wasnt expected given how the letter was written.
Jeremy
Re:Argh. We need license compatibility. (Score:2)
The GPL is all fine and dandy, but it causes problems. The point of the GPL is to spread it's ideology virually. That's great. But when it's not what you want, find something with a more liberal, tolerant license (LGPL included) or do your own implementation.)
I suspect forced indentation is something of a holy war issue. I'm repulsed by it myself but I can see it having advantages that don't necessarily apply to me. I really don't think it's a visual ergonomics issue at all, just a question of personal taste. (Of course, I happen to consider PostScript syntax to be elegant, so who am I to talk?)
/Brian)
Didn't he also say that we should all go and learn how to code in machinecode if we have more than a casual interest in computers? Or was that someone else?
The main problem, though, is long lines of code. What do you do when you hit the 80 character mark? The JavaScript implementation sucks, it's no good if wrapover is also valid on its own... That's the annoying bit.
Re:Braces vs Whitespace (Score:2)
The easiest way to GPL compatibility (Score:2)
If the only problem really is that CNRI doesn't want to be sued, the easy way to do this for CNRI is to license python to someone who takes the risk to be sued, for example our friend Zooko, who then releases it under GPL.
Maybe, this ceremony would have to be repeated every release of python.
Apologies for attaching to the top rated post - I'm not a pilotfish.)
However, the biggest problem that I face is the portability of the code... it's extremely difficult to write a "stand-alone" application that can be distributed. Sure, there's the Freeze tool, but it's a pain to use, and hard to configure properly.
If I'm writing an app that I want others to use (on a non-Linux system), I'll usually choose C/C++ instead, because I know that I can easily send it out. Otherwise, I end up with an application that needs three installers... (Python, Win32, mine).
Oh well...
Freeze-a-phobically-yours, Madcow.. | https://developers.slashdot.org/story/01/04/20/1455252/guido-van-rossum-unleashed | CC-MAIN-2017-39 | refinedweb | 3,023 | 71.34 |
Overview : the past, if you want to get started with Deno knowing Node.js would be an added advantage. Even though Deno has arrived as a competitor for NodeJS in the industry not so quick but people are sure that it’ll take over.
I was reading lot of documentations and materials to understand the difference. So, here are the advantages that i see from Deno,
- It is Secure by default. No file, network, or environment access, unless explicitly enabled.
- Supports TypeScript out of the box.
- Ships only a single executable file.
- Has built-in utilities like a dependency inspector (deno info) and a code formatter (deno fmt).
- Deno does not use npm
- Deno does not use package.json in its module resolution algorithm.
- All async actions in Deno return a promise. Thus Deno provides different APIs than Node.
- Uses “ES Modules” and does not support require().
- Deno has a built-in test runner that you can use for testing JavaScript or TypeScript code.
- Deno always dies on uncaught errors.
I was very excited as other developers when Deno was announced. In this post i will demonstrate how to create a simple Web API with Deno and deploy to production on Web App with Azure.
PreRequisities:
You will need to have an Azure Subscription. If you do not have an Azure subscription you can simply create one with free trial.
Install Deno :
Using Shell (macOS, Linux):
curl -fsSL | sh
Using PowerShell (Windows):
iwr -useb | iex
Using Homebrew (macOS):
brew install deno
Using Chocolatey (Windows):
choco install deno
Using Scoop (Windows):
scoop install deno
Services used:
- Azure Web App
- Github Actions
Step 1 : Create Deno Rest API
I will not be going through each step on how to create the REST API, however if you are familiar with creating APIs with Node , it is the same way that you need to do. You need to have the main file server.ts which will have those routes defined. (server.ts)
import { Application } from ""; import router from "./routes.ts"; const PORT = 8001; const app = new Application(); app.use(router.routes()); app.use(router.allowedMethods()); console.log(`Server at ${PORT}`); await app.listen({ port: PORT });
One feature that i personally liked in DENO is that it provides developers to code with TypeScript that addresses “design mistakes” in Node.js. In this case i am going to create an API to fetch/add/delete products and my interface would look like as below (types.ts),
export interface Product { id: String; name: String; description: String; price: Number; status: String; }
Similar to how you would define routes in Node, you need to define the routes for different endpoints when user want to execute fetch/add/delete operations as follows(routes.ts),
import { Router } from ""; import { delete_product, add_product, get_product, get_products } from "./Controllers/Products.ts"; const router = new Router(); router.get("/", ctx => { ctx.response.body = "Welcome to Deno!"; }); router.get("/get/:id", get_product); router.post("/add", add_product); router.get("/get_all_products", get_products); router.get("/delete/:id", delete_product); export default router;
The final step is to create the code for the logic of those each routes. You need to implement the methods which are defined in those routes. For example get_products would look like
import { Product } from "../Types.ts"; let products: Product[] = [ { id: "1", name: "Iphone XI", description: "256GB", price: 799, status: "Active" } ]; const get_products = ({response}: {response: any}) => { response.status = 200; response.body = products; };
You can access the whole code from this Repository.
Run the DENO app:
Once you are good with everything, you can run the app in local and check if the endpoints are working as expected.
deno run -A server.ts
And you would see the app running in port 8001 , and you can access the endpoints as follows ,
Step 2 : Create Azure Resources
Now we are good with the first step and you can see the app running successfully in local. As the next step let’s go ahead and deploy the app to Azure. Inorder to deploy the app, you need to create a Resource Group first.
Create a ResourceGroup Named Deno-Demo
You can navigate to Azure Portal and search for Resource Group in the search bar and create a new one as defined here!
Next step is to create the Web App , as we are going to deploy this app to a Linux environment, you can set the configuration as follows,
Step 3 : Deploy to Azure with Github Actions
One of the recent inventions by Github team that was loved by all developers were Github Actions. Personally i am a big fan of Github actions and i have published few posts earlier explaining the same. To configure the Github Action to our application, first you need to push the code to your github repository.
Create a deno.yml
To deploy the app , we first need to create the workflow under the actions. you can create a new workflow by navigating to Actions tab and create new workflow
I am assuming that you are familiar with important terms of Github Actions, if you are new you can explore here. In this particular example i will be using one package created by Anthony Chu who is a Program Manager in Azure functions team. And my deno.yml looks like below,
# A workflow run is made up of one or more jobs that can run sequentially or in parallel jobs: build-and-deploy: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - uses: azure/login@v1.1 with: creds: ${{ secrets.AZURE_CREDENTIALS }} - name: Set up Deno uses: denolib/setup-deno@master with: deno-version: 1.0.2 - name: Bundle and zip Deno app run: | deno bundle server.ts server.bundle.js zip app.zip server.bundle.js - name: Deploy to Azure Web Apps uses: anthonychu/azure-webapps-deno-deploy@master with: app-name: denodemo resource-group: deno-demo package: app.zip script-file: server.bundle.js deno-version: "1.0.2"
One important thing you need to verify is the resource-group and the app-name as you created on Azure.
Also you need to add secrets of your application under secrets in Github repository. You can generate a new Service Principal and obtain the secret as below,
az ad sp create-for-rbac --name "deno-demo" --role contributor --scopes /subscriptions/{SubscriptionID}/resourceGroups/deno-demo --sdk-auth
It will generate a JSON like below,
You can copy and paste the JSON under the secret named “AZURE_CREDENTIALS” ,
Now we are good with everything, you can update some file on the repository and see the workflow getting triggered. You can monitor the deployment by navigating to the workflow.
Once everything is successful you can navigate to Azure portal and open the Web App endpoint to see if the app is running successfully.
You can see the app running successfully on Azure.
Final words
I really enjoyed learning about the Deno project and created this simple app. I hope this article can be of value for anyone getting started with Deno with Azure. I see it Deno gaining in popularity, yes. However, I do not see it replacing NodeJS and npm based on several factors. If you found this article useful, or if you have any questions please reach out me on Twitter. Cheers!
You must log in to post a comment. | https://sajeetharan.com/2020/05/31/deploy-web-api-with-deno-to-azure/ | CC-MAIN-2021-04 | refinedweb | 1,211 | 65.01 |
Utilities for creating (micro)service tests. Based on Mountebank.
Project description
Utilities for creating HTTP (micro)service tests. Based on Mountebank.
Mountepy works by spawning and cleaning after given HTTP service processes and Mountebank. Thanks to that you no longer need that “start X before running the tests” for your application. No. Your tests start “X”, it’s put up or down only when it needs to and as many times as you need.
- Test-framework-agnostic (use unittest, nose, py.test or whatever… but I like py.test).
- Enables fast and reliable end-to-end testing of microservices. They won’t be aware that they are in some testing mode.
- Tested on Python 3.4, Ubuntu 14 x64.
- Planned features in the road map below. If you have suggestions, just post them as Github issues. Pull requests are also welcome :)
I recommend Pytest for elastic composition of service process test fixtures. Your process may start once per test suite, once per test, etc.
Installation
$ pip install mountepy
A standalone distribution of Mountebank (including NodeJS) will be downloaded on first run.
If you don’t want Mountepy to download Mountebank:
- Install NodeJS and NPM. On Ubuntu it’s
$ sudo apt-get install -y nodejs-legacy npm
- Install Mountebank yourself
$ npm install -g mountebank --production
Examples
Mountebank acts as a mock for external HTTP services. Here’s how you spawn a Mountebank process, configure it with a stub of some HTTP service, assert that it’s actually responding. Mountebank process is killed after the with block.
# requests is installed alongside Mountepy import mountepy, requests with mountepy.Mountebank() as mb: imposter = mb.add_imposter_simple(path='/something', response='mock response') stub_url = ':{}/something'.format(imposter.port) assert requests.get(stub_url).text == 'mock response'
It’s a good idea to test your service as a whole process. Let’s say that you have an one-file WSGI (e.g. Flask or Bottle) app that responds to a GET on its root path ('\') with a string it sees in RET_STR environment variable. Also, the app needs to know on what port to run, so we also pass it as an environment variable. {port} is a special value for Mountepy. It will be filled with the application’s port, whether it’s passed during object construction or automatically selected from free ports.
# port_for is installed alongside Mountepy import mountepy, requests, port_for, os, sys service_port = port_for.select_random() service = mountepy.HttpService( [sys.executable, 'sample_app.py'], port=service_port, env={ 'PORT': '{port}', 'RET_STR': 'Just some text.' }) with service: assert requests.get(service.url).text == 'Just some text.'
Starting a more complex service running on Gunicorn can look like this:
import os, sys gunicorn_path = os.path.join(os.path.dirname(sys.executable), 'gunicorn') service_command = [ gunicorn_path, 'your_package.app:get_app()', '--bind', ':{port}', '--enable-stdio-inheritance', '--pythonpath', ','.join(sys.path)] service = HttpService(service_command) # You can use start/stop methods instead of using the "with" statement. # It's the same for Mountebank objects. service.start() # now you test stuff... service.stop()
“Real world” use of mountepy can be found in PyDAS.
Measuring test coverage
Mountepy starts your code in a separate process, so it’s normally hard to get information about the code covered by the tests. Fortunately, this problem is solved by Coverage. See this documentation page.
In short, you need to:
- run coverage.process_startup() in each new Python process (this can be enforced by installing coverage_pth, but some caution is required)
- set COVERAGE_PROCESS_START environment variable to location of your .coveragerc
- run the tests themselves: coverage run (...), coverage combine and then coverage report -m
Again, see PyDAS’s tox.ini for demonstration.
Running tests
Clone the repo with submodules, then install and run tox.
$ git clone --recursive git@github.com:butla/mountepy.git $ sudo pip install tox $ cd mountepy $ tox
Motivation (on 2015-12-30)
- Why Mountebank? It can be deployed as standalone application, is actively developed and supports TCP mocks which can be used to simulate broken HTTP messages.
- Why not Pretenders? Doesn’t support TCP and the development doesn’t seem to be really active.
- Why not WireMock? Doesn’t support TCP and I don’t want to be forced to install Java to run tests and it doesn’t seem to have more features than Mountebank.
- Why create a new project? There already is a Python Mountebank wrapper, but it doesn’t offer much.
License
Mountepy is licensed under BSD Zero Clause license.
Why I didn’t use one of the more popular licenses like MIT, 2 or 3-Clause BSD or Apache2? Well, this one is practically equal to 2-Clause BSD (and I don’t see any functional differences between it and MIT license) with the exception of the rule about retaining the original license text in derivative work. So if you’d happen to redistribute my library along with your software you don’t have to attach a copy of my license. So you won’t break any copyright laws by being lazy (which I like to be, for instance). You’re welcome.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/mountepy/ | CC-MAIN-2018-43 | refinedweb | 856 | 60.21 |
- REAL WTF here is how he prefixed all of the bitwise operators with a lowercase b. lol....
until (my BAND makesit)
{
repeat
{
practice();
} until (good(NOT bad))
}
so confusing lol...
Admin
I might, but my language happens to have a LISP.
--Rank
Admin
I have found my new sig[:D]
Admin
ultra confusing that i'm using a bitwise operator in a boolean evaluation statement... just wanted to make it look like english ;-)
Admin
Oh, I just vuv the do_nothing macro... How quaint!
Admin
The real WTF is that there are still ¿people? who thinks that not programming in C makes them smart...
Admin
I especially like these:
#define forever while(1)
#define untilbreak forever
#define untilreturn forever
Nothing like adding your own inconsistencies.
Admin
Actually this is encouraged in the great book Code Complete by Steve McConnell.
if (someTestPasses())
{
// do some stuff
}
else
{
do_nothing(); // this is cleaner than the alternative ";"
}
Steve encourages this style of code because it implies that the programmer knows that nothing is to be done if the test failes.
I really don't think this post is a WTF. A little extreme, but definitely not a WTF.
Admin
#ifndef BRILLANT
#define BRILLANT
Admin
#define IF if (
#define THEN )
#define BEGIN {
#define END }
#define ELSE else
Admin
Correction:
#define REM //
Admin
Well, it's obvious you don't maintain code in the real world, or else you'd understand why this sort of thing is one of the ultimate WTF's.
Let's assume that the original code was around for 22 years, and ALL of the original programmers have moved on and/or died. And there's no documentation of the system.
Now, one day, you inherit this code. And your boss asks to make some significant changes, and they need to be done yesterday. Not only do you have to learn all the programming idiosyncracies of all the programmers who came before you and modified this code, but you have to LEARN A NEW LANGUAGE!
Admin
hurrfffdurrrr
Admin
For a little flavor, how about:
#define BREAKERONENINE /*
#define OVER */
Admin
At the first glance it looked to me like this had been done by an old RPG-"who-needs-all-that-new-stuff-all-i-need-is-a-5250-Terminal-Programmer
Admin
This reminds me of a situation we had here at work. We have this horrific web application that we have to use to deploy and launch applications. It's so handy that we have this team of "script" developers that have to hand code something everytime you add a new application. We had them add access to a set of spreadsheets the business developed and then dumped on us to support. We also asked them to add a link for quality. The menu routine they wrote for quality was completely different than it was for production. We asked them about this. Their answer, "Different developer, different style."
We still laugh about that. You wouldn't want to let 1) saving time by resuing existing code, 2) being consistent between quality and production, or 3) providing the user with a consistent experience get in the way of "style."
Admin
Don't be so melodramtic. I program in Python, Ruby, Java, C# and ObjC. Those are the languages I say I "know". However, I could sit down, right now, and fix a bug in a VB application if you gave me a couple of minutes to get comfortable.
That being said, if someone who claims to know C sits down at this system, these syntax changes are nothing more than a nuisance. If you can't get beyond this, you don't deserve to be called a programmer.
Sure, he didn't have to, but this is not a WTF...or at least not a major one.
Admin
I love how the page comment looks like an Apple ad. Instead of "iPod: 1000 songs, in your pocket," it's:
It gives it that unmistakable air of "I'm so cool I don't have to explain very much."
Admin.
Admin
This is the easiest WTF to fix:
Admin
Oddly enough, the V7 Bourne Shell (the One True Shell) source was coded just like this. Steve Bourne was apparently more familiar with Algol than with C, so he wrote up a series of #defines to make C more Algol-like. Quite an amusing read.
Admin
You have to extend that to the guy responsible for this mess. If he's a "real" programmer, why does he have to waste time concocting a set of wrappers for C constructs? If he has to work in C, what's wrong with just learning C? If this guy is for some reason incapable of using a language without resorting to these sorts of measures, I say he doesn't deserve to be called a programmer.
Admin
nice!!!
I'm sure these ones are on the list too
#define begin {
#define end }
Admin
Too true, too true.
Admin
#define begin {
#define end }
Admin
For those of you (Windoze users who are utterly confused, this syntax is similar to bourne shell scripting).
Clearly some UNIX nerd was trying to make his C programs look like his shell scripts.
Not uncommon in the early days of C.
Admin
The folks over at the IOCCC can do this; perhaps they should be asked?
Admin
A complete revolution? 360 degrees?
I played around a bit with stuff like this myself when I was younger. It can be amusing briefly, but it certainly does not belong in a production system.
Sincerely,
Gene Wirchenko
Admin
This is definitely a WTF, but when I look at some of the operator replacements I think of operator overloading. Seriously, that's almost a WTF feature in itself.
Oh, now I get the CAPTCHA everbody's talking about.
Admin
But your cruise missle doesn't need to run forever. You *obviously* need a check at the end to see if the missle has exploded, so you can exit the subroutine cleanly.
[:D]
Admin
This:
else
{
// do nothing
}
is just as clear, without adding an unnecessary function call (and definition).
Admin
I program in Python, Ruby, Java, C# and ObjC. Those are the languages I say I "know". However, I could sit down, right now, and fix a bug in a VB application if you gave me a couple of minutes to get comfortable.
I don't doubt that this is true, but there's more at stake here. When you sit down to a Python app, you expect Python. That means you start thinking in the Python syntax. Same with Ruby, C#, ObjC, C++ and so on. You don't want to sit down and get something completely different.
Aside from that, you also have to work within the coding standards of the team - this clearly didn't do that.
Admin
Just thank your lucky stars nobody told him how to use trigraphs
Admin
The best part of that one is that it can actively cause problems.
It *should* be:
#define do_nothing do {} while (0)
(Note the lack of a trailing ;, so you can do:
if (something)
do_nothing;
..)
Admin
No it is worse than that, I think he should be sent driving with Teddy Kennedy instead...
Admin
<font color="#CCCCCC".</font>
Today, you just use the correct compiler option for that. Both gcc & vc++ have one for it.
Admin
At first I thought that we could do it by handling an exception. Then I realized -- Naw. Just leave it as an unhandled exception and let it blow up.
Admin
Skinnable programming languages - it's the future.
Don't like your current language? Don't want to go to the hassle of writing your own language and compiler? Just pick a language that's close enough and skin it.
Admin
This explains SO MUCH about bash sintax...
Admin
Or just run the preprocessor on the code only changing the #defines?
Admin
Yeah, it's funny how new kernels keep getting built... ;)
Admin
That is fine as long as your code blows. If it sucks, it might cause a missile malfunction.
My code is hot, unless it is cool.
Sincerely,
Gene Wirchenko
Admin
??(
??[
Anyone remember these?
Admin
Okay, it's mostly annoying, but there's a minor case to be made for the EQ operator -- namely, that it's easy to leave off an '=' by mistake.
Of course, a better solution to this is, whenever possible, to put one's variables on the right side of the == operator, but still, I can almost forgive that one.
Admin
Shhhhhhhhhhhhh that is SUPER secret. If Alex told you, he would have to kill you.
Admin
Hehehe, funny NOT NOT NOT
No, really NOT NOT then NOT then NOT
Some then people do_nothing then do_nothing seem do_nothing to really then want then then then to go back do_nothing to then do_nothing VB.. then Wonder do_nothing why NOT
Admin
Those of us commenting here might well agree that "C" is perfect as it is (and would never argue vociferously about pre-increment, post-increment, or explicit add as being the only "right" way to go about something).
That's why, I'm sure, that a quick Google of the phrase "better c" turns up a mere <font size="-1">224,000,000 hits.
</font>
Admin
The concept isn't priceless, but the implementation surely is.
I believe they are trying to achieve:
if (a GT b) then
do_nothing
else
blah blah blah
endif //which I assume is in the snipped version
Of course this won't compile because they didn't define do_nothing properly.
Admin
I actually like the forever, untilbreak and untilreturn keywords. They're very descriptive about the purpose of the loop.
do_nothing could be useful if you want to make explicit that nothing should be happening in a certain case... though a comment would usually be more appropriate.
The rest is a bit weird, but I've recently done something pretty similar, extending the language with macro's. Annoyed by the lack of try...finally in C++, I've defined:
<FONT color=#008000>#define finally(code) catch(...) { code; throw; } code;</FONT>
<FONT color=#000000>So now I can do:</FONT>
Resource r = allocateResource();
try {
do_stuff(r);
}
finally(freeResource(r));
<FONT color=#000000>Which works just fine as long as you make sure that you dont return in the guarded block.</FONT>
Admin
You mean syntactical arsenic.
Ask the person who has to maintain this.
Why do you think languages have standardized syntax? Wait, you're right! It's boring seeing the same keywords, constructs and idioms in every program. Let's mix it up a little! Instead of merely stating that I know C on my résumé, I can have pages listing all the flavors of C I can code in. | https://thedailywtf.com/articles/comments/The_Secret_to_Better_C/2 | CC-MAIN-2021-31 | refinedweb | 1,798 | 73.07 |
Geonetwork developers,
please find a patch for SRU support in GN attached to the SRU proposal
on the trac ()
In order to activate it, apply the (eclipse) patch to the trunk and add
the libraries listed in (libs.txt) to WEB-INF/lib. In case you dont want
to compile yourself, you find a zip file containing these libraries on
the WMO webpages (hope this is sufficient so as to ascertain the
trustworthy origin)
I can also provide access to a pre-compiled version should there by
interest.
A few comments on the implementation.
I have put most changes into the package org.wmo.geonet, although some
changes were necessary in the rest of the package structure.
The new package is mostly for clarity of the patch and can be changed
upon integration.
The SRU gateway is implemented as jeeves service and can be reached at?<SRUQUERYstring> . Only
publicly accessible metadata is searchable, as was already the case with
the classical Z39.50 interface. SRU is only active if the Z39.50
interface is enabled (could be changed easily when a new config option
is added). There is no possibility to select the catalogue which is to
be searched via the URL (portal.sru/mycatalogue?<SRUQUERY>), since this
would require an additional "/" after the portal.sru in the URL, and
there seems to be no way to implement this in jeeves. The possibility to
select the catalogue is not required though and a default (GN local) can
safely be assumed according to the standard.
The SRU implementation requires an update to jzkit3 (from version 1
before), mostly because of the caching and improved configuration.
I used the improved configuration features to configure the usage of a
"geo" attribute set, vastly improving the ISO23950 capabilities of GN.
The list of attributes was elaborated with the help of the ISO23950
community, and is mapped to corresponding lucene indices. Strict
adherence to this profile can be configured via the jzkit configuration
files which I put into WEB-INF/classes (have to be in the classpath). (I
will document how to use them). We would like to encourage discussion of
the supported attributes and their mappings to lucene indices. Please
find a document listing the mappings attached to this mail and the trac.
(SRUattributes_1x.pdf)
There were some concerns about breaking the remote Z39.50 implementation
of proprietary GN extensions, like BLUENetMEST. I downloaded them and
made sure that the new implementation is easy to integrate. Currently
there are two "interfaces" to the remote search. Triggering a search,
and getting a list of (remote) searchable repositories.
With the new implementation launching a remote search works as it used
to work before. In order to get the list of searchable repositories, I
provided a helper class which returns names and connection strings (used
to indicate which remote repositories to search in a request). This
replaces the old mechanism of including the repositories.xml, which has
been moved to a new format.
For testing I included two operations (testListRemoteRepositories,
testRemoteSearch) into the SRU gateway (SRUSearch.java) which simulate
how the "interfaces" are accessed and can serve as an example on how to
use the remote search. (the SRU does not (but could) do a remote search)
The implementation is currently being tested by the German,Korean and
Chinese weather-services, as we plan to use geonetwork as part of a
global distributed meta-data catalogue (WIS).
The (XML) format in which SRU requests are returned can be changed in
xsl/portal-srusearch.xsl. There might be some discussion whether the
metadata files should be returned as XML or CDATA and which namespaces
should be used for the reply, and we would welcome suggestions of the
community.
The patch being a medium to major change, I would hope that it can still
be considered as a feature for 2.5, since we would like to make (and
continue making) a contribution to the GN community which has provided
us with an essential tool for WMOs WIS initiative.
Please dont hesitate to criticise this, I'm sure there are many
shortcomings, this work is meant to be understood as a proposal and I'm
happy to address concerns of the community.
Thanks also go to Ian Ibbotson the developer of JZkit for help and
accepting patches, who received a separate copy of this email.
wow, this was a long one.. directly correlated to the amount of pain in
the patch I guess (-;
best regards
Timo
--
Timo Pröscholdt
Program Officer, WMO Information System (WIS)
Observing and Information Systems Department
World Meteorological Organization
Tel: +41 22 730 81 76
Cell: +41 77 40 63 554 Gabriel,
Indeed there is a sandbox [1] to start working on switching on maven.
Last thread on this topic is here :
I think it is still desired by the community, at least I do.
Any help or feedback on this would be greatly appreciated.
[1]
ciao,
Mathieu
On Fri, Jan 15, 2010 at 10:00 PM, Gabriel Roldan <groldan@...>wrote:
> Hi Geonetwork developers,
> I've heard there was some work going on or at least the intention to
> move out from ant in favor of maven.
>
> I would like to know if that's still desired and if so, how can I help
> with it.
>
> Cheers,
> Gabriel
>
> --
>
>
>
Sorry a lot this was not this list.
On Tue, Jan 19, 2010 at 1:24 PM, jose garcia <josegar74@...> wrote:
> all,
I am working on a gn installation int he UK and have been uploading data,
but I am having problems in the image quality of layers. Under the default
map which I have a map of the UK, the image is fine but when I load in
vector data (shapefiles), the background goes white and the images have a
fuzzy boundary around them. They only appear OK when zoomed in very close.
I have been searching through the help files and have not been able to
resolve this issues - any help would be greatly appreciated as I cannot
understand why they display fine in geoserver but appear so poor when loaded
in intermap within gn.
Many thanks
Mark
--
View this message in context:
Sent from the GeoNetwork developer mailing list archive at Nabble.com.
#187: iso19139 XSD update
-------------------------+--------------------------------------------------
Reporter: fxp | Owner: geonetwork-devel@...
Type: enhancement | Status: new
Priority: minor | Milestone:
Component: General | Version: v2.5.0
Keywords: XSD |
-------------------------+--------------------------------------------------
* we are not using latest iso19139/119 XSD release from Eden website (ie )
* changes are :
* GML3.2.1 instead of 3.2.0
* 3 typos in gmxCodelist.xml (fixed in GeoNetwork loc files)
* Added tcCodelists.xml
* changes in srv namespace only
* !ServiceMetadata.xsd : changes on cardinality
{{{
<xs:sequence>
<xs:element
</xs:sequence>
replaced by
<xs:sequence
<xs:element
</xs:sequence>
which is equal from an XSD point of view I think.
}}}
* !ServiceModel.xsd (here we need a small migration task)
* SV_ServiceSpecification_Type : Based on a new class. Need to change
element order : typeSpec is the last one.
* SV_PlatformNeutralServiceSpecification_Type : Based on a new class.
Remove typeSpec element.
* SV_PlatformSpecificServiceSpecification_Type : Based on a new class.
No changes.
Then, 2 others elements are added:
* gco:ScopedName in srv:SV_CoupledResource which comes from CSW ISO
profil
* srv:keywords in srv:SV_ServiceIdentification : I don't know why this
one is in that XSD. It looks that this element should be removed from the
XSD.
----
Related discussion :
*-
td4028746.html#a4028746
--.
Hello Gabriel,
2010/1/16 Gabriel Roldan <groldan@...>:
> Further working on this I realize decoupling the configuration from the
> web application might be more a convoluted process than the afore
> mentioned patch. For instance, the McKoiActivator is being given the
> appPath instead of the configPath hence can't really decouple.
>
> I wonder if there's any chance to consider making sure the full
> configuration can be externalized out of the app context and of so
> whether using the configPath instead of the appPath where relevant might
> suffice, or is it a too naive assumption?
AFAIK, appPath is passed in all Jeeves services init methods, it's not
used so much.
You current issue is only for starting mckoi ? using your new config
file parameter should work fine in McKoiActivator.
Did you notice any other parameters which could not be externalized ?
Cheers.
Francois
> Cheers,
> Gabriel
>
> Gabriel Roldan wrote:
>> Hi all,
>>
>> I am working on a system where GeoNetwork is a dependency and I would
>> need to run it for automated integration testing. Hence, it would be
>> nice if I could have the configuration files out of WEB-INF, in order
>> for the automatically deployed geonetwork war to read the configuration
>> from an external directory.
>> To do so, I created the attached patch that allows JeevesServlet to read
>> the config files from a directory specified in a servlet init parameter.
>>
>> Do you think that'd be a useful contribution and if so could please
>> consider committing it to trunk? Alternatively, I am eager of any
>> feedback/alternate way to accomplish the same thing you might propose.
>>
>> Best regards,
>> Gabriel
>>
>>
>> ------------------------------------------------------------------------
>>
>> ------------------------------------------------------------------------------
>>
>
Hi Francois,
> -----Original Message-----
> From: Francois Prunayre [mailto:fx.prunayre@...]
> Sent: Friday, 15 January 2010 9:14 PM
> Subject: Re: [GeoNetwork-devel] Proposal: Improved keyword
> selection in advanced search
>
> Maybe you could add a simple GUI parameter in config-gui.xml to be
> able to turn this feature on/off.
Yep, not a problem!
Thanks,
Michael
>
> +1 for me then.
>
> Cheers.
>
> Francois
>
>
> > I'd love to have both functionalities combined though, if I
> had the time.
> > Cheers,
> > Michael
> >
> >> -----Original Message-----
> >> From: Francois Prunayre [mailto:fx.prunayre@...]
> >> Sent: Thursday, 14 January 2010 4:20 PM
> >> To: Stegherr, Michael (CESRE, Kensington)
> >> Cc: geonetwork-devel@...
> >> Subject: Re: [GeoNetwork-devel] Proposal: Improved keyword
> >> selection in advanced search
> >>
> >> Hello Michael, thanks for that proposal. One question : when user
> >> search for keywords you search for keywords on all
> thesaurus, you do
> >> not take into account keywords used in the metadata, but
> reading the
> >> patch, it looks like you keep the current selection
> mechanism clicking
> >> on the keyword search field ? So both selection mechanism are
> >> available.
> >>
> >> Cheers.
> >>
> >> Francois
> >>
> >> 2010/1/14 <Michael.Stegherr@...>:
> >> > Dear PSC members,
> >> >
> >> > I'd like to propose an enhancement to the advanced search.
> >> It involves adding the improved keyword selection in the
> >> metadata editor to the advanced search to be able to pick
> >> keywords from controlled thesauri.
> >> >
> >> > The complete proposal is here with a patch for GN trunk:
> >> >
> >> >
> >> >
> >> > Thank you for your votes.
> >> >
> >> > Cheers,
> >> > Michael
> >> >
> >> > --
> >> > Michael Stegherr, Computer Scientist
> >> > CSIRO Exploration and Mining | Phone 08 6436 8572
> >> > AARC, 26 Dick Perry Av, Kensington WA 6151, Australia
> >> >
> >> --------------------------------------------------------------
> >> ----------------
> >> >
> >>
> >> >
> >>
> | https://sourceforge.net/p/geonetwork/mailman/geonetwork-devel/?viewmonth=201001&viewday=19 | CC-MAIN-2018-05 | refinedweb | 1,744 | 62.88 |
SQLite plugin for Flutter. Supports both iOS and Android.
In your flutter project add the dependency:
dependencies: ... sqflite: ^1.1(); };)'); });
A transaction is committed if the callback does not throw an error. If an error is thrown, the transaction is cancelled. So to rollback a transaction one way is to throw an exception.);ite type. Personally I store them as
int (millisSinceEpoch) or string (iso8601)
bool is not a supported SQLite type. Use
INTEGER and 0 and 1 values.
int
num
String
Uint8List
List<int>is supported but not recommended (slow conversion)
continueOrErrorfor batches
Database.isOpenwhich becomes false once the database is closed
example/README.md
Demonstrates how to use the sqflite plugin.
flutter run
Specific app entry point
flutter run -t lib/main.dart
For help getting started with Flutter, view the online documentation.
Add this to your package's pubspec.yaml file:
dependencies: sqflite: ^1.1.0
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:sqflite/sqflite.dart';
We analyzed this package on Apr 4, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries. | https://dart-pub.mirrors.sjtug.sjtu.edu.cn/packages/sqflite/versions/1.1.0 | CC-MAIN-2019-18 | refinedweb | 225 | 60.82 |
I love to say that Python is a nice subset of Lisp, and I discover that it's getting even more true as time passes. Recently, I've stumbled upon the PEP 443 that describes a way to dispatch generic functions, in a way that looks like what CLOS, the Common Lisp Object System, provides.
What are generic functions
If you come from the Lisp world, this won't be something new to you. The Lisp object system provides a really good way to define and handle method dispatching. It's a base of the Common Lisp object system. For my own pleasure to see Lisp code in a Python post, I'll show you how generic methods work in Lisp first.
To begin, let's define a few very simple classes.
(defclass snare-drum () ()) (defclass cymbal () ()) (defclass stick () ()) (defclass brushes () ())
This defines a few classes:
snare-drum,
symbal,
stick and
brushes, without any parent class nor attribute. These classes compose a drum kit, and we can combine them to play sound. So we define a
play method that takes two arguments, and returns a sound (as a string).
(defgeneric play (instrument accessory) (:documentation "Play sound with instrument and accessory."))
This only defines a generic method: it has no body, and cannot be called with any instance yet. At this stage, we only inform the object system that the method is generic and can be then implemented with various type of arguments. We'll start by implementing versions of this method that knows how to play with the snare-drum.
(defmethod play ((instrument snare-drum) (accessory stick)) "POC!") (defmethod play ((instrument snare-drum) (accessory brushes)) "SHHHH!")
Now we just defined concrete methods with code. They also takes two arguments:
instrument which is an instance of
snare-drum and
accessory that is an instance of
stick or
brushes.
At this stage, you should note the first difference with object system as built into language like Python: the method isn't tied to any class in particular. The methods are generic, and any class can implement them, or not.
Let's try it.
* (play (make-instance 'snare-drum) (make-instance 'stick)) "POC!" * (play (make-instance 'snare-drum) (make-instance 'brushes)) "SHHHH!" * (play (make-instance 'cymbal) (make-instance 'stick)) debugger invoked on a SIMPLE-ERROR in thread #<THREAD "main thread" RUNNING {1002ADAF23}>: There is no applicable method for the generic function #<STANDARD-GENERIC-FUNCTION PLAY (2)> when called with arguments (#<CYMBAL {1002B801D3}> #<STICK {1002B82763}>). Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL. restarts (invokable by number or by possibly-abbreviated name): 0: [RETRY] Retry calling the generic function. 1: [ABORT] Exit debugger, returning to top level. ((:METHOD NO-APPLICABLE-METHOD (T)) #<STANDARD-GENERIC-FUNCTION PLAY (2)> #<CYMBAL {1002B801D3}> #<STICK {1002B82763}>) [fast-method]
As you see, the function called depends on the class of the arguments. The object systems dispatch the function calls to the right function for us, depending on the arguments classes. If we call
play with instances that are not know to the object system, an error will be thrown.
Inheritance is also supported and the equivalent (but more powerful and less error prone) equivalent of Python's
super() is available via
(call-next-method).
(defclass snare-drum () ()) (defclass cymbal () ()) (defclass accessory () ()) (defclass stick (accessory) ()) (defclass brushes (accessory) ()) (defmethod play ((c cymbal) (a accessory)) "BIIING!") (defmethod play ((c cymbal) (b brushes)) (concatenate 'string "SSHHHH!" (call-next-method)))
In this example, we define the
stick and
brushes classes as subclass of the
accessory class. The
play method defined will return the sound BIIING! regardless of the accessory instance that is used to play the cymbal. Except in the case where it's a
brushes instance; only the most precise method is always called. The
(call-next-method) function is used to call the closest parent method, in this case that would be the method returning _"BIIING!".
* (play (make-instance 'cymbal) (make-instance 'stick)) "BIIING!" * (play (make-instance 'cymbal) (make-instance 'brushes)) "SSHHHH!BIIING!"
Note that CLOS is also able to dispatch on object instances themself by using the
eql specializer.
But if you're really curious about all features CLOS provides, I suggest you read the brief guide to CLOS by Jeff Dalton as a starter.
Python implementation
Python implements a simpler equivalence of this workflow with the
singledispatch function. It will be provided with Python 3.4 as part of the
functools module. Here's a rough equivalence of the above Lisp program.
import functools class SnareDrum(object): pass class Cymbal(object): pass class Stick(object): pass class Brushes(object): pass @functools.singledispatch def play(instrument, accessory): raise NotImplementedError("Cannot play these") @play.register(SnareDrum) def _(instrument, accessory): if isinstance(accessory, Stick): return "POC!" if isinstance(accessory, Brushes): return "SHHHH!" raise NotImplementedError("Cannot play these")
We define our four classes, and a base
play function that raises
NotImplementedError, indicating that by default we don't know what to do. We can then write specialized version of this function with a first instrument, the
SnareDrum. We then check for the accessory type that we get, and return the appropriate sound or raise
NotImplementedError again if we don't know what to do with it.
If we run it, it works as expected:
>>> play(SnareDrum(), Stick()) 'POC!' >>> play(SnareDrum(), Brushes()) 'SHHHH!' >>> play(Cymbal(), Brushes()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jd/Source/cpython/Lib/functools.py", line 562, in wrapper return dispatch(args[0].__class__)(*args, **kw) File "/home/jd/sd.py", line 10, in play raise NotImplementedError("Cannot play these") NotImplementedError: Cannot play these >>> play(SnareDrum(), Cymbal()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jd/Source/cpython/Lib/functools.py", line 562, in wrapper return dispatch(args[0].__class__)(*args, **kw) File "/home/jd/sd.py", line 18, in _ raise NotImplementedError("Cannot play these") NotImplementedError: Cannot play these
The
singledispatch module looks through the classes of the first argument passed to the
play function, and calls the right version of it. The first defined version of the
play function is always run for the
object class, so if our instrument is a class that we did not register for, this base function will be called.
For whose eager to try and use it, the
singledispatch function is provided Python 2.6 to 3.3 through the Python Package Index.
Limitations
First, as you noticed in the Lisp version, CLOS provides a multiple dispatcher that can dispatch on the type of any of the argument defined in the method prototype, not only the first one. Unfortunately, Python dispatcher is named singledispatch for this good reason: it only knows to dispatch on the first argument. Guido van Rossum wrote a short article about the subject that he called multimethod a few years ago.
Then, there's no way to call the parent function directly. There's no equivalent of the
(call-next-method) from Lisp nor the
super() function that allows to do that in Python class system. This means you will have to use various trick to bypass this limitation.
So while I am really glad that Python is going toward that direction, as it's a really powerful way to enhance an object system, it really lacks a lot of more advanced features that CLOS provides out of the box.
Though, improving this could be an interesting challenge. Especially to bring more CLOS power to Hy. :-) | https://julien.danjou.info/python-3-4-single-dispatch-generic-function/ | CC-MAIN-2018-39 | refinedweb | 1,245 | 56.05 |
3.1. Linear Regression¶
To get our feet wet, we’ll start off by looking at the problem of regression. This is the task of predicting a real valued target \(y\) given a data point \(x\). Regression problems are extremely common in practice. For example, they are used for predicting continuous values, such as house prices, temperatures, sales, and so on. This is quite different from classification problems (which we study later), where the outputs are discrete (such as apple, banana, orange, etc. in image classification).
3.1.1. Basic Elements of Linear Regression¶
In linear regression, the simplest and still perhaps the most useful approach, we assume that prediction can be expressed as a linear combination of the input features (thus giving the name linear regression).
3.1.1.1. Linear Model¶
For the sake of simplicity we we will use the problem of estimating the price of a house based (e.g. in dollars) on area (e.g. in square feet) and age (e.g. in years) as our running example. In this case we could model
While this is quite illustrative, it becomes extremely tedious when dealing with more than two variables (even just naming them becomes a pain). This is what mathematicians have invented vectors for. In the case of \(d\) variables we get
Given a collection of data points \(X\), and corresponding target values \(\mathbf{y}\), we’ll try to find the weight vector \(w\) and bias term \(b\) (also called an offset or intercept) that approximately associate data points \(x_i\) with their corresponding labels \(y_i\). Using slightly more advanced math notation, we can express the long sum as \(\hat{y} = \mathbf{w}^\top \mathbf{x} + b\). Finally, for a collection of data points \(\mathbf{X}\) the predictions \(\hat{\mathbf{y}}\) can be expressed via the matrix-vector product:
It’s quite reasonable to assume that the relationship between \(x\) and \(y\) is only approximately linear. There might be some error in measuring things. Likewise, while the price of a house typically decreases, this is probably less the case with very old historical mansions which are likely to be prized specifically for their age. To find the parameters \(w\) we need two more things: some way to measure the quality of the current model and secondly, some way to manipulate the model to improve its quality.
3.1.1.2. Training Data¶
The first thing that we need is data, such as the actual selling price of multiple houses as well as their corresponding area and age. We hope to find model parameters on this data to minimize the error between the predicted price and the real price of the model. In the terminology of machine learning, the data set is called a ‘training data’ or ‘training set’, a house (often a house and its price) is called a ‘sample’, and its actual selling price is called a ‘label’. The two factors used to predict the label are called ‘features’ or ‘covariates’. Features are used to describe the characteristics of the sample.
Typically we denote by \(n\) the number of samples that we collect. Each sample (indexed as \(i\)) is described by \(x^{(i)} = [x_1^{(i)}, x_2^{(i)}]\), and the label. The expression for evaluating the error of a sample with an index of \(i\) is as follows:
The constant \(1/2\) ensures that the constant coefficient, after deriving the quadratic term, is 1, which is slightly simpler in form. Obviously, the smaller the error, the closer the predicted price is to the actual price, and when the two are equal, the error will be zero. Given the training data set, this error is only related to the model parameters, so we record it as a function with the model parameters as parameters. In machine learning, we call the function that measures the error the ‘loss function’. The squared error function used here is also referred to as ‘square loss’.
To make things a bit more concrete, consider the example below where we plot such a regression problem for a one-dimensional case, e.g. for a model where house prices depend only on area.
Fig. 3.1 Linear regression is a single-layer neural network..
In model training, we want to find a set of model parameters, represented by \(\mathbf{w}^*, b^*\), that can minimize the average loss of training samples:
3.1.1.4. Optimization Algorithm¶
When the model and loss function are in a relatively simple format, the solution to the aforementioned loss minimization problem can be expressed analytically in a closed form solution, involving matrix inversion. This is very elegant, it allows for a lot of nice mathematical analysis, but it is also very restrictive insofar as this approach only works for a small number of cases (e.g. multilayer perceptrons and nonlinear layers are no go). Most deep learning models do not possess such analytical solutions. The value of the loss function can only be reduced by a finite update of model parameters via an incremental optimization algorithm.
The mini-batch stochastic gradient descent is widely used for deep learning to find numerical solutions. Its algorithm is simple: first, we initialize the values of the model parameters, typically at random; then we iterate over the data multiple times, so that each iteration may reduce the value of the loss function. In each iteration, we first randomly and uniformly sample a mini-batch \(\mathcal{B}\) consisting of a fixed number of training data examples; we then compute the derivative (gradient) of the average loss on the mini batch the with regard to the model parameters. Finally, the product of this result and a predetermined step size \(\eta > 0\) is used to change the parameters in the direction of the minimum of the loss. In math we have ‘learning.5. Model Prediction¶
After model training has been completed, we then record the values of the model parameters \(\mathbf{w}, b\) as \(\hat{\mathbf{w}}, \hat{b}\). Note that we do not necessarily obtain the optimal solution of the loss function minimizer, \(\mathbf{w}^*, b^*\) (or the true parameters), but instead we gain an approximation of the optimal solution. We can then use the learned linear regression model \(\hat{\mathbf{w}}^\top x + \hat{b}\) to estimate the price of any house outside the training data set with area (square feet) as \(x_1\) and house age (year) as \(x_2\). Here, estimation also referred to as ‘model prediction’ or ‘model inference’.
Note that calling this step ‘inference’ is actually quite a misnomer, albeit one that has become the default in deep learning. In statistics ‘inference’ means estimating parameters and outcomes based on other data. This misuse of terminology in deep learning can be a source of confusion when talking to statisticians. We adopt the incorrect, but by now common, terminology of using ‘inference’ when a (trained) model is applied to new data (and express our sincere apologies to centuries of statisticians).
3.1.2. From Linear Regression to Deep Networks¶
So far we only talked about linear functions. Neural Networks cover a lot more than that. That said, linear functions are an important building block. Let’s start by rewriting things in a ‘layer’ notation.
3.1.2.1. Neural Network Diagram¶
While in deep learning, we can represent model structures visually using neural network diagrams. To more clearly demonstrate the linear regression as the structure of neural network, Figure 3.1 uses a neural network diagram to represent the linear regression model presented in this section. The neural network diagram hides the weight and bias of the model parameter.
Fig. 3.2 Linear regression is a single-layer neural network.
In the neural network shown above, the inputs are \(x_1, x_2, \ldots x_d\). Sometimes the number of inputs is also referred as feature dimension. In the above cases the number of inputs is \(d\) and the number of outputs is \(1\). It should be noted that we use the output directly as the output of linear regression. Since the input layer does not involve any other nonlinearities or any further calculations, the number of layers is 1. Sometimes this setting is also referred to as a single neuron. Since all inputs are connected to all outputs (in this case it’s just one), the layer is also referred to as a ‘fully connected layer’ or ‘dense layer’.
3.1.2.2. A Detour to Biology¶
Neural networks quite clearly derive their name from Neuroscience. To understand a bit better how many network architectures were invented,.
Fig. 3.3 The real neuron can be quite varied. Some look rather arbitrary whereas others have a very regular structure. E.g. the visual system of many insects is quite regular. The analysis of such structures has often inspired neuroscientists to propose new architectures, and in some cases, this has been successful. Note, though, that it would be a fallacy to require a direct correspondence - just like airplanes are inspired by birds, they have many distinctions. Equal sources of inspiration were mathematics and computer science.
3.1.2.3. Vectorization for Speed¶
In model training or prediction, we often use vector calculations and process multiple observations at the same time. To illustrate why this matters, consider two methods of adding vectors. We begin by creating two 1000 dimensional ones first.
In [1]:
from mxnet import nd from time import time a = nd.ones(shape=10000) b = nd.ones(shape=10000)
One way to add vectors is to add them one coordinate at a time using a for loop.
In [2]:
start = time() c = nd.zeros(shape=10000) for i in range(10000): c[i] = a[i] + b[i] time() - start
Out[2]:
1.0053093433380127
Another way to add vectors is to add the vectors directly:
In [3]:
start = time() d = a + b time() - start
Out[3]:
0.00019860267639160156\).
It can be visualized as follows:
In [4]:
%matplotlib inline from matplotlib import pyplot as plt from IPython import display from mxnet import nd import math x = nd.arange(-7, 7, 0.01) # mean and variance pairs parameters = [(0,1), (0,2), (3,1)] # display SVG rather than JPG display.set_matplotlib_formats('svg') plt.figure(figsize=(10, 6)) for (mu, sigma) in parameters: p = (1/math.sqrt(2 * math.pi * sigma**2)) * nd.exp(-(0.5/sigma**2) * (x-mu)**2) plt.plot(x.asnumpy(), p.asnumpy(), label='mean ' + str(mu) + ', variance ' + str(sigma)) plt.legend() plt.show()(Y|X)\). In the above case this works out to be
A closer inspection reveals that for the purpose of minimizing \(-\log P(Y?
- Compare the runtime of the two methods of adding two vectors using other packages (such as NumPy) or other programming languages (such as MATLAB). | http://gluon.ai/chapter_deep-learning-basics/linear-regression.html | CC-MAIN-2019-04 | refinedweb | 1,786 | 53.71 |
Here is a Java program with peculiar behavior
public class Main { public static void main(String[] args) { foo(); System.out.println("done with call to foo"); } static void foo() { try { foo(); } finally { foo(); } } }
This program will never reach the println call, but when it aborts may have no stack trace.
This silence is caused by multiple StackOverflowExceptions. First the infinite loop in the body of the method generates one, which the finally clause tries to handle. But this finally clause also generates an infinite loop which the current JVMs can't handle gracefully leading to the completely silent abort.
The following short aspect will also generate this behavior:
aspect A { before(): call(* *(..)) { System.out.println("before"); } after(): call(* *(..)) { System.out.println("after"); } }
Why? Because the call to println is also a call matched by the pointcut call (* *(..)). We get no output because we used simple after() advice. If the aspect were changed to
aspect A { before(): call(* *(..)) { System.out.println("before"); } after() returning: call(* *(..)) { System.out.println("after"); } }
Then at least a StackOverflowException with a stack trace would be seen. In both cases, though, the overall problem is advice applying within its own body.
There's a simple idiom to use if you ever have a worry that your advice might apply in this way. Just restrict the advice from occurring in join points caused within the aspect. So:
aspect A { before(): call(* *(..)) && !within(A) { System.out.println("before"); } after() returning: call(* *(..)) && !within(A) { System.out.println("after"); } }
Other solutions might be to more closely restrict the pointcut in other ways, for example:
aspect A { before(): call(* MyObject.*(..)) { System.out.println("before"); } after() returning: call(* MyObject.*(..)) { System.out.println("after"); } }
The moral of the story is that unrestricted generic pointcuts can pick out more join points than intended. | http://www.eclipse.org/aspectj/doc/released/progguide/pitfalls-infiniteLoops.html | CC-MAIN-2014-52 | refinedweb | 298 | 57.98 |
Small problem but need helpHe did use those brackets a lot during the video. I thought the bracket at the start of the function...
Small problem but need helpWrite your question here.
[code]
cout << "Enter your guess: ";
[/code]
#include "stdafx...
Extremely new Thx for the help guys I really appreciate it. This will help me get started for the long road ahead.
Extremely new Now it's telling me with the "std" "a namespace name is not allowed"
This worked... don't know why ...
Extremely new Write your question here.
[code]
cout <<"Hello World" ;
[/code]
I just downloaded the Visu...
This user does not accept Private Messages | http://www.cplusplus.com/user/Tim1010/ | CC-MAIN-2014-42 | refinedweb | 107 | 87.21 |
05 July 2006 17:06 [Source: ICIS news]
LONDON (ICIS news)--European Commission plans to impose anti-dumping duties on Chinese and Thai polyethylene (PE) bags were on Wednesday condemned by the British Retail Consortium (BRC) as misguided and unlikely to benefit significantly European producers.
?xml:namespace>
“The EC said they have done an analysis of production costs of PE bags and export prices but we see no evidence that these bags are being sold for less than the cost of production,” said a Richard Dodd of the BRC.
The BRC represents major UK buyers of plastic bags including leading supermarkets such as Tesco, Asda and Sainsbury.
According to Asian converters, the EC is likely to impose from 1 September a 10-12% duty on Chinese imports while Thai producers would pay 5-6% in punitive duty. Plans for anti-dumping duties on Asian PE bags were confirmed on Wednesday by an EU trade spokesman.
“They are doing it to protect the production of PE bags in the European Union but buyers won’t switch to EU suppliers because they don’t have the resources to produce these types of bags on anywhere near the same scale,” said Dodd.
“Making it more expensive for supermarkets to buy bags will raise consumer costs, which is in no-one’s interest,” he added.
In February, BRC Brussels director Alisdair Gray estimated a 20% anti-dumping duty imposed on China would cost the UK’s four largest supermarkets £60m (€87m) a year. Based on his calculations, a 12% duty would cost them around £36m a year.
The British Retail Consortium (BRC) forms part of EuroCommerce, which represents the retail, wholesale and international trade sectors | http://www.icis.com/Articles/2006/07/05/1073773/uk-retailers-slam-anti-dumping-duty-on-pe-bags.html | CC-MAIN-2013-48 | refinedweb | 281 | 52.73 |
Printing longest subsequence of letters using Python
longest common subsequence dynamic programming code
longest increasing subsequence
longest common subsequence python library
longest common subsequence javascript
longest common subsequence pseudocode
print longest common subsequence leetcode
longest common substring
Write a block of code to print the longest subsequence of letters out of an input string. - First, ask user to input any string. - You code should print the longest sub-string of the input string, which contains only letters of the English alphabet, including both uppercase and lowercase letters.
If there are multiple subsequences of the same longest length, the code returns the first one. If input_str doesn’t contain any letter, the function returns an empty string.
For example,
for input string 'ab24[AaBbCDExy0longest]', it should print 'AaBbCDExy'.
for input string 'a a a1234b|c|d ', it should print 'a'.
for input string '12345 ', it should print "" (empty string).
Tried the following code but in vain:
# Your code here #longest_letterSeq = '' def longestSubstring(s): longest_letterSeq = '' i = 0 while(i<len(s)): curr_letterSeq = '' # For letter substring while(i<len(s) and s[i].isalpha()): curr_letterSeq += s[i] i+= 1 # Case handling if the character is not letter if(i< len(s) and not(s[i].isalpha())) : i+= 1 if(len(curr_letterSeq) > len(longest_letterSeq) ): longest_letterSeq = curr_letterSeq return longest_letterSeq str = input("Please input your string here: ") print(longestSubstring(str))
Can someone help with the edited or correct code?
One option is using
re.findall with
max:
import re max(re.findall('[a-zA-Z]+', 'ab24[AaBbCDExy0longest]'), key=len) # 'AaBbCDExy' max(re.findall('[a-zA-Z]+', 'a a a1234b|c|d '), key=len) # 'a'
Small hack to contemplate cases where there are no matches:
max(re.findall('[a-zA-Z]+', '12345 ') or [''], key=len) # ''
Though I'd suggest you to go with the more readable approach:
r = re.findall('[a-zA-Z]+', '12345 ') if r: out = max(r, key=len) else: out = ''
Or as @deepstop suggests with the conditional expression:
out = max(r, key=len) if r else ''
Printing Longest Common Subsequence, In this post we will discuss how to print Longest Common Subsequence itself. If the current characters in X and Y are equal (shown in bold), then they are part to post code in comments using C, C++, Java, Python, JavaScript, C#, PHP and}.
like yatu, my first though on such a problem would be regex. However I have offered a solution based on your approach. the problem in your code is that you only increment i when the character is an alpha. So for the string
abc123 you will increment i 3 times. but as the next char is not alpha you dont incerase i. which means i is now stuck with a value of 3 and thats less than the length of the string 6. so your function is stuck in an infinite loop since you stop increasing i.
A simplified version of your code can be written as below. essentially there is no need for a second while loop. Infact there is no need for a while loop at all, you can just use a for loop to iterate over each character in the string
def longestSubstring(string): longest_letterSeq = '' curr_letterSeq = '' for char in string: if char.isalpha(): curr_letterSeq += char else: if len(curr_letterSeq) > len(longest_letterSeq): longest_letterSeq = curr_letterSeq curr_letterSeq = '' return longest_letterSeq my_strings = ['ab24[AaBbCDExy0longest]', 'a a a1234b|c|d ', '12345'] for string in my_strings: longest = longestSubstring(string) print(f'the longest string in "{string}" is "{longest}"')
OUTPUT
the longest string in "ab24[AaBbCDExy0longest]" is "AaBbCDExy" the longest string in "a a a1234b|c|d " is "a" the longest string in "12345" is ""
Python Program for Longest Common Subsequence, The idea is very similar to printing Longest Common Subsequence(LCS) of two strings. Refer this post for details. C++; Java; Python. C++..
An alternative approach is to replace all non-letters with a space, split, and then choose the longest string.
import re def func(s) : l = re.sub('[^a-zA-Z]+', ' ', s).split() l.append('') # Append an empty string so the list is bound not to be empty. return max(l, key=len) func('ab24[AaBbCDExy0longest]') func('foo2bar') func('')
Longest Common Subsequence, For each character in s1(outer loop) select all the characters in s2(inner loop) c. create a character array LCS[] to print the longest common subsequence. 3. Print Longest substring without repeating characters; Check if sum of Fibonacci elements in an Array is a Fibonacci number or not; Count of subarrays having exactly K perfect square numbers; Check if given words are present in a string; Kth most frequent Character in a given String; Sorting a Map by value in C++ STL; LRU Cache in Python using
Longest Repeated Subsequence Problem, Python Program to find Longest Common Subsequence using Dynamic of a string s if r can be obtained from s by dropping zero or more characters from s. A string r is a longest common subsequence (LCS) of s and t if there is no i, j + 1)) c[i][j] = q return q def print_lcs(u, v, c): """Print one LCS of u and v using table c. How to print the subsequence? The above solution only finds length of subsequence. We can print the subsequence using dp[n+1][n+1] table built. The idea is similar to printing LCS. // Pseudo code to find longest repeated // subsequence using the dp[][] table filled // above.
Print Longest common subsequence, The longest common subsequence (LCS) is defined as the The longest you will understand the working of LCS with working code in C, C++, Java, and Python. If the character correspoding to the current row and current column are 1 elif L[i-1][j] > L[i][j-1]: i -= 1 else: j -= 1 # Printing the sub sequences print("S1 : " + S1}.
Python Program to find Longest Common Subsequence using , What is Longest Common Subsequence: A longest subsequence is a add 1 to the result and remove the last character from both the strings and make recursive call to the modified strings. Complete Code( Include Printing Result): Python Basic - 1: Exercise-65 with Solution. In mathematics, a subsequence is a sequence that can be derived from another sequence by deleting some or no elements without changing the order of the remaining elements.
- Should be
key=lenon the 3rd line of the last example, or just
out = max(r, key=len) if r else ''
- Yes thanks @Deepstop . Yes just wanted to go for a more readable one for OP. Thanks anyways, lets add that too
- NP. I thought as much. | http://thetopsites.net/article/58199385.shtml | CC-MAIN-2021-04 | refinedweb | 1,089 | 60.75 |
I've been trying to insert normally distributed noise onto my cells via IClamp, but the amp can only be set to one number/integer/float. I have a vector of say 1x1000 (1 per ms) that I want to set as the amp value. I guess one way would be to make a 1000 instances of the IClamp set at each time point, but that seems rather brutish.
I've looked through the previous post on this matter, where Ted posted the hoc code to do this (viewtopic.php?t=2986), but I'm looking to build my network using Python as the interpreter.
Code: Select all
def Insert_Noise(self, noise_mean, noise_std_dev): noise_mean, noise_std_dev = 1, 0.5 self.noise_list = [] for idx1 in range(self.N): t_list = np.arange(0,self.stop_time,h.dt) noise_current = np.random.normal(noise_mean, noise_std_dev, len(t_list)) noise_current_vector = h.Vector() noise_current_vector.from_python(noise_current) noise_input = h.IClamp(0.5, sec = self.cells[idx1].soma) noise_input.delay = 0 noise_input.dur = 1e9 #noise_input.amp = 0.1#noise_current_vector noise_current_vector.play(noise_input.amp, t_list, True) self.noise_list.append(noise_input)
If there is another solution I would be happy to pursue it. I would also be happy with some guidance/pointers on where to look/how I could translate the hoc code into a python format. I have access to the NEURON book as well.
Thanks in advance!
Ian | https://www.neuron.yale.edu/phpBB2/viewtopic.php?f=2&t=4129&p=18734&sid=e022d29961c94e42a532a5a736af72a2 | CC-MAIN-2021-21 | refinedweb | 228 | 70.19 |
Learning lists and arrays and I am not sure where I went wrong with this program. Keep in mind I am still new to python. Unsure if i am doing it right. Ive read a few tutorials and maybe Im not grasping list and arrays. Ive got it to where you can type a name but it doesnt transfer to a list and then i get list is empty constantly as well as other errors under other functions in the code.
def display_menu():
print("")
print("1. Roster ")
print("2. Add")
print("3. Remove ")
print("4. Edit ")
print("9. Exit ")
print("")
return int(input("Selection> "))
def printmembers():
if namelist > 0:
print(namelist)
else:
print("List is empty")
def append(name):
pass
def addmember():
name = input("Type in a name to add: ")
append(name)
def remove():
pass
def removemember():
m = input("Enter Member name to delete:")
if m in namelist:
remove(m)
else:
print(m, "was not found")
def index():
pass
def editmember():
old_name = input("What would you like to change?")
if old_name in namelist:
item_number = namelist.index(old_name)
new_name = input("What is the new name? ")
namelist[item_number] = new_name
else:
print(old_name, 'was not found')
print("Welcome to the Team Manager")
namelist = 0
menu_item = display_menu()
while menu_item != 9:
if menu_item == 1:
printmembers()
elif menu_item == 2:
addmember()
elif menu_item == 3:
removemember()
elif menu_item == 4:
editmember()
menu_item = display_menu()
print("Exiting Program...")
For starting out, you've got the right ideas and you're making good progress. The main problem is how you defined
namelist = 0, making it a number. Instead,
namelist needs to be an actual
list for you to add or append anything to it. Also, you're
append() method is not necessary since once you define
namelist as a
list, you can use the built-in
list.append() method, without having to write your own method.
So here are a few suggestions/corrections, which once you have the basis working correctly, you should be able to work out the rest of the bug fixes and logic.
Since you don't have any main() method, you can define
namelist on
the first line of code, before any other code, so that it is
referenced in each method:
namelist = [] # an empty list
Change
addmember() method to:
def addmember():
name = raw_input("Type in a name to add: ")
namelist.append(name)
Since
namelist is a list, we can use the
built-in
len() method on
nameslist to check if it's empty when printing out its contents (if any):
def printmembers():
if len(namelist) > 0: # Get the length of the list
print(namelist)
else:
print("List is empty")
Now that the
Add() menu option is working for adding a name to the
namelist, you should be able to implement removing, and editing names to the list using similar logic. | https://codedump.io/share/NtBFEcw16L8S/1/adding-accessing-and-deleting-items-from-a-python-list | CC-MAIN-2017-51 | refinedweb | 463 | 65.66 |
sensor_get_delay()
Get the sensor delay between events.
Synopsis:
#include <sensor/libsensor.h>
int sensor_get_delay(sensor_t *sensor, uint32_t *delay)
Since:
BlackBerry 10.0.0
Arguments:
- sensor
The sensor to access.
- delay
The delay (in microseconds).
Library:libsensor (For the qcc command, use the -l sensor option to link against this library)
Description:
This function returns the sensor delay between events (i.e., the period of time that elapses before the sensor delivers the next event).
Returns:
EOK on success, an errno value otherwise.
- See also:
sensor_set_delay() has not been called, the default delay is returned. Otherwise, the rate set by sensor_set_delay() is returned. Note that the rate returned may be different than what was set.
Last modified: 2014-05-14
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.sensor.lib_ref/topic/sensor_get_delay.html | CC-MAIN-2015-22 | refinedweb | 135 | 53.98 |
in reply to
An Apology for Puncish
The beauty of puncish variables is in the ease with which they can be recognized as perlvars. There is an elegance to the namespace.
I absolutely agree.
I've two counter arguments to the use English; justifictions:
And they all do it for the same good reasons. Perhaps primary of these is that it simpler and more efficient for those familar with the field.
And what about use Cockney;, use Scouser;, use Geordie; & use Glaswegian;?
Maybe we need something along the lines of use Dialect qw/ ... /;. Isn't that going to make code much clearer!?
Perl Cookbook
How to Cook Everything
The Anarchist Cookbook
Creative Accounting Exposed
To Serve Man
Cooking for Geeks
Star Trek Cooking Manual
Manifold Destiny
Other
Results (155 votes),
past polls | http://www.perlmonks.org/?node_id=739368 | CC-MAIN-2014-41 | refinedweb | 132 | 73.78 |
Difference between revisions of "M2T-JET-FAQ/How do I navigate an XMI model with JET?"
Latest revision as of 10:45, 11 August 2009
Contents these resort, code for your model, and you have installed that plug-in into your workspace, the org.eclipse.jet.emfxml model loader will recognize the namespace URI in the root element, and do the same thing as above.
If neither of the above are true (perhaps because you are still defining the model in the workspace), you will either get very confusing error message ("Error: Feature 'version' not found."), or JET will load the raw XMI. But, you can still get JET to read your XMI as EObjects by including an xsi:schemaLocation attribute on your model. The easiest way is to use the EMF reflective editor (which will also let you create your model). You will']
JET Tags and XPath expressions
JET Tags and XPath expressions (1.0.0 and later, only)
Finally, if you are using JET 1.0.0 or later, you can simplify the example code as follows - its the same logic, but the JET tags are less intrusive.
<c:with <c:setVariable <c:iterate <c:if Orders for customer: ${customer/@name} <c:setVariable </c:if> Order ${@id} <c:iterate Article: ${@description} </c:iterate> </c:iterate> </c:with>
Downloads
- JET 1.0.0 example - contains project demo.ecore.nav.pre10
- Pre JET 1.0.0 example - contains project demo.ecore.nav.jet10
The above downloads are Eclipse Project Archives. Import them into your workspace as follows:
- Click File > Import. Select General > Existing Projects into Workspace.
- Click Next
- Click Select archive file then click Browse
- Find an select the downloaded ZIP file and click OK
- Click Finish
Each project contains pre-created JET Launch configurations and test data:
- The test data is located in the 'models' directory of each project
- To run against the test data:
- Click Run > Run Configurations
- Select JET Transformation, and choose demo.ecore.nav.jet10 (OrderSystem.xmi) or demo.ecore.nav.pre10 (OrderSystem.xmi)
- Click Run
- The output of the transformation is the file models/OrderSystem.report.txt. Note that JET does not overwrite files that have not changed. Do not be surprised if the transformation runs with only the message Successful Execution. This merely indicates no were changed. Deleting the file and re-running the transformation will produce log output similar to the following:
templates/main.jet(34,2): <ws:file Writing file: demo.ecore.nav.pre10/models/OrderSystem.report.txt Successful Execution | http://wiki.eclipse.org/index.php?title=M2T-JET-FAQ/How_do_I_navigate_an_XMI_model_with_JET%3F&diff=167427&oldid=167333 | CC-MAIN-2015-11 | refinedweb | 418 | 57.47 |
In my previous blog post we went over creating an Ember project with Node or iojs.
To keep it simple we created an Express application with only one route that directed everything to our public/index.html. The index.html was the compiled Ember.js route.
Dan Hutchinson in the comments pointed out that we could have also created an Ember server and used the proxy command so all XHR requests would be sent to the Node server. Something like this.
$ ember server --proxy $ node server.js //listening port 3000
The Ember server is running on port 4200 with a proxy to the Node server running on port 3000. This works fine and is a great way to test. Later on you can compile it if needed.
In this tutorial we'll add a connection to a MongoDB database and create a simple REST service to get and post quotes. You can follow along on Github if you like. (The Ember project is not included yet, only the Node server)
Setup
We left off from the last post with a working Node Express server. We'll continue from there.
Before we begin we'll need to make sure we have mongoDB installed. After this is done we can install mongoose. This will allow us to interface to our mongoDB database in Node.
$ npm install mongoose --save $ npm install body-parser --save
In addition we'll need the middleware body-parser. This will help us parse different type of response formats from and to our Node server.
Node Server File
Here is the new server.js file with comments. (We are using Express 4.12)
// server.js // modules ================================================= var express = require('express'); var app = express(); var bodyParser = require('body-parser'); // set our port var port = process.env.PORT || 3000; // set up mongoose, assume locally installed var mongoose = require('mongoose'); mongoose.connect('mongodb://localhost/RESTServer'); // set the static files location for our Ember application app.use(express.static(__dirname + '/public')); //bodyParser Middleware to allow different encoding requests app.use(bodyParser.urlencoded({ extended: true })); app.use(bodyParser.json()); // to support JSON-encoded bodies //Routes API var router = express.Router(); app.use('/', router); require('./app/routes')(router); // configure our routes // startup our app at app.listen(port); // expose app exports = module.exports = app;
There are a few changes from our last post. We've included the bodyParser as well as Mongoose which will talk to our local MongoDB database. We are now using the Express router to setup our routes.
Directory Structure
The directory structure will look like this.
├── api │ └── quote.js ├── app │ └── routes.js ├── models │ └── quote.js ├── package.json ├── TestProject ├── public └── server.js
The api folder will hold our logic to add and retrieve posts. The models.js file shows the model for our quote schema. The TestProject folder holds our Ember.js application and the public folder is where the compiled Ember project resides.
Model
Let's take a look at this first.
// models/quote.js var mongoose = require('mongoose'); var Schema = mongoose.Schema; var QuoteSchema = new Schema({ quote: String, author: String }); module.exports = mongoose.model('Quote', QuoteSchema);
All we are doing here is following the proper schema setup for mongoose and exporting it out as a module. The schema entails just a quote and an author.
API
We want to be able to add new quotes and retrieve them if needed. Let's take a look at that.
// api/quote.js var Quote = require('../models/quote'); module.exports.getAllQuotes = function(req, res) { Quote.find(function(err, quotes) { if (err) { res.send(err); } res.json({quotes: quotes}); }); }; module.exports.addQuote = function(req,res) { var quote = new Quote(req.body.quote); quote.save(function(err) { if (err) { res.send(err); } res.json({quote: quote}); }); };
This is a little more complicated. Once again we'll use module exports to create addQuote and getAllQuotes. getAllQuotes uses the Quote object which is the mongoose schema we created earlier. We have a lot of different queries available to us. This will find all the quotes and then display the response in json. addQuote will save the quote to the MongoDB database.
Router
Everything is in place we can now take a peek at the router.
// app/router.js var quotes = require('../api/quote'); module.exports = function(router) { router.route('/quotes').post(function(req, res) { console.log(req.body); quotes.addQuote(req,res); }) .get(function(req,res) { quotes.getAllQuotes(req,res) }); router.route('*').get(function(req, res) { res.sendfile('./public/index.html'); // load our public/index.html file }); };
These routes are similar to what we had before except now we are using the router object. We have a new quotes route that executes the getAllQuotes module that we created earlier for get requests. If the agent connects with a post request then we send them to the addQuote module. The default route remains the same.
Testing Out Node Server
Before we move on let's test this with Postman! Postman is a simple tool to help send different requests to a server.
First we'll start the server
$ node server.js
We'll send a quote to the server. (FYI there seemed to be an issue with Postman sending a Content-Type application/json to the server so I had to specifically set that in the headers for this to work!)
We sent over this json message.
{"quote":{"quote":"If it is to be it's up to me!","author":"Unknown"}}
You could always just use the curl command as well.
$ curl -d '{"quote":{"quote":"You know me!","author":"Unknown"}}' -H "Content-Type: application/json"
We received a positive response so everything should be working.
Ember Setup
Since we have everything setup in our server let's see if we can get Ember working with it now. Once again we'll assume you're using the old Ember project that we created in the last post. If not that's OK, all you need to do is create a new project.
Setup
Before we begin let's create a few files and setup our CSP.
//'" } };
The above CSP will allow us to connect to the Node server running on port 3000.
$ ember g resource quote quote:string author:string
This will generate our route, model and template for us. When it generates the model it will populate the DS.attr for us for quote and author.
It's worth noting that we many have issues in the future since now we have a quote route using Ember and a quotes route in our Node server. For now this is OK, however in future posts I might just set all our REST node server routes to /api so it won't conflict.
$ ember g serializer application
We'll need to create a serializer so Ember data knows what the primary key is. It defaults to id however since MongoDB defaults to _id so we need to change it.
// app/serializers/application.js import DS from 'ember-data'; export default DS.RESTSerializer.extend({ primaryKey: '_id' });
Route
Our route is very simple, it just retrieves all the quotes.
import Ember from 'ember'; export default Ember.Route.extend({ model: function() { return this.store.find('quote'); } });
Template
We should have everything in place to display our posts!
Quotes!<br/> {{#each quote in model}} Quote: {{quote.quote}}<br/> Author: {{quote.author}}<br/> {{/each}}
The each block will iterate through all the quotes sent from our model.
That should be it for Ember. The model is already created so we shouldn't have to worry about that.
Putting it all together
First we'll run both the Node and Ember server together, then we'll build Ember into our public folder.
$ node server.js $ ember server --proxy
This should allow us to check to see if everything is working. If everything is working should show a list of quotes we added.
If that's working we can go ahead and build.
$ ember build --environment=production --output-path=../public/
This is once again assuming the Ember application is in the same folder as the Node server. If all is well we can just run the application server. I created a Github for this project. You'll need to create the Ember project separately but this should get you started.
Future
Now that we've been able to get our application talking to a MongoDB database we can now look into doing some more things. For example, deleting and editing quotes. Or retrieving individual quotes. I'll be investigating that in the future.
Thanks to Connor for his great guide and Scotch.IO for their MEAN stack tutorial. Both of which helped me out in writing this.
Questions? Leave a comment below! | https://www.programwitherik.com/how-to-setup-your-ember-project-with-mongo-and-node/ | CC-MAIN-2019-26 | refinedweb | 1,445 | 69.18 |
Problem: Course Grade Modification
Write a program that uses a structure to store the following information:
Name, idNum, tests, avg, and grade
I got the most of the program completed so far, however I still need to do the average and grade, if anyone can assist me I would greatly appreciate it and can you explain what I did wrong and how your input resolved it. I am new at this, thanks!
Here's my code:
#include <iostream> #include <string> using namespace std; struct Student { string name, grade; int idNum, *tests; double avg; }; int main() { int numStudents, numTests; double total = 0; //Get Number of Students cout << "How many students will be entered? "; cin >> numStudents; //Get Number of Tests cout << "How many test scores are there? "; cin >> numTests; Student * student; student = new Student[numStudents]; //Get Student Names for (int i = 0; i < numStudents; i++) { cout << endl; cout << "Enter Student Name: "; cin.ignore(); getline(cin, (student + i)->name); //Get Student ID# cout << "Enter Student ID Number: "; cin >> (student + i)->idNum; (student + i)->tests = new int[numTests]; //Get Test Scores for (int t = 0; t < numTests; t++) { cout << "Enter Test Score " << t + 1 << ": "; cin >> (student + i)->tests[t]; //Validate Test entries while((student + i)->tests[t] < 0) { cout << "You must enter a value higher than zero.\n"; cout << "Enter Test Score " << t + 1 << ": "; cin >> (student + i)->tests[t]; } //Add all scores together for (int count = 0; count < (student + i)->tests[t]; count++) { total += student[count]; } //Get Average Score student->avg = total / numTests; } } //Display output of information entered for (int s = 0; s < numStudents; s++) { cout << endl; cout << "Student Name: " << (student + s)->name << endl; cout << "Student ID Number: " << (student + s)->idNum << endl; for (int t = 0; t < numTests; t++) { cout << "Test " << t + 1 << " score: " << (student + s)->tests[t] << endl; } cout << "The Average score for " << (student + s)->name << " is: " << student->avg << endl; } //Deletes Memory Locations for (int i = 0; i < numStudents; i++) delete [] student[i].tests; delete [] student; return 0; }
For some reason my average calculates the same no matter how many students I enter. It also doesnt look correct in the output screen. After I get this working I need to figure out how to output a grade level.
IE: 91-100 = A, 81-90 = B, etc. | https://www.daniweb.com/programming/software-development/threads/272983/pointer-struct-homework-help-needed | CC-MAIN-2018-30 | refinedweb | 373 | 56.73 |
Other Aliasmem_check, mem_alloc, mem_realloc, mem_free, mem_close
LIBRARIESDebug Library (-ldebug)
SYNOPSIS
#include <debug/memory.h>
void mem_open(void (*fail)(const char *fmt, ...));
void mem_check(void);
void *mem_alloc(size_t size);
void *mem_realloc(void *ptr, size_t size);
void mem_free(void *ptr);
void mem_close(void);
DESCRIPTIONmem_open() initializes the memory debugging system. It should be called before any of the other routines. You can specify a callback function which should be called whenever something bad happens, or NULL in which case the default error handler will be used. The default error handler logs error messages using the debug logging routines and exit.
mem_check() check all the allocated memory areas. This is called every time memory is allocated or freed. You can also call it anytime you think memory might be corrupted.
mem_alloc() allocates size bytes and returns a pointer to the allocated memory. The memory is not cleared.
mem_realloc() changes the size of the memory block pointed to by ptr to size bytes. The contents will be unchanged to the minimum of the old and new sizes; newly allocated memory will be uninitialized. If ptr is NULL, the call is equivalent to mem_alloc(size); if size is equal to zero, the call is equivalent to mem_free(ptr). Unless ptr is NULL, it must have been returned by an earlier call to mem_alloc() or mem_realloc().
mem_free() frees the memory space pointed to by ptr, which must have been returned by a previous call to mem_alloc() or mem_realloc(). If ptr is NULL, no operation is performed.
mem_close() checks for leaks and possible memory corruptions.
RETURN VALUEFor mem_alloc(), the value returned is a pointer to the allocated memory, which is suitably aligned for any kind of variable, or NULL if the request fails.
mem_realloc() returns a pointer to the newly allocated memory, which is suitably aligned for any kind of variable and may be different from ptr, or NULL if the request fails or if size was equal to 0. If mem_realloc() fails the original block is left untouched - it is not freed or moved.
All other functions returns no value.
NOTESIf the default fail callback is used or if this routines are combined with the log
AUTHORWritten by Abraham vd Merwe <[email protected]> | http://manpages.org/mem_open/3 | CC-MAIN-2019-51 | refinedweb | 366 | 64.3 |
I just computed multiple ring buffers for a point in EPSG 4326 with a 100m intervall for a maximum distance of 1000m. I was observing a calcualtion time of approx. 50s within ArcMap, 4min in ArcGIS Pro and 40s as a processing service.
Yet I would love to see the same performance in ArcMAP as well as ArcGIS Pro because I am shipping a toolbox with this arcpy command:)
arcpy.analysis.MultipleRingBuffer("Your single point feature", r"your ring buffer name", distancesString, "Meters", "distance", "ALL", "FULL")
Are there ways to increase calculation times in ArcGIS Pro?
I am running this on a X270 with this spec:
Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz, 2904 MHz, 2 Cores, 4 logical processors
16Gb Ram, SSD
no dedicated GPU.
Hey r.klingeresri-de-esridist. You might want to share the question with the... community.
Wondering if there was a solution or more chatter to this?
We are beginning to implement Pro in our organization but have found that when creating multi-ring buffers there is a delay (up to 10 minutes???) to finish the process? IS this a system resource/hardware issue or do others find that this simple tool doesn't work very well with Pro?
Hi Collin,
Just checked this python toolbox back in ArcMAP 10.6 AND ArcGIS Pro 2.1.3 and 2.2."""
point = arcpy.Parameter(
displayName= "buffer center",
name="buffer center",
datatype="GPFeatureRecordSetLayer",
parameterType="Required",
direction="Input")
params = [point]):
import time;
"""The source code of the tool.""")
i=0;
while i<10:
start = time.time()
arcpy.analysis.MultipleRingBuffer(parameters[0].value, "tester" + str(i), distancesString, "Meters", "distance", "ALL", "FULL")
end = time.time()
arcpy.AddMessage(end - start)
i+=1
return
The results show significant perfomance boost on my machine!
The raw values in seconds are shown here:
I can see a performance bioost of about 20x compared to ArcMAP and 50x compared to ArcGIS Pro 2.1.3 using ArcGIS Pro 2.2
Riccardo - This is great to see! Thanks for putting this effort in to prove this out. I can already report back that others on my team have tested this out in Pro 2.2 since reading this response and have reported that the tool does seem to run much faster.
Sure wish this Pro app was really up to the challenge of Production usage. It's things like this that are delaying our implementation. Frustrating.
Cheers! | https://community.esri.com/t5/geodev-germany-questions/multiple-ring-buffer-performance-issues/m-p/732132 | CC-MAIN-2021-17 | refinedweb | 403 | 66.33 |
Code :
import java.util.Scanner; public class BouncingBall { public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); int bounce = 0, time = 0; double height = 0.0; double velocity = 0; System.out.println("Enter the initial velocity of the ball: "); velocity = keyboard.nextDouble(); do { System.out.print("Time: % Height: 1.1f", time, height); time ++; height += velocity; velocity -= 32; if (height < 0) { height *= -0.5; velocity *= -0.5; bounce++; System.out.println("Bounce!"); } } while (bounce < 5); } }
I need all height values in this program to be rounded to the nearest 0.1. The part after printf is what I wrote down from what my professor said to do. However, the program is not running properly. The program does run properly when I do this instead:
Code :
System.out.println("Time: " + time + " Height: " + height); time ++;
However, the Height values don't get rounded to the nearest 0.1. How to I make it so the program rounds the height values to the nearest 0.1 using printf? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/32255-rounding-printf-printingthethread.html | CC-MAIN-2014-41 | refinedweb | 167 | 63.05 |
Catch me at:
The List!
Now that you have MonoGame & Visual Studio installed, go ahead and create a MonoGame Windows Project. You can call it whatever you want.
In the Solution Explorer, you’ll see a Content folder (more on this in a bit), an icon file (put your cursor on it), and finally two .cs files: Program.cs and Game1.cs
Program.cs is our game launcher. Go ahead and click on it. There’s not much there, other than a call to your Game1() class, and you will rarely have to do anything in this file.
Program.cs
The other file, Game1.cs, is a lot more interesting. This is where we’ll be spending most of our time. Go ahead and click on it and take a look around.
The first thing you’ll probably notice are the namespace references at the top of the page.
Even though there’s no actual dependancy on XNA in MonoGame, the decision was made to keep the namespaces in order to ensure maximum conpatibility with existing XNA code.
Next up, we have the GraphicsDeviceManager and SpriteBatch. Those will be discussed further in a later article.
The constructor is pretty basic, and includes a reference to the GraphicsDeviceManager and the Content folder.
There are five methods in this class, which I’ll cover now.
Initialize() – This method is called ONCE, when your game first starts up. Use it to set up any services or external things like non-graphical content.
LoadContent() – This is where you would load your games graphical content, such as sprites, spritefonts (more on those later), and 3D models.
UnloadContent() – THis is where you unload the content you referenced in the LoadContent() method.
Update() – After loading your assets, much of your coding will occur here. This is where you check for user input, and also update the state of objects being used in your game.
Draw() – This is where you draw onscreen, using the data you updated in the Update() method.
These last two methods, Update() and Draw(), form what is known as the game loop, which I’ll cover in the next section.
I’ll cover the Game Loop. If you came directly to this page, you can find the complete list of articles here. | http://gamecontest.geekswithblogs.net/cwilliams/archive/2017/02/07/233163.aspx | CC-MAIN-2020-50 | refinedweb | 378 | 75.61 |
What’s new in Celery 3.1 (Cipater.6, 2.7, and 3.3, and also supported on PyPy.
Table of Contents
Make sure you read the important notes before upgrading to this version.
-
-
Prefork Pool Improvements
Django supported out of the box
Events are now ordered using logical time
New worker node name format (
name@host)
-
Mingle: Worker synchronization
Gossip: Worker <-> Worker communication
Bootsteps: Extending the worker
-
Time limits can now be set by the client
Redis: Broadcast messages and virtual hosts
pytz replaces python-dateutil dependency
Support for setuptools extra requirements
subtask.__call__()now executes the task directly
-
-
Deprecation Time-line Changes
-
-
Preface¶
Deadlocks have long plagued our workers, and while uncommon they’re not acceptable. They’re also infamous for being extremely hard to diagnose and reproduce, so to make this job easier I wrote a stress test suite that bombards the worker with different tasks in an attempt to break it.
What happens if thousands of worker child processes are killed every second? what if we also kill the broker connection every 10 seconds? These are examples of what the stress test suite will do to the worker, and it reruns these tests using different configuration combinations to find edge case bugs.
The end result was that I had to rewrite the prefork pool to avoid the use of the POSIX semaphore. This was extremely challenging, but after months of hard work the worker now finally passes the stress test suite.
There’s probably more bugs to find, but the good news is that we now have a tool to reproduce them, so should you be so unlucky to experience a bug then we’ll write a test for it and squash it!
Note that I’ve also moved many broker transports into experimental status: the only transports recommended for production use today is RabbitMQ and Redis.
I don’t have the resources to maintain all of them, so bugs are left unresolved. I wish that someone will step up and take responsibility for these transports or donate resources to improve them, but as the situation is now I don’t think the quality is up to date with the rest of the code-base so I cannot recommend them for production use.
The next version of Celery 4.0 will focus on performance and removing rarely used parts of the library. Work has also started on a new message protocol, supporting multiple languages and more. The initial draft can be found here.
This has probably been the hardest release I’ve worked on, so no introduction to this changelog would be complete without a massive thank you to everyone who contributed and helped me test it!
Thank you for your support!
— Ask Solem
Important Notes¶
Dropped support for Python 2.5¶
Celery now requires Python 2.6 or later.
The new dual code base runs on both Python 2 and 3, without
requiring the
2to3 porting tool.
Note
This is also the last version to support Python 2.6! From Celery 4.0 and on-wards Python 2.7 or later will be required.
Last version to enable Pickle by default¶
Starting from Celery 4.0 the default serializer will be json.
If you depend on pickle being accepted you should be prepared
for this change by explicitly allowing your worker
to consume pickled messages using the
CELERY_ACCEPT_CONTENT
setting:
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
Make sure you only select the serialization formats you’ll actually be using, and make sure you’ve properly secured your broker from unwanted access (see the Security Guide).
The worker will emit a deprecation warning if you don’t define this setting.
for Kombu users
Kombu 3.0 no longer accepts pickled messages by default, so if you use Kombu directly then you have to configure your consumers: see the Kombu 3.0 Changelog for more information.
Old command-line programs removed and deprecated¶
Everyone should move to the new celery umbrella command, so we’re incrementally deprecating the old command names.
In this version we’ve removed all commands that aren’t used in init-scripts. The rest will be removed in 4.0.
If this isn’t a new installation then you may want to remove the old commands:
$ pip uninstall celery $ # repeat until it fails # ... $ pip uninstall celery $ pip install celery
Please run celery --help for help using the umbrella command.
News¶
Prefork Pool Improvements¶
These improvements are only active if you use an async capable transport. This means only RabbitMQ (AMQP) and Redis are supported at this point and other transports will still use the thread-based fallback implementation.
Pool is now using one IPC queue per child process.
Previously the pool shared one queue between all child processes, using a POSIX semaphore as a mutex to achieve exclusive read and write access.
The POSIX semaphore has now been removed and each child process gets a dedicated queue. This means that the worker will require more file descriptors (two descriptors per process), but it also means that performance is improved and we can send work to individual child processes.
POSIX semaphores aren’t released when a process is killed, so killing processes could lead to a deadlock if it happened while the semaphore was acquired. There’s no good solution to fix this, so the best option was to remove the semaphore.
Asynchronous write operations
The pool now uses async I/O to send work to the child processes.
Lost process detection is now immediate.
If a child process is killed or exits mysteriously the pool previously had to wait for 30 seconds before marking the task with a
WorkerLostError. It had to do this because the out-queue was shared between all processes, and the pool couldn’t be certain whether the process completed the task or not. So an arbitrary timeout of 30 seconds was chosen, as it was believed that the out-queue would’ve been drained by this point.
This timeout is no longer necessary, and so the task can be marked as failed as soon as the pool gets the notification that the process exited.
Rare race conditions fixed
Most of these bugs were never reported to us, but were discovered while running the new stress test suite.
Caveats¶
Long running tasks
The new pool will send tasks to a child process as long as the process in-queue is writable, and since the socket is buffered this means that the processes are, in effect, prefetching tasks.
This benefits performance but it also means that other tasks may be stuck waiting for a long running task to complete:
-> send T1 to Process A # A executes T1 -> send T2 to Process B # B executes T2 <- T2 complete -> send T3 to Process A # A still executing T1, T3 stuck in local buffer and # won't start until T1 returns
The buffer size varies based on the operating system: some may have a buffer as small as 64KB but on recent Linux versions the buffer size is 1MB (can only be changed system wide).
You can disable this prefetching behavior by enabling the
-Ofair worker option:
$ celery -A proj worker -l info -Ofair
With this option enabled the worker will only write to workers that are available for work, disabling the prefetch behavior.
Max tasks per child
If a process exits and pool prefetch is enabled the worker may have already written many tasks to the process in-queue, and these tasks must then be moved back and rewritten to a new process.
This is very expensive if you have the
--max-tasks-per-child
option set to a low value (e.g., less than 10), you should not be
using the
-Ofast scheduler option.
Django supported out of the box¶
Celery 3.0 introduced a shiny new API, but unfortunately didn’t have a solution for Django users.
The situation changes with this version as Django is now supported in core and new Django users coming to Celery are now expected to use the new API directly.
The Django community has a convention where there’s a separate
django-x package for every library, acting like a bridge between
Django and the library.
Having a separate project for Django users has been a pain for Celery, with multiple issue trackers and multiple documentation sources, and then lastly since 3.0 we even had different APIs.
With this version we challenge that convention and Django users will use the same library, the same API and the same documentation as everyone else.
There’s no rush to port your existing code to use the new API, but if you’d like to experiment with it you should know that:
You need to use a Celery application instance.
The new Celery API introduced in 3.0 requires users to instantiate the library by creating an application:
from celery import Celery app = Celery()
You need to explicitly integrate Celery with Django
Celery won’t automatically use the Django settings, so you can either configure Celery separately or you can tell it to use the Django settings with:
app.config_from_object('django.conf:settings')
Neither will it automatically traverse your installed apps to find task modules. If you want this behavior, you must explicitly pass a list of Django instances to the Celery app:
from django.conf import settings app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
You no longer use
manage.py
Instead you use the celery command directly:
$ celery -A proj worker -l info
For this to work your app module must store the
DJANGO_SETTINGS_MODULEenvironment variable, see the example in the Django guide.
To get started with the new API you should first read the First Steps with Celery tutorial, and then you should read the Django-specific instructions in First steps with Django.
The fixes and improvements applied by the django-celery library
are now automatically applied by core Celery when it detects that
the
DJANGO_SETTINGS_MODULE environment variable is set.
The distribution ships with a new example project using Django
in
examples/django:
Some features still require the django-celery library:
-
Celery doesn’t implement the Django database or cache result backends.
-
- Celery doesn’t ship with the database-based periodic task
-
scheduler.
Note
If you’re still using the old API when you upgrade to Celery 3.1
then you must make sure that your settings module contains
the
djcelery.setup_loader() line, since this will
no longer happen as a side-effect of importing the django-celery
module.
New users (or if you’ve ported to the new API) don’t need the
setup_loader
line anymore, and must make sure to remove it.
Events are now ordered using logical time¶
Keeping physical clocks in perfect sync is impossible, so using time-stamps to order events in a distributed system isn’t reliable.
Celery event messages have included a logical clock value for some time, but starting with this version that field is also used to order them.
Also, events now record timezone information
by including a new
utcoffset field in the event message.
This is a signed integer telling the difference from UTC time in hours,
so for example, an event sent from the Europe/London timezone in daylight savings
time will have an offset of 1.
app.events.Receiver will automatically convert the time-stamps
to the local timezone.
Note
The logical clock is synchronized with other nodes in the same cluster (neighbors), so this means that the logical epoch will start at the point when the first worker in the cluster starts.
If all of the workers are shutdown the clock value will be lost
and reset to 0. To protect against this, you should specify the
celery worker --statedb option such that the worker can
persist the clock value at shutdown.
You may notice that the logical clock is an integer value and increases very rapidly. Don’t worry about the value overflowing though, as even in the most busy clusters it may take several millennium before the clock exceeds a 64 bits value.
New worker node name format (
name@host)¶
Node names are now constructed by two elements: name and host-name separated by ‘@’.
This change was made to more easily identify multiple instances running on the same machine.
If a custom name isn’t specified then the worker will use the name ‘celery’ by default, resulting in a fully qualified node name of ‘celery@hostname’:
$ celery worker -n example.com celery@example.com
To also set the name you must include the @:
$ celery worker -n worker1@example.com worker1@example.com
The worker will identify itself using the fully qualified node name in events and broadcast messages, so where before a worker would identify itself as ‘worker1.example.com’, it’ll now use ‘celery@worker1.example.com’.
Remember that the
-n argument also supports
simple variable substitutions, so if the current host-name
is george.example.com then the
%h macro will expand into that:
$ celery worker -n worker1@%h worker1@george.example.com
The available substitutions are as follows:
Bound tasks¶
The task decorator can now create “bound tasks”, which means that the
task will receive the
self argument.
@app.task(bind=True) def send_twitter_status(self, oauth, tweet): try: twitter = Twitter(oauth) twitter.update_status(tweet) except (Twitter.FailWhaleError, Twitter.LoginError) as exc: raise self.retry(exc=exc)
Using bound tasks is now the recommended approach whenever
you need access to the task instance or request context.
Previously one would’ve to refer to the name of the task
instead (
send_twitter_status.retry), but this could lead to problems
in some configurations.
Mingle: Worker synchronization¶
The worker will now attempt to synchronize with other workers in the same cluster.
Synchronized data currently includes revoked tasks and logical clock.
This only happens at start-up and causes a one second start-up delay to collect broadcast responses from other workers.
You can disable this bootstep using the
celery worker --without-mingle option.
Gossip: Worker <-> Worker communication¶
Workers are now passively subscribing to worker related events like heartbeats.
This means that a worker knows what other workers are doing and can detect if they go offline. Currently this is only used for clock synchronization, but there are many possibilities for future additions and you can write extensions that take advantage of this already.
Some ideas include consensus protocols, reroute task to best worker (based on resource usage or data locality) or restarting workers when they crash.
We believe that although this is a small addition, it opens amazing possibilities.
You can disable this bootstep using the
celery worker --without-gossip option.
Bootsteps: Extending the worker¶
By writing bootsteps you can now easily extend the consumer part of the worker to add additional features, like custom message consumers.
The worker has been using bootsteps for some time, but these were never documented. In this version the consumer part of the worker has also been rewritten to use bootsteps and the new Extensions and Bootsteps guide documents examples extending the worker, including adding custom message consumers.
See the Extensions and Bootsteps guide for more information.
Note
Bootsteps written for older versions won’t be compatible with this version, as the API has changed significantly.
The old API was experimental and internal but should you be so unlucky to use it then please contact the mailing-list and we’ll help you port the bootstep to the new API.
New RPC result backend¶
This new experimental version of the
amqp result backend is a good
alternative to use in classical RPC scenarios, where the process that initiates
the task is always the process to retrieve the result.
It uses Kombu to send and retrieve results, and each client uses a unique queue for replies to be sent to. This avoids the significant overhead of the original amqp result backend which creates one queue per task.
By default results sent using this backend won’t persist, so they won’t
survive a broker restart. You can enable
the
CELERY_RESULT_PERSISTENT setting to change that.
CELERY_RESULT_BACKEND = 'rpc' CELERY_RESULT_PERSISTENT = True
Note that chords are currently not supported by the RPC backend.
Time limits can now be set by the client¶
Two new options have been added to the Calling API:
time_limit and
soft_time_limit:
>>> res = add.apply_async((2, 2), time_limit=10, soft_time_limit=8) >>> res = add.subtask((2, 2), time_limit=10, soft_time_limit=8).delay() >>> res = add.s(2, 2).set(time_limit=10, soft_time_limit=8).delay()
Contributed by Mher Movsisyan.
Redis: Broadcast messages and virtual hosts¶
Broadcast messages are currently seen by all virtual hosts when using the Redis transport. You can now fix this by enabling a prefix to all channels so that the messages are separated:
BROKER_TRANSPORT_OPTIONS = {'fanout_prefix': True}
Note that you’ll not be able to communicate with workers running older versions or workers that doesn’t have this setting enabled.
This setting will be the default in a future version.
Related to Issue #1490.
pytz replaces python-dateutil dependency¶
Celery no longer depends on the python-dateutil library, but instead a new dependency on the pytz library was added.
The pytz library was already recommended for accurate timezone support.
This also means that dependencies are the same for both Python 2 and
Python 3, and that the
requirements/default-py3k.txt file has
been removed.
Support for setuptools extra requirements¶
Pip now supports the setuptools extra requirements format, so we’ve removed the old bundles concept, and instead specify setuptools extras.
You install extras by specifying them inside brackets:
$ pip install celery[redis,mongodb]
The above will install the dependencies for Redis and MongoDB. You can list as many extras as you want.
Warning
You can’t use the
celery-with-* packages anymore, as these won’t be
updated to use Celery 3.1.
The complete list with examples is found in the Bundles section.
subtask.__call__() now executes the task directly¶
A misunderstanding led to
Signature.__call__ being an alias of
.delay but this doesn’t conform to the calling API of
Task which
calls the underlying task method.
This means that:
@app.task def add(x, y): return x + y add.s(2, 2)()
now does the same as calling the task directly:
>>> add(2, 2)
In Other News¶
Now depends on Kombu 3.0.
Now depends on billiard version 3.3.
Worker will now crash if running as the root user with pickle enabled.
Canvas:
group.apply_asyncand
chain.apply_asyncno longer starts separate task.
That the group and chord primitives supported the “calling API” like other subtasks was a nice idea, but it was useless in practice and often confused users. If you still want this behavior you can define a task to do it for you.
New method
Signature.freeze()can be used to “finalize” signatures/subtask.
Regular signature:
>>> s = add.s(2, 2) >>> result = s.freeze() >>> result <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2> >>> s.delay() <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>
Group:
>>> g = group(add.s(2, 2), add.s(4, 4)) >>> result = g.freeze() ]> >>> g() ]>
Chord exception behavior defined (Issue #1172).
From this version the chord callback will change state to FAILURE when a task part of a chord raises an exception.
See more at Error handling.
New ability to specify additional command line options to the worker and beat programs.
The
app.user_optionsattribute can be used to add additional command-line arguments, and expects
optparse-style options:
from celery import Celery from celery.bin import Option app = Celery() app.user_options['worker'].add( Option('--my-argument'), )
See the Extensions and Bootsteps guide for more information.
All events now include a
pidfield, which is the process id of the process that sent the event.
Event heartbeats are now calculated based on the time when the event was received by the monitor, and not the time reported by the worker.
This means that a worker with an out-of-sync clock will no longer show as ‘Offline’ in monitors.
A warning is now emitted if the difference between the senders time and the internal time is greater than 15 seconds, suggesting that the clocks are out of sync.
Monotonic clock support.
A monotonic clock is now used for timeouts and scheduling.
The monotonic clock function is built-in starting from Python 3.4, but we also have fallback implementations for Linux and macOS.
celery worker now supports a new
--detachargument to start the worker as a daemon in the background.
app.events.Receivernow sets a
local_receivedfield for incoming events, which is set to the time of when the event was received.
app.events.Dispatchernow accepts a
groupsargument which decides a white-list of event groups that’ll be sent.
The type of an event is a string separated by ‘-‘, where the part before the first ‘-‘ is the group. Currently there are only two groups:
workerand
task.
A dispatcher instantiated as follows:
>>> app.events.Dispatcher(connection, groups=['worker'])
will only send worker related events and silently drop any attempts to send events related to any other group.
New
BROKER_FAILOVER_STRATEGYsetting.
This setting can be used to change the transport fail-over strategy, can either be a callable returning an iterable or the name of a Kombu built-in failover strategy. Default is “round-robin”.
Contributed by Matt Wise.
Result.revokewill no longer wait for replies.
You can add the
reply=Trueargument if you really want to wait for responses from the workers.
Better support for link and link_error tasks for chords.
Contributed by Steeve Morin.
Worker: Now emits warning if the
CELERYD_POOLsetting is set to enable the eventlet/gevent pools.
The -P option should always be used to select the eventlet/gevent pool to ensure that the patches are applied as early as possible.
If you start the worker in a wrapper (like Django’s
manage.py) then you must apply the patches manually, for example by creating an alternative wrapper that monkey patches at the start of the program before importing any other modules.
There’s a now an ‘inspect clock’ command which will collect the current logical clock value from workers.
celery inspect stats now contains the process id of the worker’s main process.
Contributed by Mher Movsisyan.
New remote control command to dump a workers configuration.
Example:
$ celery inspect conf
Configuration values will be converted to values supported by JSON where possible.
Contributed by Mher Movsisyan.
New settings
CELERY_EVENT_QUEUE_TTLand
CELERY_EVENT_QUEUE_EXPIRES.
These control when a monitors event queue is deleted, and for how long events published to that queue will be visible. Only supported on RabbitMQ.
New Couchbase result backend.
This result backend enables you to store and retrieve task results using Couchbase.
See Couchbase backend settings for more information about configuring this result backend.
Contributed by Alain Masiero.
CentOS init-script now supports starting multiple worker instances.
See the script header for details.
Contributed by Jonathan Jordan.
AsyncResult.iter_nativenow sets default interval parameter to 0.5
Fix contributed by Idan Kamara
New setting
BROKER_LOGIN_METHOD.
This setting can be used to specify an alternate login method for the AMQP transports.
Contributed by Adrien Guinet
The
dump_confremote control command will now give the string representation for types that aren’t JSON compatible.
Function celery.security.setup_security is now
app.setup_security().
Task retry now propagates the message expiry value (Issue #980).
The value is forwarded at is, so the expiry time won’t change. To update the expiry time you’d’ve to pass a new expires argument to
retry().
Worker now crashes if a channel error occurs.
Channel errors are transport specific and is the list of exceptions returned by
Connection.channel_errors. For RabbitMQ this means that Celery will crash if the equivalence checks for one of the queues in
CELERY_QUEUESmismatches, which makes sense since this is a scenario where manual intervention is required.
Calling
AsyncResult.get()on a chain now propagates errors for previous tasks (Issue #1014).
The parent attribute of
AsyncResultis now reconstructed when using JSON serialization (Issue #1014).
Worker disconnection logs are now logged with severity warning instead of error.
Contributed by Chris Adams.
events.Stateno longer crashes when it receives unknown event types.
SQLAlchemy Result Backend: New
CELERY_RESULT_DB_TABLENAMESsetting can be used to change the name of the database tables used.
Contributed by Ryan Petrello.
- SQLAlchemy Result Backend: Now calls
enginge.disposeafter fork
(Issue #1564).
If you create your own SQLAlchemy engines then you must also make sure that these are closed after fork in the worker:
from multiprocessing.util import register_after_fork engine = create_engine(*engine_args) register_after_fork(engine, engine.dispose)
A stress test suite for the Celery worker has been written.
This is located in the
funtests/stressdirectory in the git repository. There’s a README file there to get you started.
The logger named
celery.concurrencyhas been renamed to
celery.pool.
New command line utility
celery graph.
This utility creates graphs in GraphViz dot format.
You can create graphs from the currently installed bootsteps:
# Create graph of currently installed bootsteps in both the worker # and consumer name-spaces. $ celery graph bootsteps | dot -T png -o steps.png # Graph of the consumer name-space only. $ celery graph bootsteps consumer | dot -T png -o consumer_only.png # Graph of the worker name-space only. $ celery graph bootsteps worker | dot -T png -o worker_only.png
Or graphs of workers in a cluster:
# Create graph from the current cluster $ celery graph workers | dot -T png -o workers.png # Create graph from a specified list of workers $ celery graph workers nodes:w1,w2,w3 | dot -T png workers.png # also specify the number of threads in each worker $ celery graph workers nodes:w1,w2,w3 threads:2,4,6 # …also specify the broker and backend URLs shown in the graph $ celery graph workers broker:amqp:// backend:redis:// # …also specify the max number of workers/threads shown (wmax/tmax), # enumerating anything that exceeds that number. $ celery graph workers wmax:10 tmax:3
Changed the way that app instances are pickled.
Apps can now define a
__reduce_keys__method that’s used instead of the old
AppPicklerattribute. For example, if your app defines a custom ‘foo’ attribute that needs to be preserved when pickling you can define a
__reduce_keys__as such:
import celery class Celery(celery.Celery): def __init__(self, *args, **kwargs): super(Celery, self).__init__(*args, **kwargs) self.foo = kwargs.get('foo') def __reduce_keys__(self): return super(Celery, self).__reduce_keys__().update( foo=self.foo, )
This is a much more convenient way to add support for pickling custom attributes. The old
AppPickleris still supported but its use is discouraged and we would like to remove it in a future version.
Ability to trace imports for debugging purposes.
The
C_IMPDEBUGcan be set to trace imports as they occur:
$ C_IMDEBUG=1 celery worker -l info
$ C_IMPDEBUG=1 celery shell
Message headers now available as part of the task request.
Example adding and retrieving a header value:
@app.task(bind=True) def t(self): return self.request.headers.get('sender') >>> t.apply_async(headers={'sender': 'George Costanza'})
New
before_task_publishsignal dispatched before a task message is sent and can be used to modify the final message fields (Issue #1281).
New
after_task_publishsignal replaces the old
task_sentsignal.
New
worker_process_shutdownsignal is dispatched in the prefork pool child processes as they exit.
Contributed by Daniel M Taub.
celery.platforms.PIDFilerenamed to
celery.platforms.Pidfile.
MongoDB Backend: Can now be configured using a URL:
MongoDB Backend: No longer using deprecated
pymongo.Connection.
MongoDB Backend: Now disables
auto_start_request.
MongoDB Backend: Now enables
use_greenletswhen eventlet/gevent is used.
subtask()/
maybe_subtask()renamed to
signature()/
maybe_signature().
Aliases still available for backwards compatibility.
The
correlation_idmessage property is now automatically set to the id of the task.
The task message
etaand
expiresfields now includes timezone information.
All result backends
store_result/
mark_as_*methods must now accept a
requestkeyword argument.
Events now emit warning if the broken
yajllibrary is used.
The
celeryd_initsignal now takes an extra keyword argument:
option.
This is the mapping of parsed command line arguments, and can be used to prepare new preload arguments (
app.user_options['preload']).
New callback:
app.on_configure().
This callback is called when an app is about to be configured (a configuration key is required).
Worker: No longer forks on
HUP.
This means that the worker will reuse the same pid for better support with external process supervisors.
Contributed by Jameel Al-Aziz.
Worker: The log message
Got task from broker …was changed to
Received task ….
Worker: The log message
Skipping revoked task …was changed to
Discarding revoked task ….
Optimization: Improved performance of
ResultSet.join_native().
Contributed by Stas Rudakou.
The
task_revokedsignal now accepts new
requestargument (Issue #1555).
Worker: New
-Xcommand line argument to exclude queues (Issue #1399).
Adds
C_FAKEFORKenvironment variable for simple init-script/celery multi debugging.
This means that you can now do:
$ C_FAKEFORK=1 celery multi start 10
or:
$ C_FAKEFORK=1 /etc/init.d/celeryd start
to avoid the daemonization step to see errors that aren’t visible due to missing stdout/stderr.
A
dryruncommand has been added to the generic init-script that enables this option.
New public API to push and pop from the current task stack:
celery.app.push_current_task()and
celery.app.pop_current_task`().
RetryTaskErrorhas been renamed to
Retry.
The old name is still available for backwards compatibility.
New semi-predicate exception
Reject.
Semipredicates documented: (Retry/Ignore/Reject).
Scheduled Removals¶
The
BROKER_INSISTsetting and the
insistargument to
~@connectionis no longer supported.
The
CELERY_AMQP_TASK_RESULT_CONNECTION_MAXsetting is no longer supported.
Use
BROKER_POOL_LIMITinstead.
The
CELERY_TASK_ERROR_WHITELISTsetting is no longer supported.
You should set the
ErrorMailattribute of the task class instead. You can also do this using
CELERY_ANNOTATIONS:
from celery import Celery from celery.utils.mail import ErrorMail class MyErrorMail(ErrorMail): whitelist = (KeyError, ImportError) def should_send(self, context, exc): return isinstance(exc, self.whitelist) app = Celery() app.conf.CELERY_ANNOTATIONS = { '*': { 'ErrorMail': MyErrorMails, } }
Functions that creates a broker connections no longer supports the
connect_timeoutargument.
This can now only be set using the
BROKER_CONNECTION_TIMEOUTsetting. This is because functions no longer create connections directly, but instead get them from the connection pool.
The
CELERY_AMQP_TASK_RESULT_EXPIRESsetting is no longer supported.
Use
CELERY_TASK_RESULT_EXPIRESinstead.
Fixes¶
AMQP Backend: join didn’t convert exceptions when using the json serializer.
Non-abstract task classes are now shared between apps (Issue #1150).
Note that non-abstract task classes shouldn’t be used in the new API. You should only create custom task classes when you use them as a base class in the
@taskdecorator.
This fix ensure backwards compatibility with older Celery versions so that non-abstract task classes works even if a module is imported multiple times so that the app is also instantiated multiple times.
Worker: Workaround for Unicode errors in logs (Issue #427).
Task methods:
.apply_asyncnow works properly if args list is None (Issue #1459).
Eventlet/gevent/solo/threads pools now properly handles
BaseExceptionerrors raised by tasks.
autoscaleand
pool_grow/
pool_shrinkremote control commands will now also automatically increase and decrease the consumer prefetch count.
Fix contributed by Daniel M. Taub.
celery control pool_commands didn’t coerce string arguments to int.
Redis/Cache chords: Callback result is now set to failure if the group disappeared from the database (Issue #1094).
Worker: Now makes sure that the shutdown process isn’t initiated more than once.
Programs: celery multi now properly handles both
-fand
--logfileoptions (Issue #1541).
Internal changes¶
Module
celery.task.tracehas been renamed to
celery.app.trace.
Module
celery.concurrency.processeshas been renamed to
celery.concurrency.prefork.
Classes that no longer fall back to using the default app:
Result backends (
celery.backends.base.BaseBackend)
celery.worker.WorkController
celery.worker.Consumer
celery.worker.request.Request
This means that you have to pass a specific app when instantiating these classes.
EventDispatcher.copy_bufferrenamed to
app.events.Dispatcher.extend_buffer().
Removed unused and never documented global instance
celery.events.state.state.
app.events.Receiveris now a
kombu.mixins.ConsumerMixinsubclass.
celery.apps.worker.Workerhas been refactored as a subclass of
celery.worker.WorkController.
This removes a lot of duplicate functionality.
The
Celery.with_default_connectionmethod has been removed in favor of
with app.connection_or_acquire(
app.connection_or_acquire())
The
celery.results.BaseDictBackendclass has been removed and is replaced by
celery.results.BaseBackend. | https://docs.celeryq.dev/en/v5.0.0/history/whatsnew-3.1.html | CC-MAIN-2022-33 | refinedweb | 5,309 | 56.96 |
XAML Namespaces and Namespace Mapping for WPF XAML
This topic further explains the presence and purpose of the two XAML namespace mappings as often found in the root tag of a WPF XAML file. It also describes how to produce similar mappings for using elements that are defined in your own code, and/or within separate assemblies.
This topic contains the following sections.
- What is a XAML Namespace?
- The WPF and XAML Namespace Declarations
- Mapping to Custom Classes and Assemblies
- Mapping CLR Namespaces to XML Namespaces in an Assembly
- Designer Namespaces and Other Prefixes From XAML Templates
- WPF and Assembly Loading
- Related Topics
A XAML namespace is really an extension of the concept of an XML namespace. The techniques of specifying a XAML namespace rely on the XML namespace syntax,. This latter consideration is also influenced by the concept of a XAML schema context. But for purposes of how WPF works with XAML namespaces, you can generally think of XAML namespaces in terms of a default XAML namespace, the XAML language namespace, and any further XAML namespaces as mapped by your XAML markup directly to specific backing CLR namespaces and referenced assemblies.
Within the namespace declarations in the root tag of many XAML files, you will see that there are typically two XML namespace declarations. The first declaration maps the overall WPF client / framework XAML namespace as the default:
xmlns=""
The second declaration maps a separate XAML namespace, mapping it (typically) to the x: prefix.
xmlns:x=""
The relationship between these declarations is that is followed by project templates, sample code, and the documentation of language features within this SDK. The XAML namespace defines many commonly-used features that are necessary even for basic WPF applications. For instance, in order to join any code-behind to a XAML file through a partial class, you must name that class as the x:Class attribute in the root element of the relevant XAML file. Or, any element as defined in a XAML page that you wish to access as a keyed resource should have the x:Key attribute set on the element in question. For more information on these and other aspects of XAML see XAML Overview (WPF) or XAML Syntax In Detail.
You can map XML namespaces to assemblies using a series of tokens within an xmlns prefix declaration, similar to how the standard WPF and XAML-intrinsics XAML namespaces are mapped to prefixes.
The syntax takes the following possible named tokens and following values:
clr-namespace: The CLR namespace declared within the assembly that contains the public types to expose as elements.
assembly= The assembly that contains some or all of the referenced CLR namespace. This value is typically just the name of the assembly, not the path,:
assembly can be omitted if the clr-namespace referenced is being defined within the same assembly as the application code that is referencing the custom classes. Or, an equivalent syntax for this case is to specify assembly=, with no string token following the equals sign.
Custom classes cannot be used as the root element of a page if defined in the same assembly. Partial classes in order to map multiple CLR namespaces to a single XAML namespace. This attribute, XmlnsDefinitionAttribute, is placed at the assembly level in the source code that produces the assembly. The WPF assembly source code uses this attribute to map the various common namespaces, such as System.Windows and System.Windows.Controls, to the namespace.
The XmlnsDefinitionAttribute takes two parameters: the XML/XAML.
If you are working with development environments and/or design tools for WPF XAML, you may notice that there are other defined XAML namespaces / prefixes within the XAML markup.
WPF Designer for Visual Studio uses a designer namespace that is typically mapped to the prefix d:. More recent project templates for WPF might pre-map this XAML namespace to support interchange of the XAML between WPF Designer for Visual Studio and other design environments. This design XAML namespace is used to perpetuate design state while roundtripping XAML-based UI in the designer. It is also used for features such as d:IsDataSource, which enable runtime data sources in a designer.
Another prefix you might see mapped is mc:. mc: is for markup compatibility, and is leveraging a markup compatibility pattern that is not necessarily XAML-specific. To some extent, the markup compatibility features can be used to exchange XAML between frameworks or across other boundaries of backing implementation, work between XAML schema contexts, provide compatibility for limited modes in designers, and so on. For more information on markup compatibility concepts and how they relate to WPF, see Markup Compatibility (mc:) Language Features.
The XAML schema context for WPF integrates with the WPF application model, which in turn uses the CLR-defined concept of AppDomain. The following sequence describes how XAML schema context interprets how to either load assemblies or find types at run time or design time, based on the WPF use of AppDomain and other factors.
Iterate through the AppDomain, looking for an already-loaded assembly that matches all aspects of the name, starting from the most recently loaded assembly.
If the name is qualified, call Assembly.Load(String) on the qualified name.
If the short name + public key token of a qualified name matches the assembly that the markup was loaded from, return that assembly.
Use the short name + public key token to call Assembly.Load(String).
If the name is unqualified, call Assembly.LoadWithPartialName.
Loose XAML does not use Step 3; there is no loaded-from assembly.
Compiled XAML for WPF (generated via XamlBuildTask) does not use the already-loaded assemblies from AppDomain (Step 1). Also, the name should never be unqualified from XamlBuildTask output, so Step 5 does not apply.
Compiled BAML (generated via PresentationBuildTask) uses all steps, although BAML also should not contain unqualified assembly names. | https://msdn.microsoft.com/en-us/library/vstudio/ms747086.aspx | CC-MAIN-2015-18 | refinedweb | 976 | 51.38 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.