text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Starting a project for a Linear Algebra class and I want to display 3D
rotation using JRuby and LWJGL. I already have working code written in
Java, but when I use the same .jar files in my /lib folder of the
project, it can’t find some of the classes.
For example, in the Java file, I can import:
import org.lwjgl.util.glu.GLU; #works!
But when I try to import it in JRuby:
require ‘java’
require ‘lwjgl.jar’
java_import org.lwjgl.util.glu.GLU; #Error!
I get an error with the 2nd; however, SOME of the imports DO work with
JRuby, just not this one and various others = /. If there’s someone out
there who’s done this and has experience with linking JRuby with LWJGL,
I would really appreciate any tips!
Thanks in advance. | https://www.ruby-forum.com/t/jruby-lwjgl-help-importing/224310 | CC-MAIN-2022-40 | refinedweb | 137 | 74.29 |
Dave Wants You/ Chris Owens/ CC BY
We are not superheroes on the Just Team.
We do not save lives or cure diseases, aside from the occasional software bug. We do not pull children out of burning buildings, though one colleague did adopt a stray puppy. We do not try to solve global issues such as overconsumption of natural resources, only the overconsumption of memory in applications.
What we actually do can be explained quite simply.
We provide you with tools to make your life as a developer just a little bit easier and perhaps a little more fun. We build tools that will save you time and cut your development cost.
Sounds simple? It's not at all.
To be honest, if it was just us sitting in a room, brainstorming in isolation on how to achieve our goals, we would never make any progress besides overdosing on coffee. That’s where you come in; that’s why we need you. You are part of the Just Team as much as any of us. In fact, you (yes, all of you) are the most important part of our team. You are our inspiration, our biggest critic, and our best ally. If you are a skeptic, as I tend to be, you may think this is all “just a bunch of marketing fluff.” As an aspiring skeptic, I am prepared to argue with you. Here are some tangible examples to show you why we at Just consider you are a valuable part of our team.
So far this year, you have created extensions for two of our products.
JustDecompile, our free .NET decompiler, now has a neat new plugin called GoToEntryPoint courtesy of Bernhard Lang. Thanks to Bernhard you can now add a simple context menu item in the assembly list to jump to the assembly/module entry-point.
JustCode, the essential Visual Studio productivity tool, has its very first, user-generated extension made available to the public: JustCodeStyleFormatExtension by Chad England. Chad wanted to have a way for JustCode to move using directives into the namespace, and he made it happen.
We are extremely grateful to both Chad and Bernhard for contributing to our efforts to build better tools.
Our products have received a lot of positive reviews and blog posts over the past few months. This feedback is both a validation and constant reminder that we are going in the right direction. Of course, not all feedback was positive, but we consider all feedback (positive or not) valuable in guiding our future direction and decisions on new features.
In recent months, we introduced new feedback portals for JustCode, JustMock, and JustTrace. On these sites, you can submit your feedback, share your ideas, and vote for your favorite features. Please do! Your feedback is what drives us forward. You are part of our team and your opinion counts. We promise to listen!
Subscribe to be the first to get our expert-written articles and tutorials for developers! | https://www.telerik.com/blogs/you-are-part-of-the-just-team | CC-MAIN-2019-09 | refinedweb | 498 | 72.87 |
Lang
Hi .Again me.. - Java Beginners
://
Thanks. I am sending running code...Hi .Again me.. Hi Friend......
can u pls send me some code on JPanel..
JPanel shoul have
1pic 1RadioButton
..
Like a Voter List
Java util package Examples
Java Util Package - Utility Package of Java
Java Util Package - Utility Package of Java
Java Utility package is one of the most commonly used packages in the java
program. The Utility Package of Java consist
java again - Date Calendar
java again I can't combine your source code yesterday, can you help me again. My problem is how we get result jtextfield2 from if jtexfield1 we enter(jTextfield keypressed) then the result out to jTextfield2,
This my jFrame
hi again - Java Beginners
/java/thread/thread-creation.shtml
code after changing..
import java.io.
matching database again - Java Beginners
matching database again Dear experts,
So happy I get through this ask a question page again. Thank God.
I want to say "A BIG THANK YOU" for helping me about the matching codes.
It is working now after fine tuning
Read data again - Java Beginners
Read data again sir,
i still hav a problem,first your code will be change like this :
in netbeans out message error 5. Can you help me again. My database like my question before.Can you fix and find the problem in my code
Read data again - Java Beginners
Read data again Hey,
i want to ask again about how to read data from txt,
My DB:
kd_div varchar(15),
nm_div varchar(30),
dep varchar(25),
jab varchar(35),
cab varchar(15),
ket varchar(30)
My data in txt file is://i
doesnt run again - Java Beginners
the soltion
Hi
I am sending u again the code, this code run in my
Hi..Again Doubt .. - Java Beginners
Hi..Again Doubt .. Thank u for ur Very Good Response...Really great..
i have completed that..
If i click the RadioButton,,ActionListenr should get call. It should add to the MS Acess table..Plz check this out....
hope u ill
call frame again - Java Beginners
read from jbutton1 in FrameA to FrameB,then i write "JAVA" in Jtextfield1(FrameB),then i click jbutton1 in FrameB. "JAVA" is a word i'am write in Jtexfield1
java util date - Time Zone throwing illegal argument exception
java util date - Time Zone throwing illegal argument exception Sample Code
String timestamp1 = "Wed Mar 02 00:00:54 PST 2011";
Date d = new Date...());
The date object is not getting created for IST time zone. Java
Read data again - Java Beginners
help again plz sorry - Java Beginners
help again plz sorry Thanks for giving me thread code
but i have a question
this code is comletelly right
and i want to make it runs much faster....
Thanks
util
Plz chk it and reply again - Java Beginners
Java util date
Java util date
The class Date in "java.util" package represents... to
string and string to date.
Read more at:
http:/
Drop Down Reloads again in IE..How to prevent this?
Drop Down Reloads again in IE..How to prevent this? Hi i was using two drop down box..One for Displaying date followed by another for Dispalying...? Its purely JavaScript and HTML page..Im not uing this concept in Java or any
Script on the page used too much memory. Reload to enable script again.
Script on the page used too much memory. Reload to enable script again. Using a java script to generate the dynamic report. If page open the full... to enable script again". After getting this error other pages also not working
again with xml - XML
again with xml hi all
i am a beginner in xml so pls give me the details regarding the methods used in it.
wat will return the methods... it is used.
pls post some example code for it..
thanks in advance hello
java - Java Interview Questions
information :
Thanks...java Can unreachable object become reachable again? Hi friend,
Yes,an unreachable object may become reachable again.
The garbage
Associate a value with an object
with an object in Java util.
Here, you
will know how to associate the value... of the several extentions
to the java programming language i.e. the "...;}
}
Download this example
this code will be problem it display the error again send jsp for registration form
this code will be problem it display the error again send jsp for registration form I AM ENTERING THE DETAILS OFTER IT DISPLAY THE ERROR PLEASE...;/option>
<option value="C#">C#</option>
<option value="Java
i written the program in the files but in adding whole file is writing once again - Java Beginners
Inheritance Example In Java
Inheritance Example In Java
In this section we will read about the Inheritance using a simple example.
Inheritance is an OOPs feature that allows to inherit...
the inheritance feature in Java programming. This example will demonstrate you
java - Java Beginners
java write a programme to to implement queues using list interface Hi Friend,
Please visit the following link:
Thanks
java - Applet
://
Thanks...java what is the use of java.utl Hi Friend,
The java
java persistence example
java persistence example java persistence example
stack and queue - Java Beginners
://
Hope...stack and queue write two different program in java
1.) stack
2
Java Client Application example
Java Client Application example Java Client Application example
STACK&QUEUE - Java Interview Questions
://
Hope that it will be helpful for you
Java - Java Interview Questions
://
Thank you for posting
Example of HashMap class in java
Example of HashMap class in java.
The HashMap is a class in java collection framwork. It stores values in the
form of key/value pair. It is not synchronized
Java set example
Java set example
In this section you will learn about set interface in java. In java set is a
collection that cannot contain duplicate element. The set... collection.
Example of java set interface.
import java.util.Iterator;
import to show class exception in java
Example to show class exception in java
In this Tutorial we are describing the example to show the use of
class exception in java .Exceptions are the condition
Java XStream
Java XStream
XStream is a simple library used to serialize the objects to XML and back
again into the objects.
Features of the XStream APIs
XStream provides better
java - Java Interview Questions
more information to visit....... an explanation with simple real time example
Hi friend,
Some points
Java Example Update Method - Java Beginners
Java Example Update Method I wants simple java example for overriding update method in applet .
please give me that example - Java Interview Questions
to :
Thanks
Static Method in java with realtime Example
Static Method in java with realtime Example could you please make me clear with Static Method in java with real-time Example
array example - Java Beginners
i cannot solve this example final Keyword Example
Java final Keyword Example
In this section we will read about the final... in Java in various different context. In this example we will create... Example : In this example we will create a
Java class into which we
enters an invalid value, they should be prompted again. For example:
Grade.... In this example, user input is in dark red,
prompts are in navy blue, and all other... a flexible range of values for the user's inputted answer. For example,
above you'll
Switch Statement example in Java
also provides you an example with complete
java source code for understanding... Switch Statement example in Java
This is very simple Java program
Database Connectivity Example In Java
Database Connectivity Example In Java
In this section we will read about how... the
SQL queries. In this example we will create a Java class into which we... and execute the above example you will get the output
as follows :
Then again
Working With File,Java Input,Java Input Output,Java Inputstream,Java io
Tutorial,Java io package,Java io example
;
Lets see an example that checks the existence of
a specified file...:\nisha>java CreateFile1
New file "myfile.txt" has been created... again then after
checking the existence of the file, it will not be created and you
Java Hello World code example
Java Hello World code example Hi,
Here is my code of Hello World program in Java:
public class HelloWorld {
public static void main...");
}
}
Thanks
Deepak Kumar Hi,
Learn Java
There and Back Again
There and Back Again
The weblog of Joshua Eichorn, AJAX, PHP and Open Source
Read full Description
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/93962 | CC-MAIN-2015-40 | refinedweb | 1,430 | 63.8 |
Red Hat Bugzilla – Full Text Bug Listing
Description of problem:?
Version-Release number of selected component (if applicable):
3.3.1
How reproducible:
Been like this on this machine for 3-4days.
Steps to Reproduce:
1. Install libreoffice-calc with libreoffice-pyuno
2. put the attached python script in <home>/.libreoffice/3/user/Scripts/python/era/ (create missing folders as necessary)
3. Start Calc and type-in 2 in A1 (make sure there are three sheets ... Sheet1, Sheet2, Sheet3)
4. Goto Tools Menu -> macros -> run-macro -> My Macros -> era -> *
5. Run Start() from the Macro dialog
5. Type 4 in A1
Actual results:
Nothing
Expected results:
Two more sheets should be added
Additional info:
I have confirmed this to be a bug. It is possible that this anomaly was
introduced during packaging. I have also filed a similar report here:
Created attachment 515983 [details]
a python listener macro that monitors A1 and adds/removes sheets based on its values
It works fine with the following change:
@@ -19,7 +19,7 @@ class myListener(XModifyListener, unohel
self.formerVal = numOfSheets-1
def modified(self, oEv):
- currentVal = oEv.Source.Value
+ currentVal = int(oEv.Source.Value)
self.diff = currentVal - self.formerVal
if (self.formerVal < self.minSheets) and (currentVal < self.minSheets):
sheetManager("Reserve", "") #pass
It seems that the type of oEv.Source.Value changed from int to float and range() does not like that. That is actually not an error, as the description of method com.sun.star.sheet.XCell.getValue is "returns the floating point value of the cell." It is even possible that the change was in Python, as the tarball from openoffice.org site contains bundled python 2.3, which might allow floats as arguments to range().
Btw, I do think you should use instance variables for diff and formerVal. | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=726907 | CC-MAIN-2016-30 | refinedweb | 297 | 61.83 |
Pixiv API client.
Project description
python-pixiv: Pixiv API client for moe girls.
- Free software: LGPLv3
- Documentation:.
- Contribute:
Quickstart
Install python-pixiv:
$ pip install pixiv
Login to pixiv:
from pixiv import login pixiv = login('username', 'password')
Save the work from a particular user:
user = pixiv.user(7631951) for art in user.works(): art.save()
See the full documentation for more!
History
0.1.0 (2015-01-20)
- First release on PyPI.
- Basic things like logging in and viewing a list of works a user has created work.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pixiv/ | CC-MAIN-2020-29 | refinedweb | 116 | 60.92 |
Numerically calculating an effectiveness factor for a porous catalyst bead
Posted February 13, 2013 at 09:00 AM | categories: bvp | tags: reaction engineering | View Comments
Updated January 05, 2015 at 09:59 AM.
A mole balance on the particle volume in spherical coordinates with a first order reaction leads to:
with boundary conditions
and
at
. We convert this equation to a system of first order ODEs by letting
. Then, our two equations become:
and
We have a condition of no flux (
) at r=0 and Ca(R) = CAs, which makes this a boundary value problem. We use the shooting method here, and guess what Ca(0) is and iterate the guess to get Ca(R) = CAs.
The value of the second differential equation at r=0 is tricky because at this place we have a 0/0 term. We use L'Hopital's rule to evaluate it. The derivative of the top is
and the derivative of the bottom is 1. So, we have
Which leads to:
or
at
.
Finally, we implement the equations in Python and solve.
import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt De = 0.1 # diffusivity cm^2/s R = 0.5 # particle radius, cm k = 6.4 # rate constant (1/s) CAs = 0.2 # concentration of A at outer radius of particle (mol/L) def ode(Y, r): Wa = Y[0] # molar rate of delivery of A to surface of particle Ca = Y[1] # concentration of A in the particle at r # this solves the singularity at r = 0 if r == 0: dWadr = k / 3.0 * De * Ca else: dWadr = -2 * Wa / r + k / De * Ca dCadr = Wa return [dWadr, dCadr] # Initial conditions Ca0 = 0.029315 # Ca(0) (mol/L) guessed to satisfy Ca(R) = CAs Wa0 = 0 # no flux at r=0 (mol/m^2/s) rspan = np.linspace(0, R, 500) Y = odeint(ode, [Wa0, Ca0], rspan) Ca = Y[:, 1] # here we check that Ca(R) = Cas print 'At r={0} Ca={1}'.format(rspan[-1], Ca[-1]) plt.plot(rspan, Ca) plt.xlabel('Particle radius') plt.ylabel('$C_A$') plt.savefig('images/effectiveness-factor.png') r = rspan eta_numerical = (np.trapz(k * Ca * 4 * np.pi * (r**2), r) / np.trapz(k * CAs * 4 * np.pi * (r**2), r)) print(eta_numerical) phi = R * np.sqrt(k / De) eta_analytical = (3 / phi**2) * (phi * (1.0 / np.tanh(phi)) - 1) print(eta_analytical)
At r=0.5 Ca=0.200001488652 [<matplotlib.lines.Line2D object at 0x114275550>] <matplotlib.text.Text object at 0x10d5fe890> <matplotlib.text.Text object at 0x10d5ff890> 0.563011348314 0.563003362801
>>IMAGE.
The effectiveness factor is the ratio of the actual reaction rate in the particle with diffusion limitation to the ideal rate in the particle if there was no concentration gradient:
We will evaluate this numerically from our solution and compare it to the analytical solution. The results are in good agreement, and you can make the numerical estimate better by increasing the number of points in the solution so that the numerical integration is more accurate. is a good thing you can figure this out numerically!
Thanks to Radovan Omorjan for helping me figure out the ODE at r=0!
Copyright (C) 2015 by John Kitchin. See the License for information about copying.
Org-mode version = 8.2.10 | http://kitchingroup.cheme.cmu.edu/blog/2013/02/13/Numerically-calculating-an-effectiveness-factor-for-a-porous-catalyst-bead/ | CC-MAIN-2017-39 | refinedweb | 551 | 59.6 |
Clopper Almon
Feb 1999 Version
1. Basic Syntax.......................................................................................................................1
2. Pointers and Dynamic Space Allocation............................................................................7
3. Structures..........................................................................................................................11
4. Classes..............................................................................................................................14
5. Constructors and Destructors...........................................................................................18
6. Overloading Operators and Friends..................................................................................24
There are plenty of fat books that explain C and C++ programming in hundreds of pages. In
fact, however, some 30 pages are sufficient to explain them to anyone who has programmed in
other languages such as FORTRAN or BASIC. This introduction does just that. Examples are
very important in teaching programming, and every concept is illustrated. At the end of each
section, a working program shows the concepts in use and exercises call for the application of
the concepts to extend the program.
1. Basic Syntax
1. C and C++ are free-form; anywhere that a blank may appear, any number of blanks or new
line characters may appear. This fact is used to give programs indentation which makes
them easy to read. The indentation, however, has no effect on the operation of the program.
They are case-sensitive: x is not the same as X.
2. Every expression statement ends with a ";" . Examples:
x = 1;
printf("Hello, World!");
Usually expression statements are assignments (like the first example), or function calls (like
the second).
3. A single character is represented thus: 'a'. This is one byte.
4. A zero-terminated string is enclosed in " ". For example: "Smith" or "a" . This latter is two
bytes: a0 . C provides a number of functions for working with such strings. Four
commonly used ones ares:
strcpy(a,b) string copy: copies string b to string a.
strlen(a) string length: returns the length of the string, the number of bytes in the
string, not counting the 0 at the end.
1
strcmp(a,b) string compare: returns 0 if a and b are the same string.
strcat(a,b) string concatenation: tacks string b onto the end of string a, making one
string.
5. Anywhere that one statement is called for, a group of statements enclosed in { } can be used.
A ";" is not required after the "}" except after the struct or class keywords.
6. The following logical operators are recognized:
&& and || or < less than
> greater than < = less than or equal >= greater than or equal
== equal ! not != not equal
Note in particular that the logical equals is TWO = signs written together: == .
7. Subscripts are shown by [ ].
8. "If" statements have the form:
if(logical expression) statement
For example:
if(a == b){
x = z;
x = 0;
}
To illustrate point 1, this code could also be written:
if(a==b){x = z; x=0;}
2
x += y is the same as x = x + y
x -= y is the same as x = x - y
x *= y is the same as x = x*y
x /= y is the same as x = x/y.
10. Looping may be done by while, for, or do...until statements. The format of the for
statement is:
for(initial; while; increment) statement
For example:
for(i=1; i <=n; i++){
x[i] = y[i];
y[i] = 0;
}
For while, the format is
while(logical expression) statement
For example, the same thing that was done with the for statement above could be done with
a while loop as follows:
i = 0;
while( i <= n){
x[i] = y[i];
y[i] = 0;
i++;
}
The last two statements could be compressed to
y[i++] = 0;
since the incrementing of i is done after the use of i as a subscript. This sort or coding,
however, is not recommended because it tends to be hard to read.
11. A comment may be shown by enclosing the comment in /* ... */. For example:
/* This kind of comment can extend over many lines or be at the beginning of a line
*/
In C++ but not in C, a comment extending to the end of a line may be shown by just a // followed
by the comment. For example:
3
i++; // This is the same as i = i+1.
13. Labels, comparable to statement numbers in Fortran, and the goto keyword are used in this
way:
top: x = y;
....
goto top;
14. The statements break and continue can occur only inside loops. The break statement
breaks out of the present loop. The continue statement jumps to the next iteration of the
loop. Note that C's continue is radically different from the Fortran CONTINUE.
15. When calling a function, C passes a copy of the value of an argument whereas Fortran
passes the pointer to the argument. In a Fortran program, it is therefore very dangerous to
modify the value of, say, an integer that was passed to it. In C, that modification causes no
problem. It is passing arguments by value rather than by reference that makes C recursive,
that is, that give it the possibility for a function to call itself. (The G regression program
uses this feature to simplify greatly the programing of the evaluation of the right side of f
commands.) If it is desired to pass the pointer to something, then the argument should be
the pointer.
Here is a simple program for practice in reading C. You will find it in the rolodex.cpp file.
Although it uses many of the basic C control statements, it is fairly close to Fortran or Basic
in its logic. In the following sections, we will expand it and make it more C-like and finally
C++ like.
4
/* The Rolodex Program -- First Version************************************\
This program works like a Rolodex file. It asks the user "Whom do you want?" and looks up
the answer in a file of names and addresses. If it finds the name, it displays the line in
the file which begins with that name. It begins by asking the user to supply the name of
the data file. Each line in the data file should begin with a name followed by a comma.
\***************************************************************************/
#include <stdio.h> /* The #include directs the compiler to include the "header" file
stdio.h which defines for the compiler the "standard input -output"
functions, including printf, which is used below. The <> tells the
compiler to look in its own "include" directory to find this file.*/
#include <string.h> /* Similarly, for string.h, which is needed for all the string
functions. If the compiler complains that it has no prototype for
the function you have used, look up the function in the Library
Reference manual. It will show what what header file you need.*/
void main() // The name of the main program is always main.
{
FILE *fp; //fp will be a "file pointer", C's identification for a file
char filename[40],name[40],person[120],him[40],found;
int go, i;
5
Exercises
1. Compile, link, and run rolodex. If you machine has be set up properly, these commands will
do the compiling and linking:
cp rolodex
m rolodex
Steal a look at rolodex.dat so you will know whom you can find. Then start the program with
rolodex
When asked "Filename:", reply "rolodex.dat".
2. If there are two people by the same last name, this program will find only the first. Make
the program ask "Is this the right one?" and if the answer is no, make it keep looking. Use
the getch() function. Look it up in the Library Reference.
3. Make the program accept either a comma or a space as terminating the name in the input
file.
6
2. Pointers and Dynamic Space Allocation
Our rolodex.cpp has to read the whole file every time it wants to find someone. A
smarter program would read it all once, extract the names, put them in an index kept in RAM,
and record with each the location in the file of the first byte of that line. Then when the user asks
for someone, the program just looks through the index and, if it finds the name in question,
positions the file to the beginning of that line, reads it and displays it.
Now some names are long like Krollpfeiffer while others are short like Ma. In our index
we would like to use just as many characters as necessary to store the name, no more, no less.
To do so, we must allocate space for storing the name after we know how long it is. That is, we
must allocate the space dynamically, after the program is running. This we will do with the
"new" command of C++. (In C, the same thing was done a little less elegantly by the malloc
function, which still works in C++.)
The other new idea in this section is that of pointers. There are two aspects of any number
or letter used in a program: (1) where it is stored, its address and (b) the value that is stored there,
its content. In the code
int x;
x = 2;
the second line makes the content of x equal to 2. Often it is enough to deal only with contents,
but sometimes, as we shall see, it is convenient to know the address of x. In C, the address of
any item is given by putting a & in front of it. Thus &x is the address of x; in C jargon, &x is the
pointer to x. Conversely, if name is a pointer, then *name is the content of what it points to. In
particular, *&x is the content of x. Now when an array is declared, for example, by
char name[40],
name itself is a pointer to the first byte in this array. Thus, name is exactly the same as
&name[0]. We can say that name is a pointer to a character. An almost equivalent alternative to
the above declaration would be
char *name;
name = new char[40];
The first line declares name to be a pointer to a character. The notation is intended to be
mnemonic; put a * in front of name and you have a character. The second line grabs 40 bytes
from "heap" memory, assigns them to this program, and sets name to point to them.1
Now what we need is not just space for one name but space for a large number. We need
an array of names. Let us use maxrolo as a maximum number of names in our rolodex and set it
equal to 1000. Then we need something like this near the beginning of our program:
1 The only difference between the effect of these two lines and the "char name[40];"
declaration is where the memory is allocated. If the declaration occurs within a program, as in
the first case, the memory is allocated on the "stack", which is where there are stored values of
local variables and information about where functions should return control when they finish.
The second takes memory from the operating systems general bank of memory.
7
const int maxrolo = 1000;
char **names;
names = new char*[maxrolo];
In the first line, the "const" protects the value of maxrolo from inadvertent change in the
program. The second line says that names is to be a pointer to pointers to characters. The third
line grabs enough space for 1000 pointers to characters and sets names to point to the beginning
of this space. Then as we read through our rolodex data file the first time and have the name
from the kth line in the array "him", we just do
len = strlen(him);
names[k] = new char[len+1];
strcpy(names[k],him);
The middle line grabs space for len+1 characters and sets names[k] to point to the beginning
of it. We need len+1, not len, to allow space for the zero at the end of the string.
With the names taken care of, we can quickly attend to the remaining matter, the position
in the file of the beginning of the corresponding line. The ftell function tells us where a file is
at any time. We store these positions in an array of unsigned long (four-byte) integers. We will
also put this array on the heap, so the program is something like this
unsigned long *positions;
...
positions = new unsigned long[maxrolo];
...
Just before the kth read of the fp file, we do
positions[k] = ftell(fp);
When we are looking for a name and have found that it is the kth one, we need to put the file
back where it was just before reading that kth line. The statement is just
fseek(fp,positions[k],0);
The final 0 argument means to position from the beginning of the file.
We now have all the elements we need. Here is the new version of the program modified to use
pointers and dynamic allocation.
8
/* The Rolodex Program with an index*/
#include <stdio.h> // for printf()
#include <string.h> // for strcmp()
#include <conio.h> // for getch()
void main()
{
FILE *fp;
char filename[40],name[40],person[120],him[40],found;
int go, i, n, k, len;
char **names,c;
unsigned long *psn;
const int maxrolo = 1000;
query: printf("Filename:");
gets(filename);
if((fp = fopen(filename,"rt")) == 0){
printf("Cannot open %s.\n",filename);
goto query;
}
// Make the index
k = 0;
while(k < maxrolo){
psn[k] = ftell(fp);
if(fgets(person,120,fp) == NULL) break;
for(i=0;i<40;i++){
if(person[i] != ',' && person[i] != ' ') him[i] = person[i];
else break;
}
him[i] = '\0';
len = strlen(him);
names[k]= new char[len+1];
strcpy(names[k],him);
k++;
}
n = k;
go = 1;
9
while(go == 1){
printf("\nWhom do you want?");
gets(name);
if(strcmp(name,"q") == 0) break;
search:
for(k= 0; k < n; k++){
if(strcmp(name,names[k]) == 0){
fseek(fp,psn[k],0);
fgets(person,120,fp);
printf("%s\n",person);
printf("Is this the right one? (y or n):");
c = getch();
printf("%c\n",c);
if(c == 'n') continue;
break;
}
}
if(k == n) {
printf("%s is not in the Rolodex.\n",name);
continue;
}
}
}
10
3. Structures
The arrays which we have used so far, and which are characteristic of Fortran programs,
are collections of similar items. We have seen:
char filename[40]; // an array of characters
char **names; // an array of pointers to characters
unsigned long *positions; // an array of unsigned long integers.
We have not used but you can readily imagine how to use
float x[40]; // an array of 4-byte floating point numbers
int years[30]; // an array of 2-byte integers
double **a; // a matrix of 8-byte floating point numbers.
It frequently happens, however, that we would like to have unlike elements grouped
together. Thus, we may have a data bank with various sorts of information on individuals. On
each individual, we might have
name a character string
date of birth an array of three integers
income a floating point number
and so on. C provides a device for grouping together such diverse pieces of information. It
is known as a structure. Structures are useful for two reasons. First, one can pass all information
about an individual to a function (or subroutine in Fortran terms) by just passing a pointer to the
structure. In Fortran, if you want a function to work on a matrix, you have to pass to the function
the pointer to the rectangular array and the pointer to the number of rows and the pointer to the
number of columns. (Yes, you used pointers in Fortran and never knew it.) In C, you just put
these three elements into a structure and pass the pointer to the structure. Thus the calls to
functions are much simplified. Second, the main thing that C++ did was to generalize the C
structure just slightly. The whole power of "object oriented programming" is achieved through
these structures.
We will now re-write rolodey.cpp into rolodez.cpp to use a structure. We need a very
simple one. For each line in the data file, we just put together the items we need for the
indexing. We will call our structure a "line". Here is the code that defines its contents.
struct line{
char *name;
unsigned long position;
};
Note the ; after the }. This is the one place that it is necessary. Once "line" has been
defined in this way, it is just as much a data type as int, float, char, and so on. We put the
11
definition of the structure above the main() line so the compiler knows what a "line" is when it
encounters the following declaration.
line *Lines;
Lines = new line[maxrolo];
Here Lines is an array of pointers to line structures. The "new" statement grabs enough
space for maxrolo (1000) of these pointers and sets Lines to point to the beginning of that space.
Note that at this point, no space has been grabbed for the content of the name in each line, only
for the pointer to that content. Now as we read the rolodex data file, we fill in the position and
name for each line. Note how the elements of the structure are referred to by the pointer to the
structure followed by a "." followed by the name of the element. The part of the program which
reads the file becomes the following.++;
}
The part of the code that looks up the name the user has given now has these two lines:
if(strcmp(name,Lines[k].name) == 0){
fseek(fp,Lines[k].position,0);
In this simple example, I would not claim that the use of the structure offers much
advantage. The point has been, rather, to show the mechanics of using a structure in an example
that is simple enough to do without it. The full code for this third stage of the rolodex program,
rolodez.cpp, follows.
/* The Rolodex Program with a structure*/
#include <stdio.h> // for printf()
#include <string.h> // for strcmp()
#include <conio.h> // for getch()
struct line {
char *name;
unsigned long position;
}; // note the ; here.
12
void main()
{
FILE *fp;
char filename[40],name[40],person[120],him[40];
int go, i, n, k, len;
char c;
const int maxrolo = 1000;
line *Lines;
Lines = new line[maxrolo];
query: printf("Filename:");
gets(filename);
if((fp = fopen(filename,"rt")) == 0){
printf("Cannot open %s.\n",filename);
goto query;
}
// Make index++;
}
n = k;
go = 1;
while(go == 1){
printf("\nWhom do you want?");
gets(name);
if(strcmp(name,"q") == 0) break;
search:
for(k= 0; k < n; k++){
if(strcmp(name,Lines[k].name) == 0){
fseek(fp,Lines[k].position,0);
fgets(person,120,fp);
printf("%s\n",person);
printf("Is this the right one? (y or n):");
c = getch();
printf("%c\n",c);
if(c == 'n') continue;
break;
}
}
if(k == n) {
printf("%s is not in the Rolodex.\n",name);
continue;
}
}
}
13
4. Classes
The most significant advances of C++ over C lie in the expansion of the structure concept.
They are:
to allow functions to be part of the structure.
to allow operators such as +, -, *, and = to be overloaded so that they apply to structures
for which the programmer has defined them.
to make it possible to derive one structure from another so that the derived structure has
all the elements of the original plus some of its own.
to make it possible to restrict access to some elements of a structure.
The first of these will be illustrated with the rolodex example. The others are abundantly
illustrated in the Beginners' Understandable Matrix Package, BUMP, whose study should follow
that of this brief introduction.
Structures with these characteristics are often called classes, and C++ has introduced the
rather redundant keyword class for such a structure. The difference between a structure and a
class lies in the last of the four points and indeed only in the default accessibility of its elements.
In a structure, all elements are by default accessible from any part of the program, although the
programmer can explicitly restrict access; in a class, they are all restricted to "private" by default
but the programmer can explicitly make them accessible. The purpose of the final point, in case
it seems rather strange, is to facilitate the division of labor where a number of programmers are
working on one project. The programmer working on a particular structure commits to giving it
certain functions that programmers working on other parts can depend upon. The internal
working of the structure, however, she is free to change anyway she likes without
inconveniencing her colleagues. It is obviously rather difficult to demonstrate the usefulness of
the feature in small programs.
Instances of a class are called objects. The rolodex program with an object, rolodaze.cpp,
augments the structure definition at the top as follows:
struct line {
char *name;
unsigned long position;
void load(char *who, unsigned long psn);
char check(char *who);
}; // note the ; here.
Here, load is a function which will allocate space for the name, copy the name to that space,
and store the position number. Here is its code.
14
void line :: load(char *who, unsigned long psn){
int len;
position = psn;
len = strlen(who);
name = new char[len+1];
strcpy(name,who);
}
Note the way it is identified in the first line as a member function of the line structure.
Also note that, as a member of the line structure, it is on a first-name basis with the other
elements of the structure. It can refer to "position" and "name" without having to precede these
names with a reference to the structure.
The check function is used to compare the name requested by the user with the name in
this line and, if a match is found, to read the line from the rolodex data file and display it. It
returns 'y' if it finds a match and 'n' otherwise. Here is the code.
With this work pushed into the structure, the main program is briefer although the total
length is greater. Here is rolodaze.cpp.
15
/* The Rolodex Program with an Object */
#include <stdio.h> // for printf()
#include <string.h> // for strcmp()
#include <conio.h> // for getch()
struct line {
char *name;
unsigned long position;
void load(char *who, unsigned long psn);
char check(char *who);
}; // note the ; here.
FILE *fp; // This has been moved outside any program to make fp global, accessible from
anywhere.
void main()
{
char filename[40],name[40],person[120],him[40],found;
int go, i, n, k;
char c;
line *Lines;
unsigned long psn;
const int maxrolo = 1000;
query: printf("Filename:");
gets(filename);
if((fp = fopen(filename,"rt")) == 0){
printf("Cannot open %s.\n",filename);
goto query;
}
16
while(go == 1){
printf("\nWhom do you want?");
gets(name);
if(strcmp(name,"q") == 0) break;
for(k= 0; k < n; k++){
if(Lines[k].check(name) == 'y'){
printf("Is this the right one? (y or n):");
c = getch();
printf("%c\n",c);
if(c == 'n') continue;
break;
}
}
if(k == n) {
printf("%s is not in the Rolodex.\n",name);
continue;
}
}
}
position = psn;
len = strlen(who);
name = new char[len+1];
strcpy(name,who);
}
17
5. Constructors and Destructors
The objects in rolodaze.cpp were still simple enough that we did not have to construct or
destroy them. Let us now make a larger object which we shall call rolo, an instance of a class
called Rolodex. With this object, we can reduce the main program to
void main(){
char filename[40],name[40];
query: printf("Filename:");
gets(filename);
Rolodex rolo(filename);
Note that in the first line in bold print we declare that rolo is an object of type Rolodex. In
the second, we call the "find" function of this object. We want a Rolodex object to construct
itself when declared, that is, we want it to open the file whose name was passed to it, read this
file, and construct a rolodex index such as we have been using. Obviously, so complicated a
constructor has to be specially written. Likewise, when we are through with an object, it is
important to be able to destroy it, that is, to free up the RAM it is occupying so that it can be
used again, if need be, later in the program. The definition of the Rolodex structure must show
that it has these various functions. Here is that definition of both the line and the Rolodex
structures.
18
struct line {
private:
char *name;
unsigned long position;
public:
line(){name=0;position= 0;}
void load(char *who, unsigned long psn);
char check(char *who, FILE *fp);
~line();
};
struct Rolodex{
private:
FILE *fp;
line *Lines;
int maxrolo,nlines;
public:
Rolodex(char *filename);
~Rolodex();
void find(char *name);
};
In the Rolodex function, the three functions we have mentioned appear below the line
"public:". That line makes those functions accessible from anywhere in the program, whereas
the elements defined to be "private" are accessible only to these three member functions and
other functions explicitly declared to be "friends" of the structure. This structure has no friends;
in BUMP we will see structures with lots of friends. A constructor always has the same name as
the structure, so Rolodex(char *filename) is the constructor. A structure may have several
constructors if they are distinguishable by the number and type of arguments which they have.
The destructor always has as a name the name of the structure preceded by a ~. Thus,
~Rolodex() is the destructor. There is never more than one destructor and it has no arguments.
Constructors and destructors cannot have return values, so their names are not preceded by a
return type in the structure definition. Because one and the same main program might now
conceivably have several Rolodex objects, we have put fp, maxrolo, and nlines (the number of
lines in the data file) into the Rolodex structure. Since the constructor cannot return a value to
let the calling program know if it had trouble, a global variable, RoloOpen, has been introduced
to allow it to communicate with the program which calls it. Here is the new part of the Rolodex
constructor. Putting in the now-familiar code for making the index has been left as an exercise.
19
Rolodex::Rolodex(char *filename) {
int k,i;
char person[120], him[40];
unsigned long psn;
maxrolo = 1000;
Lines = new line[maxrolo];
if((fp = fopen(filename,"rt")) == 0){
printf("Cannot open %s.\n",filename);
RoloOpen = 'n'; /* This round-about communication is necessary
because a constructor cannot return a value. */
return;
}
RoloOpen = 'y';
// Make the index
k = 0;
while(k < maxrolo){
/* exercise */
}
nlines = k;
}
Now we have to deal with the destructor. Here it is. The necessary points are noted in the
Rolodex::~Rolodex(){
delete [] Lines; // the [] is required to show that Lines is an array.
fclose(fp); /* Since fp was opened in the constructor, it must be
closed in the destructor. */
}
When the program hits the line "delete [] Lines" it will recall that is has maxrolo "line"
objects in Lines, and will call the destructor for line that many times in inverse order. That is, it
will delete line maxrolo-1 first and line 0 last. The destructor for a line object is
line::~line(){
if(name != 0)
delete [] name;
}
Note the test on the value of name . Unfortunately, it is very destructive to "delete"
something which has not been assigned with a "new" or something which has already been
20
deleted. Look back now at the definition of the line structure. Its constructor was so short and
simple that it could be written in the definition without cluttering it up. It was just
line(){name=0;position= 0;}
Note, however, that it assures us that name is initially equal to 0 so that the test in the
destructor will work correctly.
It is clear when a constructor is called. When is the destructor called? In C++ jargon, the
answer is When the object goes out of scope. But what does that mean? In the simplest case,
which is all we shall deal with, it means that if the object was declared as a local variable in a
function, its destructor is called when that function is completed and returns. To check that our
destructor is working correctly, we will put the declaration of the Rolodex in a function called
sub(). We will check the free core left before we call sub and again after it returns. If we get the
same answers both times, we know that our destructors are working correctly.
// Function prototypes
void sub(void);
// Global variables
char RoloOpen;
void main(){
unsigned long core;
core = coreleft();
sub();
printf("Original core left: %ld\n",core);
core = coreleft();
printf("Final core left: %ld\n", core);
}
21
void sub(void){
char filename[40],name[40];
query: printf("Filename:");
gets(filename);
Rolodex rolo(filename); // This is a declaration, just as is the "char" line.
if(RoloOpen == 'n') goto query;
printf("Intermediate core left: %ld\n", coreleft());
C++ insists upon knowing the format, or prototype, of each function before that function
is used. Most of these prototypes have been provided in the structure specification. The function
sub(), however, is not part of any of these so it must have its own prototype statement. It differs
from the first line of the declaration of the function by ending in a ";". Here it is.
// Function prototypes
void sub(void);
With these components, you should be able to put together the final version of the rolodex
program, complete with constructors and destructors. It is called roldover.cpp on the disk, but it
would be a good exercise for you to write it from what has been given here.
Once it works properly, try removing the fclose from the Rolodex destructor. Do you get
back all your core?
Exercise
Create a Vector structure which has the following definition.
struct Vector{
private:
int n; // number of elements
float *v; // the elements
public:
Vector (char *filename); // Read the vector from the named file text file.
22
~Vector(); // Destructor
int show(char *title, int FieldWidth = 8, int DecimalPlaces = 2);
// Display the vector on the screen
float sum(); // return the sum
float enorm(); // Return the Euclidian norm (Square root of sum of squares)
float lnorm(); // Return the l-norm. (sum of absolute values)
float mnorm(); // Return the m-norm. (max absolute value)
};
You should write all the functions and verify that they work. You may choose any format
you like for the text file from which you read the vectors. Perhaps the easiest format for which
to program is to put the number of elements on the first line and the elements on the following
lines, one per line. You may use the function atof() to convert text strings to floats. Read about
atof in the compiler help files.
23
6. Overloading Operators and Friends
The Vector structure that you wrote in the last exercise was fine as far as it went, but
perhaps it occurred to you that the Vectors were lonely. There was no way for them to interact
with one another. It should be possible to add Vectors together or to find the angle between two
of them. In this section, we show how to overload the + and = operators so that one can write
code such as
Vector a("a.vec"),b("b.vec"),c(4);
c = a + b;
c.show("C = A + B");
To do so, we will need to expand the definition of the Vector class to the following.
struct Vector{
private:
int n; // number of elements
float *v; // the elements
char temp; // y if the vector has been created by an operator
void freet(){if(temp == 'y') freeh();}
public:
Vector(char *filename); // Read the vector from the named file text file.
Vector(int n, char temporary = 'n');
Vector(Vector& a); // Copy constructor
~Vector(); // Destructor
void Vector::show(char *title,int FieldWidth=8, int DecimalPlaces=2);
float sum(); // return the sum
float enorm(); // Return the Euclidian norm
float lnorm(); // Return the l-norm. (sum of absolute values)
float mnorm(); // Return the m-norm. (max absolute value)
void freeh(); // Free the heap memory.
Vector& operator = (const Vector& a);
float& operator [](const int i);
friend Vector operator + (const Vector &a, const Vector &b);
};
The new elements have been shown in bold type. The simplest is the "temp" character.
Why is it necessary?
Consider the problem of adding three vectors:
d = a + b + c.
The computer will first have to add a and b to get an intermediate result. Then to this
intermediate result, it must add c. Then it should throw away the intermediate result. If it does
24
not throw it away, the computer's RAM will soon become clogged with these intermediate
results and the program will grind to a halt. Thus, operators like + will need to create temporary
vectors to hold the intermediate results. Timely disposal of these intermediate results is the
trickiest part of achieving the goals of this section. This "temp" flag will be used by the
operators to indicate that the vector is such an intermediate product and can -- and must -- be
thrown away when it is no longer needed.
Now let us turn to the first of two new constructors. It just constructs a vector of n
elements and sets the temp element to the letter passed by the call. Writing it can be left as a
exercise. If one calls the constructor by, say, just "Vector c(4);", the default value of the temp
flag, n, will be used. On the other hand, this same constructor handles a declaration like "Vector
c(4,'y');" within an operator to create an intermediate Vector with the temp set to 'y'. Writing this
constructor is simple enough to be left as an exercise.
The next item to considered is the "friend" function that overloads the operator +. A
function declared within the definition of the structure to be a friend can access the private
elements of the structure. Here is the code for this function.
Vector operator + (const Vector& a, const Vector& b){
int i;
if(b.n != a.n){
printf("Vector dimensions do not match in + operator.\n");
exit(1);
}
Vector Temp(a.n,'y');
for(i = 0; i < a.n; i++)
Temp.v[i] = a.v[i] + b.v[i];
a.freet();
b.freet();
return (Temp);
}
After checking that the dimensions match, a vector called Temp is constructed with the right
number of elements and marked as temporary. Now the way that the + operator works makes the
vector on the left of the + the first argument and the vector on the right the second argument.
The &'s in the declaration of the function means that these vectors will be passed to the function
by reference, not by copying. The for loop adds the two vectors together and puts the sum in
Temp. The keyword const in the declaration of the function is a compiler assistant. It tells the
compiler that the following argument is not changed by the function and allows faster
compilation. It should be unnecessary. Unfortunately, the Borland Builder C++ compiler has
made const mandatory in some contexts, even where the argument is not constant. However,
changing a variable declared as const produces only a compiler warning and the code works
correctly, while omitting it in these contexts produces a compiler error, and nothing works. All
of the code in these notes previously worked without const.
We have just seen a temporary vector created by an operator. A little thought about how
25
you would use temporary scratch paper if you were adding vectors by hand should soon
convince you that no temporary is ever used more than once. Hence, we check to see if a was a
temporary, and if so we delete it, and likewise for b. These checks are the "freet" calls. (Freet is
short for Free if Temporary.) The code for freet was given "in line" in the definition of the
structure. That code used the function freeh(), for free heap memory, which is defined as
follows:
void Vector::freeh(){
if(v != 0 ) delete v;
v = 0;
}
If the pointer to the array in the vector is not zero, this frees the heap memory assigned to it and
sets the pointer equal to 0.
Now finally note that operator + returns the Temp vector it has just created. But Temp
was created in this function and therefore must be destroyed when the function "goes out of
scope" or "returns". Now the compiler cannot both return Temp and destroy it, so what does it
do? It first makes a copy using the copy constructor, the one whose argument is just a pointer to
another object of the same type. Then it destroys the original of Temp. We can take advantage
of this knowledge in writing the copy constructor. If we see that the object being copied is a
temporary, the copy constructor can just steal the heap memory that belonged to the object being
copied, for we can be sure that the temporary is headed straight for the destructor. This theft
prevents the heap from becoming fragmented. If we had allocated new memory on the heap for
the copy, it would have been above the memory for the original temporary, so when this latter
memory was freed, we would have had a hole in the heap. While such a hole is not necessarily
the end of the world, it is certainly better to have a compact heap, for any one call of the "new"
command can only allocate one continuous chunk of contiguous memory. If the heap becomes
fragmented, "new" may not be able to find enough space all in one piece even though it available
in scattered parcels. Here is the code for the copy constructor and for the destructor that keeps
the memory compact.
// The Copy constructor
Vector::Vector(Vector& a) {
int i;
n = a.n;
temp = a.temp;
26
else{
if((v = new float[n]) == 0){
printf("Out of memory trying to create vector.\n");
exit(1);
}
for(i = 0; i < n; i++)
v[i] = a.v[i];
}
}
// The destructor
Vector::~Vector(){
if(v != 0 ) // check to see if it has been robbed.
delete v; //if not, delete it.
v = 0;
}
The meaning of the operator & in function definitions requires some explanation. In the
declaration of the copy constructor,
Vector::Vector(Vector& a),
it simply meant that it was sufficient to pass just a reference to a to the routine; it was not
necessary to make a separate copy of a for the purpose. The matter is somewhat more
perplexing when we come to write the overloading of the [] and = operators for Vectors. We
want to define [ ] so that if b is a Vector, we can write b[i] for the ith element of b on either the
right or the left side of an equal. Now note that in a state as simple as
x = x+1;
what the compiler does with the x on the right of the = is totally different from what it does on
the left. On the right, it takes data from "x" while on the left, it puts data into "x". An expression
which can be used in this way on either side of an = but with a very different meaning on the left
side is called an lvalue, meaning something that can be used on the left of an = sign. In the
example, the expression x is an lvalue. The expression x+1 is not an lvalue; it can only be
used on the right side of the = sign. Clearly, we want the [] function to return an lvalue value so
that we can use b[i] on either side of an = sign.
We get this lvalue by a & in the declaration of the function that overloads [] thus:
float& Vector::operator [] (const int i)
What is returned? NOT a pointer to a float, but an "lvalue" for the float. Of course, what is
returned must itself be an lvalue. The return statement in this function is
return (v[i]);
and v[i] is certainly an lvalue. But it requires the & in the return type of the function to
27
preserve the lvalue character of what is returned. The complete code for the [] operator, which
also checks that the subscript is in range, is as follows.
The code for the = operator also uses the & to indicate that what is returned is an lvalue,
in this case the lvalue of the Vector on the left of the = sign. The overloading of the = operator
can only be a member function, not a friend. In it, the Vector a, the argument, is the vector on
the right side of the = sign. The Vector on the left is the present Vector. Here is the code.
Vector& Vector::operator = (const Vector& a){
int i;
int s = (n < a.n) ? n : a.n;
for(i = 0; i < s; i++){
v[i] = a.v[i];
}
a.freet();
return(*this);
}
The code clearly copies the a Vector into the present vector and frees a if it is a
temporary. The word this is a keyword in C++ and is a pointer to the present instance of the
structure. We do not, however, want to return a pointer to the present vector but a reference to it.
Thus we return *this, not this. The & in the return type of the function ensures that we get an
lvalue, which will make the compiler happy. If the explanation of why the [] and = operators are
written the way they are seems to you a little strained, I fully agree. The fat books on C give
next to no explanation but only an example. The const keyword in the declaration will cause a
compiler warning, but if omitted, there is an a compiler error. The code seems to work fine.
Surely requiring the const is a bug in the compiler.
Exercise
Expand the vector structure so that x - y, x*y, a*x, and x/a are defined, where x and y are
Vectors and a is a float. For x*y, use the "dot" or "inner" product definition. In your test
program, try such expressions as a*(x -y) or (w-x)*(y - z).
28 | https://id.scribd.com/document/352502841/Cp-Programming | CC-MAIN-2019-35 | refinedweb | 6,996 | 71.44 |
Apple consistently marks "their", "there", "it’s" and several other similar common words as misspelled in all of my apps. Why is this happening and how do I prevent it?
Tag: common
c++ – Compare folders and find common files
I found this Powershell command useful in comparing folders and find common and different files.
Since I really like C and C++, I’ve decided to create a program to do that.
It will get all files in 2 folders given as arguments, will store them in an std::map,is that the correct container?
After, it will compare the 2 maps and give the common files.
Some notes:
The findFiles method should benefit from RAII treatment, but since I have ZERO work or internship experience, I am unable to implement that.
Some functions like finding a file size and iterating over a folder are present in C++ 17, but I use Digital Mars, an old compiler not up to date.
I use this compiler because it is small, provided as a compressed folder aka portable in the mainstream lexicon (even though portable means something else) and its use is straightforward.
I used an online code beautifier for indentation.
The sanitizePath method is used to eliminate trailing “/” or “” from the given path.
Please give all your valuable comments on this work.
#include <iostream> #include <iterator> #include <map> #include <string> #include <sys/stat.h> #include <windows.h> #ifndef INVALID_FILE_ATTRIBUTES constexpr DWORD INVALID_FILE_ATTRIBUTES = ((DWORD)-1); #endif bool IsDir(const std::string &path) { DWORD Attr; Attr = GetFileAttributes(path.c_str()); if (Attr == INVALID_FILE_ATTRIBUTES) return false; return (bool)(Attr & FILE_ATTRIBUTE_DIRECTORY); } std::string sanitizePath(std::string const &input) { auto pos = input.find_last_not_of("/\"); return input.substr(0, pos + 1); } std::map<std::string, unsigned long > findFiles(std::string &spath) { size_t i = 1; WIN32_FIND_DATA FindFileData; std::map<std::string, unsigned long > list; std::string sourcepath = spath + std::string("\*.*"); HANDLE hFind = FindFirstFile(sourcepath.c_str(), &FindFileData); if (hFind != INVALID_HANDLE_VALUE) do { std::string fullpath = std::string(spath) + std::string("\") + std::string(FindFileData.cFileName); if (*(fullpath.rbegin()) == '.') continue; else if (FindFileData.dwFileAttributes &FILE_ATTRIBUTE_DIRECTORY) findFiles(fullpath); else { list(FindFileData.cFileName) = FindFileData.nFileSizeHigh *(MAXWORD + 1) + FindFileData.nFileSizeLow; } } while (FindNextFile(hFind, &FindFileData)); FindClose(hFind); return list; } void displayMap(std::map<std::string, unsigned long > &map) { std::map<std::string, unsigned long>::const_iterator itr; for (itr = map.begin(); itr != map.end(); itr++) std::cout << "File Name: " << itr->first << " Size: " << itr->second << " bytes" << std::endl; } std::map<std::string, unsigned long > map_intersect(std::map<std::string, unsigned long > const &source, std::map<std::string, unsigned long > const &dest) { std::map<std::string, unsigned long > inter; std::map<std::string, unsigned long>::const_iterator iter = dest.begin(); std::map<std::string, unsigned long>::const_iterator end = dest.end(); for (; iter != end; iter++) { if (source.find(iter->first) != source.end()) { inter(iter->first) = iter->second; } } return inter; } std::map<std::string, unsigned long > map_difference(std::map<std::string, unsigned long > const &source, std::map<std::string, unsigned long > const &dest) { std::map<std::string, unsigned long > diff = source; std::map<std::string, unsigned long>::const_iterator iter = dest.begin(); std::map<std::string, unsigned long>::const_iterator end = dest.end(); for (; iter != end; iter++) { if (source.find(iter->first) != source.end()) { diff.erase(iter->first); } } return diff; } int main(int argc, char **argv) { if (argc <= 2) { std::cerr << "No path or filename provided" << std::endl; return EXIT_FAILURE; } const char *source = argv(1); const char *destination = argv(2); if (!IsDir(source)) { std::cerr << "Source path doesn't exist" << std::endl; return EXIT_FAILURE; } if (!IsDir(destination)) { std::cerr << "Destination path doesn't exist" << std::endl; return EXIT_FAILURE; } std::string spath = sanitizePath(source); std::string dpath = sanitizePath(destination); std::cout << "Comparing " << spath << " and " << dpath << std::endl; std::map<std::string, unsigned long > slist, dlist, ilist, diflist; slist = findFiles(spath); dlist = findFiles(dpath); ilist = map_intersect(slist, dlist); diflist = map_difference(slist, dlist); if (ilist.empty()) std::cout << "There is no common files" << std::endl; else { std::cout << "The common files are" << std::endl; displayMap(ilist); } if (diflist.empty()) std::cout << "The 2 folder are the same" << std::endl; return EXIT_SUCCESS; }
code quality – Is it common to have to iterate on a design due to overlooking problems with it?
Iterating a through multiple versions of a design is a great thing to do! It is rare to create a design that has all the good properties at the first try. As software engineers, we should be humble and accept that we will make mistakes or overlook things. It is arrogant to think that you can create good design at your first try.
But as you say, it can be exhausting to work on same piece of code for prolonged period of time. But there might be practices and disciplines that make it more bearable.
Test automation, preferably TDD
This this the one discipline that enables us to actually change the design. By having solid and reliable suite of automated tests, the design can be changed drastically without fear of breaking existing functionality. It is that fear which is most exhausting.
Doing TDD also makes it more likely that you create working and ‘good enough’ design at your first try. This design then requires only small improvements to push it into greatness.
Refactoring
Instead of focusing on changing the whole design, focus on small problems and fix those. Fixing many small problems, will result in big changes in overall design. Making small changes is less mentally exhausting as you get feedback about your design sooner and you can stagger your attention between multiple designs, slowly improving all of them.
Good vs. Perfect
The saying ‘Perfect is the enemy of good.’ comes to mind here. Knowing when to stop trying to improve the design is learned skills. If the design is being used and changed, then you will have lots of small oportunities to improve the design, so you don’t have to invest all that time in the beginning. As long as you follow Boy Scouts rule of ‘Always leave code cleaner than you found it.’, then the design will improve over time.
object oriented – What does “common interface” mean in OOP?
I have seen the term “common interface” used a lot while reading books about OOP.
For example, the book The Essence of Object-Oriented Programming with Java and UML says the following:
Abstract classes usually define a common interface for subclasses by
specifying methods that all subclasses must override and define
My understanding of the term “common interface” is the following:
Assume that we have a superclass (or an
interface or an
abstract class) called
Animal and two subclasses called
Dog and
Cat, and
Animal have two virtual methods called
makeSound() and
move().
Now the common interface would be composed of two methods which are
Animal.makeSound() and
Animal.move().
Assume that we have the following code:
Animal animal1 = new Dog(); animal1.makeSound(); animal1.move(); animal1 = new Cat(); animal1.makeSound(); animal1.move();
The explanation of the above code is the following:
Animal animal1 = new Dog() creates an
Animal common interface and associate a
Dog object with it:
animal1.makeSound() sends an
Animal.makeSound() message to the common interface, and then the common interface sends a
Dog.makeSound() message to the
Dog object:
Same thing happens in the case of
animal1.move() (which is the
Animal.move() message is sent to the common interface, etc.).
animal1 = new Cat() removes the
Dog object from the common interface, and associate a
Cat object with the common interface:
animal1.makeSound() sends an
Animal.makeSound() message to the common interface, and then the common interface sends a
Cat.makeSound() message to the
Cat object:
Same thing happens in the case of
animal1.move() (which is the
Animal.move() message is sent to the common interface, etc.).
Am I correct in my understanding?
database design – common columns in all tables in mysql
I want to create a table like base_table with below columns –
id, created_at, created_by.
and for all other tables, I want created_at and create_by columns available through inheritance.
I don’t want to create these common columns in all other tables.
Why is password confirmation common in password resets and updates?
I’ve seen multiple websites with only one field for the password during registration, whereas there are two fields – Enter Password and Confirm Password – for password reset and update tasks.
Why a confirm-password is quite common in password-reset and update password?
I’ve seen multiple websites with only one field for the password during registration, whereas there are two fields password and confirm-password for password-reset and password-update pages.
authentication – Client certificate common name? Subject alternative name?
For an IoT project, I want to secure client server communication. I want both the server (Apache) and the clients identify/authenticate each other (a client won’t communicate with other clients) before clients can post some data.
There is much less information about client certificates. Besides documentations, there are best practices. I would like to know, how to set common name and subject alternative names for clients, as they won’t have a domain name and a fix IP address.
Do I simply tell the server to ignore a mismatch? Can I use a wild card only CN (CN=*)? I also would like the cert to identify specific client. Server needs to be able to tell apart client 1 from client 2, etc…
Thanks!
sony alpha – Tethering DSLR camera to PC via any common WiFi network
I am aware that it is possible to tether a camera to PC via the WiFi network that is created by the WiFi-enabled camera itself. But I want to know if it is possible to tether by connecting both camera and PC to any other common WiFi network.
Specifically, I am using Sony Alpha6400 and qDSLRDashboard as PC client for tethering. I connected the camera to my home WiFi network (to which my PC is connected). But I do not know how to go ahead. qDSLRDashboard does not seem to recognize the camera connected to same WiFi network.
Note: I have not tried this in Sony Imaging Edge. This question is specific to qDSLRDashboard.
Thank you for your answers.
tripod heads – Which are the most common quick release plate systems out there
The camera has a hole in the bottom that will be meant to take either a 1/4-20 UNC or 3/8-16 UNC threaded screw.
Most attachments for this will either come with both types of screws or will necessitate the use of an adapter if what the item comes with is not fit to your camera. This monopod, for example, has a reversible screw for both. Point is, these attachments are standardized so the tripod world is your oyster, so to speak.
The quick release plate will attach to your camera via one of these screws – so you can use the same plate across any of your cameras. Or, if you’re lazy like me, you’ll buy extra plates and just keep ’em on your bodies.
The plate will be designed to fit whatever head you’re using. They could be custom designed for the head or could be something more standard, like the Arca-Swiss style plate.
That being said, I’ve never tried to mix brands and I have heard stories of one brand’s arca-swiss plate not quite meshing well with another’s arca-swiss head, even though those should be universal.
To summarize – because of the universality of the attachment screw, don’t let this impact your tripod head decision. Buy that for the features you want and then worry about the attachment, whether you need an adapter or not.
If you want universality in the QR plates, then go for a head that supports Arca-Swiss style plates. Though again, be warned that that is no guarantee of a great meshing between the head and plate if you choose to mix brands. | https://proxies-free.com/tag/common/ | CC-MAIN-2020-29 | refinedweb | 1,981 | 56.25 |
formStage s and graph junctions like Merge or Broadcast. For the full list of built-in processing stages see Overview of built-in stages and their semantics. val sum: Future[Int] = runnable.run() which will represent the result of the folding process over the stream. In general, a stream can expose multiple materialized values, but it is quite common to be interested in only the value of the Source or the Sink in the stream. For this reason there is a convenience method called runWith() available for Sink, Source or Flow requiring, respectively, a supplied Source (in order to run a Sink), a Sink (in order to run a Source) or both a Source and a Sink (in order to run a Flow, since it has neither attached yet).(_))
There are various ways to wire up different parts of a stream, the following examples show some of the available options:
//.
Note
Reusing instances of linear computation stages (Source, Sink, Flow) inside composite Graphs is legal, yet will materialize that stage multiple times.
Operator Fusion
Akka Streams 2.0 contains an initial version of stream operator fusion support. This means that the processing steps of a flow or stream graph can be executed within the same Actor and has three consequences:
- starting up a stream may take longer than before due to executing the fusion algorithm
- passing elements from one processing stage to the next is a lot faster between fused stages due to avoiding the asynchronous messaging overhead
- fused stream processing stages do no longer run in parallel to each other, meaning that only up to one CPU core is used for each fused part
The first point can be countered by pre-fusing and then reusing a stream blueprint as sketched below:
import akka.stream.Fusing val flow = Flow[Int].map(_ * 2).filter(_ > 500) val fused = Fusing.aggressive(flow) Source.fromIterator { () => Iterator from 0 } .via(fused) .take(1000)
In order to balance the effects of the second and third bullet points you will have to insert asynchronous boundaries manually into your flows and graphs by way of adding Attributes.asyncBoundary using the method async on Source, Sink and Flow to pieces that shall communicate with the rest of the graph in an asynchronous fashion.
Source(List(1, 2, 3)) .map(_ + 1).async .map(_ *.
Warning })
Note or GraphStage – which gives you full control over how the merge is performed.
Contents | http://doc.akka.io/docs/akka/2.4/scala/stream/stream-flows-and-basics.html | CC-MAIN-2017-13 | refinedweb | 405 | 58.52 |
Creating and Deploying a Model (QuickStart)
Using Cloudera Data Science Workbench, you can create any function within a script and deploy it to a REST API. In a machine learning project, this will typically be a predict function that will accept an input and return a prediction based on the model's parameters.
- Create a new project. Note that models are always created within the context of a project.
- Click Open Workbench and launch a new Python 3 session.
- Create a new file within the project called
add_numbers.py. This is the file where we define the function that will be called when the model is run. For example:add_numbers.py
def add(args): result = args["a"] + args["b"] return result
- Before deploying the model, test it by running the
add_numbers.pyscript, and then calling the
addfunction directly from the interactive workbench session. For example:
add({"a": 3, "b": 5})
- Deploy the
addfunction to a REST endpoint.
- Go to the project Overview page.
- Click .
- Give the model a Name and Description.
- Enter details about the model that you want to build. In this case:
- File: add_numbers.py
- Function: add
- Example Input: {"a": 3, "b": 5}
- Example Output: 8
- Select the resources needed to run this model, including any replicas for load balancing.
- Click Deploy Model.
- Click on the model to go to its Overview page. Click Builds to track realtime progress as the model is built and deployed. This process essentially creates a Docker container where the model will live and serve requests.
- Once the model has been deployed, go back to the model Overview page and use the Test Model widget to make sure the model works as expected.If you entered example input when creating the model, the Input field will be pre-populated with those values. Click Test. The result returned includes the output response from the model, as well as the ID of the replica that served the request.Model response times depend largely on your model code. That is, how long it takes the model function to perform the computation needed to return a prediction. It is worth noting that model replicas can only process one request at a time. Concurrent requests will be queued until the model can process them. | https://docs.cloudera.com/cdsw/1.9.0/models/topics/cdsw-creating-and-deploying-a-model--quickstart-.html | CC-MAIN-2021-04 | refinedweb | 376 | 65.73 |
I am trying to benchmark a set of computations like so -
def benchmark(func, index, array)
start = Time.now
func(index, array)
start - Time.now #returns time taken to perform func
end
def func1(index, array)
#perform computations based on index and array
end
def func2(index, array)
#more computations....
end
benchmark(func1, index1, array1)
benchmark(func1, index2, array2)
`func1': wrong number of arguments (0 for 2) (ArgumentError)
benchmark(func1(index1, array1), index1, array1)
undefined method `func' for main:Object (NoMethodError)
In Ruby, methods can be called without including empty parentheses after the method name, like so:
def func1 puts "Hello!" end func1 # Calls func1 and prints "Hello!"
Because of this, when you write
benchmark(func1, index1, array1), you're actually calling
func1 with no arguments and passing the result to
benchmark, not passing
func1 to the benchmark function as you expected. In order to pass
func1 as an object, you may obtain a wrapper object for the function using the
method method, like this:
def func1 puts "Hello!" end m = method(:func1) # Returns a Method object for func1 m.call(param1, param2)
Most of the time though, that's not something you really want to do. Ruby supports a construct called blocks which is much better suited for this purpose. You may already be familiar with blocks from the
each iterator Ruby uses for looping through arrays. Here's what it would look like to use blocks for your use case:
def benchmark start = Time.now yield Time.now - start # Returns time taken to perform func end # Or alternately: # def benchmark(&block) # start = Time.now # block.call # Time.now - start # Returns time taken to perform func # end def func1(index, array) # Perform computations based on index and array end def func2(index, array) # More computations.... end benchmark { func1(index1, array1) } benchmark { func1(index1, array2) }
In fact, Ruby has a standard library for benchmarking called Benchmark which uses blocks and probably already does exactly what you want.
Usage:
require 'benchmark' n = 5000000 Benchmark.bm do |x| x.report { for i in 1..n; a = "1"; end } x.report { n.times do ; a = "1"; end } x.report { 1.upto(n) do ; a = "1"; end } end
The result:
user system total real 1.010000 0.000000 1.010000 ( 1.014479) 1.000000 0.000000 1.000000 ( 0.998261) 0.980000 0.000000 0.980000 ( 0.981335) | https://codedump.io/share/mfRfLa1ffQFD/1/benchmarking-methods-in-ruby | CC-MAIN-2017-17 | refinedweb | 394 | 64 |
Tim Armstrong has posted comments on this change.
Change subject: IMPALA-5113: fix dirty unpinned invariant
......................................................................
Patch Set 4:
(8 comments)
File be/src/runtime/bufferpool/buffer-pool-internal.h:
PS4, Line 219: or
> ?
Done
Line 222: /// locks should be held by the caller.
> this should talk specifically about reservations.
Done
PS4, Line 233: CleanPagesBeforeAllocationLocked
> the name seems a bit misleading since we have to use it in places other tha
Done
Line 237: /// accounting. No page or client locks should be held by the caller.
> let's mention specifically that it releases the buffer to the client's rese
Done
PS4, Line 336: Only used for debugging.
> remove this now that the byte count is presumably used for non-debugging.
Done
File be/src/runtime/bufferpool/buffer-pool.cc:
Line 131: client->impl_->reservation()->AllocateFrom(len);
> why does this happen after AllocateBufferInternal()? In the case of Allocat
Changed so that new pages are constructed by calling AllocateBuffer() to allocate a buffer,
then passing that into a different function that creates the page.
Line 505: // Clean pages before updating the accounting.
> that's obvious from the code. explain the "why" instead.
Done
File be/src/runtime/bufferpool/buffer-pool.h:
Line 251: class PageList;
> is this only for tests? if so, let's comment that.
It needs to be a type in the BufferPool namespace because it accesses Page, which is private
to that namespace.
--
To view, visit
To unsubscribe, visit
Gerrit-MessageType: comment
Gerrit-Change-Id: I07e08acb6cf6839bfccbd09258c093b1c8252b25
Gerrit-PatchSet: 4
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: Tim Armstrong <tarmstrong@cloudera.com>
Gerrit-Reviewer: Dan Hecht <dhecht@cloudera.com>
Gerrit-Reviewer: Tim Armstrong <tarmstrong@cloudera.com>
Gerrit-HasComments: Yes | http://mail-archives.apache.org/mod_mbox/impala-reviews/201703.mbox/%3C201703271859.v2RIxddE024389@ip-10-146-233-104.ec2.internal%3E | CC-MAIN-2017-39 | refinedweb | 282 | 53.07 |
Creates a “results” folder in the current directory to store all of the load testings.# Simple usage. pyronos get 25 simple # Send head request. pyronos head 25 simple # Dump logs. pyronos get 25 simple -d # Send requests sequentially. pyronos get 25 simple -s # Print progress of sequential requests. pyronos get 25 simple -s -p $ pyronos -h usage: pyronos [-h] [-f {simple,stem,step}] [-o {csv,json,yml}] [-s] [-p] [-d] [-v] url {get,head,options,delete,post,put} num_of_reqs Simple and sweet load testing module. positional arguments: url url of website {get,head,options,delete,post,put} http method num_of_reqs number of requests optional arguments: -h, --help show this help message and exit -f {simple,stem,step}, --figure {simple,stem,step} type of figure -o {csv,json,yml}, --output {csv,json,yml} type of output -s, --sequential sequential requests -p, --print-progress print progress -d, --dump-logs dump logs -v, --version show program's version number and exit
Python Envelope: Mailing for human beings.
Envelopes is a wrapper for Python’s email and smtplib modules. It aims to make working with outgoing e-mail in Python simple and fun.)
Converting Unicode in Python 3: from Character Code to Decimal
Given the Control Code column in the Wikipedia List of Unicode Characters:
Example 1: The Cent character
> Python Prompt:
> code = ‘0041’
>>> decimal = int(code,16)
>>> decimal
65
>>> chr(decimal)
‘A’
Example 2: The Cent character
> Python Prompt:
> code = ’00A2′
>>> decimal = int(code,16)
>>> decimal
162
>>> chr(decimal)
‘¢’
Example 3: The Greek Sigma character
> Python Prompt
> code = ’03A3′
>>> decimal = int(code,16)
>>> decimal
931
>>> chr(decimal)
‘Σ’
Example 4: Soccer Ball
> Python Prompt:
> code = ’26BD’
>>> decimal = int(code,16)
>>> decimal
9917
>>> chr(decimal)
‘⚽’
Note: The Soccer ball did not display correctly in my Windows Shell, but rendered properly when I copied it into a Chrome WordPress textarea.
Example 5: Emoticons
>>> code = ‘1F60E’
>>> decimal = int(code,16)
>>> decimal
128526
>>> chr(decimal)
‘😎’
Unicode & Character Encodings in Python: A Painless Guide
Python
import unicodedata
>> print(u”Test\u2014It”)
Test—It
>> s = u”Test\u2014It”
>> ord(s[4])
8212
>>> chr(732)
‘˜’
>>> c = chr(732)
>>> ord(c)
732
escape_characters = set()
if ord(c) in escape_characters:
>> unicodedata.name(c)
‘SMALL TILDE’
JavaScript:
String.fromCharCode(parseInt(unicode,16))>> c = String.fromCharCode(732);
“˜”>> c.charCodeAt(0);732>> String.fromCharCode(0904)>> c = String.fromCharCode(parseInt(‘2014’,16)) 2014 = hex
“—”>> c.charCodeAt(0);
8212c = String.fromCharCode(39);>> c.charCodeAt(0);
39
jsFiddle
var str = String.fromCharCode(e.which);
$(‘#charCodeAt’)[0].value = str.charCodeAt(0);
$(‘#fromCharCode’)[0].value = encodeURIComponent(str);
jQuery String Functions
- charAt(n): Returns the character at the specified index in a string. The index starts from 0.
- charCodeAt(n): Returns the Unicode of the character at the specified index in a string. The index starts from 0.
Mathias Bynens: JavaScript Has a Unicode Problem:
As my JavaScript escapes tool would tell you, the reason is the following:
>> 'ma\xF1ana' == 'man\u0303ana' false >> 'ma\xF1ana'.length 6 >> 'man\u0303ana'.length 7
The first string contains U+00F1 LATIN SMALL LETTER N WITH TILDE, while the second string uses two separate code points (U+006E LATIN SMALL LETTER N and U+0303 COMBINING TILDE) to create the same glyph. That explains why they’re not equal, and why they have a different
length.
However, if we want to count the number of symbols in these strings the same way a human being would, we’d expect the answer
6for both strings, since that’s the number of visually distinguishable glyphs in each string. How can we make this happen?
In ECMAScript 6, the solution is fairly simple:
function countSymbolsPedantically(string) { // Unicode Normalization, NFC form, to account for lookalikes: var normalized = string.normalize('NFC'); // Account for astral symbols / surrogates, just like we did before: return punycode.ucs2.decode(normalized).length; }
The
normalizemethod on
String.prototypeperforms Unicode normalization, which accounts for these differences. If there is a single code point that represents the same glyph as another code point followed by a combining mark, it will normalize it to the single code point form.
>> countSymbolsPedantically('mañana') // U+00F1 6 >> countSymbolsPedantically('mañana') // U+006E + U+0303 6
For backwards compatibility with ECMAScript 5 and older environments, a
String.prototype.normalizepolyfill can be used.
Turning a code point into a symbol
String.fromCharCodeallows you to create a string based on a Unicode code point. But it only works correctly for code points in the BMP range (i.e. from U+0000 to U+FFFF). If you use it with an astral code point, you’ll get an unexpected result.
>> String.fromCharCode(0x0041) // U+0041 'A' // U+0041 >> String.fromCharCode(0x1F4A9) // U+1F4A9 '' // U+F4A9, not U+1F4A9
The only workaround is to calculate the code points for the surrogate halves yourself, and pass them as separate arguments.
>> String.fromCharCode(0xD83D, 0xDCA9) '💩' // U+1F4A9
If you don’t want to go through the trouble of calculating the surrogate halves, you could resort to Punycode.js’s utility methods once again:
>> punycode.ucs2.encode([ 0x1F4A9 ]) '💩' // U+1F4A9
Luckily, ECMAScript 6 introduces
String.fromCodePoint(codePoint)which does handle astral symbols correctly. It can be used for any Unicode code point, i.e. from U+000000 to U+10FFFF.
>> String.fromCodePoint(0x1F4A9) '💩' // U+1F4A9
For backwards compatibility with ECMAScript 5 and older environments, use a
String.fromCodePoint()polyfill.
Getting a code point out of a string
Similarly, if you use
String.prototype.charCodeAt(position)to retrieve the code point of the first symbol in the string, you’ll get the code point of the first surrogate half instead of the code point of the pile of poo character.
>> '💩'.charCodeAt(0) 0xD83D
Luckily, ECMAScript 6 introduces
String.prototype.codePointAt(position), which is like
charCodeAtexcept it deals with full symbols instead of surrogate halves whenever possible.
>> '💩'.codePointAt(0) 0x1F4A9
For backwards compatibility with ECMAScript 5 and older environments, use a
String.prototype.codePointAt()polyfill.
Real-world bugs and how to avoid them
This behavior leads to many issues. Twitter, for example, allows 140 characters per tweet, and their back-end doesn’t mind what kind of symbol it is — astral or not. But because the JavaScript counter on their website at some point simply read out the string’s
lengthwithout accounting for surrogate pairs, it wasn’t possible to enter more than 70 astral symbols. (The bug has since been fixed.)
Many JavaScript libraries that deal with strings fail to account for astral symbols properly.
Introducing… The Pile of Poo Test™
Whenever you’re working on a piece of JavaScript code that deals with strings or regular expressions in some way, just add a unit test that contains a pile of poo (
💩) in a string, and see if anything breaks. It’s a quick, fun, and easy way to see if your code supports astral symbols. Once you’ve found a Unicode-related bug in your code, all you need to do is apply the techniques discussed in this post to fix it.
Stack Overflow on String.fromCharCode():
inArrayreturns the index of the element in the array, not a boolean indicating if the item exists in the array. If the element was not found,
-1will be returned.
So, to check if an item is in the array, use:
if(jQuery.inArray("test", myarray) !== -1)
- String.fromCodePoint() Not supported by Internet Explorer. From Safari 10
-. | https://www.openpolitics.com/tag/python/ | CC-MAIN-2020-29 | refinedweb | 1,212 | 55.84 |
Properties in Kotlin ?
Properties are nothing but the variable with getter and setters, by Default kotlin provide the getter and setter for kotlin variables, So all the Kotlin variable are Properties.
In the below example, name and age are variable, we should initialize these variables as they are non-nullable dataTypes.
I have declared them as var so that we can change values. If I declare as Val then we cannot change the values of them.
class Person{ var name:String = "chercher tech" var age:Int = 1 // year }
class Person{ var name:String = "chercher tech" var age:Int = 1 // year } fun main(args: Array<String>) { var per = Person() // at zeroth month, my site name was selenium-webdriver.com // then I renamed it per.age = 0 per.name = "selenium-webdriver.com" println(per.name) println(per.age) }
Getters are used to retrieve the value of the variable from the class, both val and var variable will have the getter
Setters are used to set the value of the variables, only the var variable has setter as we cannot change the value of the val variable in kotlin.
In Kotlin, Getter and setter are automatically done by the kotlin compiler. Let's see how to add getter and setters manually in kotlin
Kotlin Compiler calls these getters and setters internally whenever you access or modify a property using the dot(.) operator on the class object.
Let's see the compiled code of the above program. Please do adjust with below image size.
To see the decompiled code in your system, Navigate to Tools->Kotlin->Show Kotlin Byte Code and the select deCompile option but you might see the java code
In the above image, you can see that getter and setter are only to help with getting and setting the value, but if we create our own getters and setters then we can write the code according to our needs.
Example of Getters and setter. In setter function, I have written my own code like println
class Person(age: Int) { val name: String = "chercher tech" get()=field var age: Int = age get() = field // set the value set(value) { println("Age value will be changed to : $age") field = value } } fun main(args: Array<String>) { // setting age var per = Person(13) println(per.name) println(per.age) }
We use value as the name of the setter parameter. This is the standard convention in Kotlin but you are free to use any other name if you want.
value identifier will be having the value that we are trying to assign to the variable.
In below example, I have to change the identifier to some other name, does it sound familiar
class Person(age: Int) { val name: String = "chercher tech" get()=field var age: Int = age get() = field // using thanos instead of value identifier set(thanos) { println("Age value will be changed to : $age") field = thanos } } fun main(args: Array<<String>) { // setting age var per = Person(13) println(per.name) println(per.age) }
Backing field helps you refer to the property inside the getter and setter methods.
This is required because if you use the property directly inside the getter or setter then you’ll run into a recursive call which will generate a StackOverflowError.
Stack part of the RAM and it will be occupied completely because of the recursive call(recursive call is nothing but a function calling the same function, Have you watched a movie called Triangle)
class Person() { var age: Int= 12 // initial value // set the value set(thanos) { age = thanos } } fun main(args: Array<String>) { // setting age var per = Person() println(per.age) // below assignment will make recursive calls // as we are performing setting operation per.age = 89 }
Value of the variables that are known at compile-time can be marked as compile-time constants in kotlin using the const keyword
A property with a custom getter can never be a compile-time constant. Kotlin does not support evaluating code at compile-time.
const val age: String= "12" fun main(args: Array<String>) { println("age cost is : $age") }
consts are compile-time constants. Meaning that their value has to be assigned during compile-time, unlike vals, where it can be done at runtime. This means, that consts can never be assigned to a function or any class constructor, but only to a String or primitive. For example:
const val foo = complexFunctionCall() //Not okay val fooVal = complexFunctionCall() //Okay const val bar = "Hello world" //Also okay fun complexFunctionCall(){ // do something } fun main(args: Array<String>) { println("age cost is : $age") }
In Kotlin, when you define variable either we have to initialize with some value or define them in such way value can be null for those variables.
In some cases, you want to initialize the value later but don't want the variables to have the null value.
We can initialize the kotlin variables in different ways at different times of the execution of kotlin program, few methods are:
Sometimes variable are assigned with lazy Initialization, when you want that variable to get execute only once.
Lazy Initialization accepts a lambda and the lambda will be executed only once, when you call the variable again the lazy initialization will not be executed again but you will get the previous result.
Basically his results in singleton pattern kind of thing. For example, you have 10 pages in your application, each one of the page requires a database connection, and also you have a logic and get the details of the database every day for a per day token( so that you cannot store them in const as the token changes every day).
In this, you might want to run the code to generate the token and use for the day, so do you think it is better to calculate that token (returns the same token). 10 or 100 times, would thing to store the result of the first time execution in some place then use it for remaining times.
With lazy initialization you can achieve this.
In the below example, I am println '****I have got executed*****' inside the lambda and returning nameAppend, but I am calling three time same var but it will execute the lambda only once that is the first time
var nameAppend = "Kotlin" // lazy works with lambda val site: String by lazy { println("****I have got executed*****") // results KotlinKotlin for only first time nameAppend = nameAppend + nameAppend nameAppend } fun main(args: Array<String>) { // I have called it three times println(site) println(site) println(site) }
Let's create lambda and the val Lazy keyword and execute the above kotlin functions. You will get to know the difference
var nameAppend = "Kotlin" // lazy works with lambda val site: String by lazy{ println("****I have got executed*****") // results KotlinKotlin for only first time nameAppend = nameAppend + nameAppend nameAppend } // normal lambda val returnVauleLambda = { println("****I Am normal Lambda*****") // results KotlinKotlin for only first time nameAppend = nameAppend + nameAppend nameAppend } fun main(args: Array<String>) { // call the lazy println(site) println(site) println(site) // call the lambda println(returnVauleLambda()) println(returnVauleLambda()) println(returnVauleLambda()) }
In kotlin, you must initialize non-null variable while declaring, if you do not then the compiler will show error for it.
But sometimes, you might do not want to initialize the variable, but you may want to initialize the variable, only before using it.
You should use the lateInit in case if you do not want to initialize a variable in while declaring.
LateInit initialization means you should be initializing the value for this variable before accessing it. In case we try to access the variable before initializing it we will see this error
// if you note below line we have not initialize the 'site' lateinit var site: String fun main(args: Array<String>) { site = "Chercher tech" println("Length of string is "+site.length) }
Let's try without initializing the lateinit variable in kotlin
// if you note below line we have not initialize the 'site' lateinit var site: String fun main(args: Array<String>) { println("Length of string is "+site.length) }
Delegates.Observable makes sure that when a value changes of a particular variable, it can perform some actions like notifies you or it performs some changes. To use Delegates Observable you need to import a package from the kotlin language.
Basically, it is something like a listener, it listens to the variable that we have declared with Delegates Observable.
You can provide the initial value for the Variable in Observable method as an argument, Delegates Observable works with a lambda with three parameters, kotlin handles those parameters.
Delegates Observable can be applied only on var type, not on val type variables in kotlin
import kotlin.properties.Delegates // lazy works with lambda var site: String by Delegates.observable("Selenium-mentor.com"){ variableToObserver, oldValue, newValue -> // will be executed when there is change println("I was '$oldValue', but they changed me to '$newValue'") } fun main(args: Array%ltString>) { site = "selenium-webdriver.com" site = "chercher.tech" } | https://chercher.tech/kotlin/properties-initialization-kotlin | CC-MAIN-2020-10 | refinedweb | 1,488 | 53.95 |
Hello,
I'm trying to solve my problem with lot of topics in this and other forums.
I hope anyone yould give me the missing and solving tip.
I'm sending data from the RPi with UART to my Arduino Pro Mini.
The test on the RPI (connecting TX to RX) is successfull and shows the echo.
Just for testing i have written the following code in python on RPi:
import serial ser = serial.Serial('/dev/ttyAMA0', 9600) ser.write(b'1') #print(ser.read()) ser.close
This works.
But I am not able to get the data from UART on my arduino.
I tried lot of code parts i've found. Nothing helps.
Tried it with codes); } }
I connected the RX/TX and TX/RX.
Builded it up like:
Arduino Pro Mini is connected to my PC with a TTL Converter.
Arduino Pro Mini connected to the RPI with RX/TX, 3,3v and ground.
Hopefully anyone finds a mistake or can help me out with some tips.
best regards,
oskar | https://forum.arduino.cc/t/problems-while-raspberry-pi-3b-is-sending-data-via-uart-to-pro-mini/632888 | CC-MAIN-2022-21 | refinedweb | 172 | 77.13 |
While most MyFaces apps work seamlessly as a servlet or a portlet, changing a MyFaces appliction into a portlet can sometimes raise.
MyFaces uses also external resources like CSS and JavaScripts files. Some of these files are added by the framework only once to the head part of the generated HTML output. Running as portlet, there is no access to this head as the portlet do not know about its container. For this, special External_Resources implementations are required for specific portal servers.
Generated IDs:
MyFaces uses internal also IDs to identify the controls, but some portal servers rename these IDs (prefixes them for example) which could cause problems with generated JavaScript code. Actually, namespace prefixes are part of the JSF spec and should work okay, presuming that the component is written well
Caching:
CreatingJSFPortlets - Quick and dirty instructions to turn your MyFaces app into a portlet.
UsingPortletUtil - What to do if your application needs to know whether or not it is running as a portlet.
UsingPortletModes - Make your MyFaces portlet use portlet modes such as Edit and Help.
MavenLiferayPortlet - How to build a MyFaces app with Maven 2 and deploy it under Liferay.
Remote Debugging could be useful if you are using Portals. How to set up Eclipse for remote debugging, have a look at Levent Gurses' great article: .: | https://wiki.apache.org/myfaces/Using_Portlets?action=diff | CC-MAIN-2017-22 | refinedweb | 219 | 61.36 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
what is lambda in OpenERP? [Closed]
The Question has been closedby
_defaults = { 'user_id': lambda self, cr, uid, context: uid,
need to clarify above code segment..? is it work as a function.? if yes self, cr, uid, context: uid are act as a parameters of that method.i'm confused with that phase
please help me
Lambda is not openerp object it is from python.
And yes is work like anonymous functions
plz check above comment
def _function(self,cr,uid): return uid
is simple function.Above link has example like this
`def make_incrementor (n): return lambda x: x + n f = make_incrementor(2) g = make_incrementor(6) print f(42), g(42)`
Output is : 44 48
lambda is not specific to OpenERP. It is a statement used by
Python to declare functions.
Here is a good tutorial about it:
In you case
lambda is used to reference the field
uid from the function arguments.
You can see in
openerp/osv/orm.py how the default function is called:
default = self._defaults[k](self, cr, SUPERUSER_ID, context)
when we wrote above code under the function standard def _function(self,cr,uid): return uid
is this correct.? if not please mention correct one
I am not sure what you would like to do, but lambda is used to "extract" the parameters from the call. I have updated my answer with a call of the default-function.
Can we save the result and get it printed since i saw somewhere that lambda cannot return any value!
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/what-is-lambda-in-openerp-6097 | CC-MAIN-2017-34 | refinedweb | 306 | 65.01 |
Firmware Release v1.20.1
This fixed the issue with lora.nvram_erase(). Also, I was having troubles deleting the flash from the pycom firmware upgrade software (it will time out if I select that option, working fine if I just upgrade the FW).
I have observed that having pybytes active and doing my own LoRa OTTA (saving on nvram after joining the network) caused that the join flag was deleted and never connected again. I have disabled the pybytes in the
pybytes_config.jsonfile adding
{"pybytes_autostart": false}and that fixed the issue.
I have been running the program on v1.20.1.r1 without pybytes and haven't had any Guru Meditation errors for a while. I'll keep trying...
@oligauc I tried another device, a LoPy now, and that one was completely messy. After setting a Lora parameter wrong, I could not change that any more. Only erasing flash and re-installing the firmware made it working again. Besides that, below is a nice picture showing how a message with DR=0 looks in the waterfall diagram.
@oligauc I just verified that, after a clean erase & upload. Then only one frequency is used. There may have been a modified set-up. I purchased a SDR recently, which allows to see the frequencies used. It's interesting to see all the different activities in the 868 band.
@robert-hh We carried out some tests with the abp node / RAK gateway in the EU region. If you add 3 channels with the same frequency and remove the others, it will send at 1 frequency.
If you do not add / remove channels manually, then the node uses frequency hopping.
@berni Using a terminal / command prompt, please can you go to the pycom firmware updater folder (e.g "C:\Program Files (x86)\Pycom\Pycom Firmware Update" in windows) and type the following command:
pycom-fwtool-cli.exe -p "your com port" erase_all
After erasing please re-flash your device.
Please let us know how it goes, Thanks
@berni @iwahdan Setting the frequency and/or removing channels is a topic which anyhow does not seem to work. I have to look at it. Even when I set in the abp_node.py sample code the device to use a single frequency only, it transmits at the three default frequencies. That was different once.
Hello @berni , Thanks for reporting that issue, we will work on reproducing it, meanwhile regarding the exception you get when downgrading to rc13 , this should be due to the new formatting mechanism of NVS introduced in IDF_v3.2 (1.20.1) , so when you downgrade the FW to previous version of IDF, NVS might not be initialise successfully, to solve this issue you can erase the device and flash the Firmware again.
Thanks
After upgrading to 1.20.1, I started having crashes when setting up the frequencies for the Nano-Gateway code:
Guru Meditation Error: Core 1 panic'ed (Cache disabled but cached memory region accessed) Guru Meditation Error: Core 1 panic'ed (IllegalInstruction). Exception was unhandled. Memory dump at 0x40201f50: bad00bad bad00bad bad00bad Guru Meditation Error: Core 1 panic'ed (IllegalInstruction). Exception was unhandled. Memory dump at 0x40201f50: bad00bad bad00bad bad00bad Guru Meditation Error: Core 1 panic'ed (IllegalInstruction). Exception was unhandled. [...]
This happens somehow randomly, after a few cycles of deep sleep/transmission, but even the Watchdog timer becomes useless.
Then I downgraded to 1.20.rc13. Now, the code doesn't work and stops in this line:
>>> from network import LoRa >>> lora = LoRa(mode=LoRa.LORAWAN) >>> lora.nvram_erase() Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: the requested operation failed >>> lora.nvram_save() Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: the requested operation failed
Is there any way to go back to 1.20.rc13 and have these function working again? I suspect there must be some low level changes on the LoRa radio.
One software is after the repeated firmware update with the checked option erase filesystem now running. The second one outputs still the same error.
- rcolistete last edited by
Please update the links for all Pycom devices in :
to include the version 1.20.1 for manual firmware download.
@Clemens That wifi_on_boot() issue is a different topic and has been fixed with today's build for the Dev branch. AFIK it is not yet in the 'official' version.
Edit: It looks like there is a lag in the Pycom update. The updater does not show V1.20.1, and the download page for firmware packages is also not updated.
@robert-hh said in Firmware Release v1.20.1:
pycom.wifi_on_boot(True, True)
>>> pycom.wifi_on_boot() False >>> pycom.wifi_on_boot(True, True) Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: the requested operation failed
@Clemens I had a similar problem too on one Lopy4 device - Wifi was not available. Erasing the flash and reloading the firmware solved it.
I have some devices with firmware 1.20.0.rc13 before and now 1.20.1. Unfortunately the old software is not running with the new firmware and I got e.g.
File "main.py", line 11, in <module> File "webserver.py", line 124, in <module> File "/flash/lib/microDNSSrv.py", line 19, in Create File "/flash/lib/microDNSSrv.py", line 161, in Start OSError: Network card not available
or
File "/flash/lib/terkin/network/core.py", line 110, in start_services File "/flash/lib/terkin/network/core.py", line 131, in start_httpserver File "/flash/lib/terkin/api/http.py", line 70, in start File "/flash/dist-packages/microWebSrv.py", line 219, in Start OSError: Network card not available
and my former working OTA-Code is stopping also
File "main.py", line 44, in <module> File "/flash/lib/OTA.py", line 176, in connect NameError: name 'SSID' isn't defined
@cmisztur I compared the related functions in machine.idle() and utime.sleep_ms() for both v1.20.0.rc13 and v1.20.1. They are identical, and just call taskYIELD() (idle) resp. vTaskDelay() (sleep_ms). So the change must be cause by moving from esp_idf v3.1 to esp_if v3.2.
@cmisztur That finding with idle is strange. Comparing the Pycom variant with the Micropython.org shows no difference in the machine.idle() implementation. But the Micropython.org version works. That's how the function looks in both variants:
STATIC mp_obj_t machine_idle(void) { taskYIELD(); return mp_const_none; }
@cmisztur I build the image myself, and as part of that, i can call:
make BOARD=xxPy flash
There are also the pyupgrade tools, you can download from the pycom web site.
@robert-hh said in Firmware Release v1.20.1:
@cmisztur Sorry, I never looked at pybytes.
How are you flashing 1.20.1?
Breaking change for me from 1.20.0.rc13 to 1.20.1 is usage of machine.idle().
Below, line 7 is never reached.
import utime import _thread def t(): print('thread start') utime.sleep_ms(4000) print('thread run') _thread.stack_size(8192) _thread.start_new_thread(t, ()) utime.sleep_ms(1000) while 1: machine.idle() #utime.sleep_ms(50) | https://forum.pycom.io/topic/5258/firmware-release-v1-20-1/41 | CC-MAIN-2020-40 | refinedweb | 1,174 | 68.77 |
I once worked for a company that developed an application that ran as an OS/2 process in a console like window on a Windows server. The users of the application were unaccustomed to that type of windows, and from time to time, someone would close the window by clicking on the "x" close button. Now, the application HAD to be terminated correctly, otherwise it would result in all kinds of corrupt files and database entries, so every time it happened, it generated a lot of work for us at the support desk.
So I wrote a small application in VB6 (yes, it was THAT long ago) that used PInvoke to disable the close button on the OS/2 window to avoid that.
Yesterday, there was a question in Q/A about a similar scenario, so I thought that I would take the opportunity to look at my old code and adapt it to .NET, this time in C#.
So, What IS PInvoke really? I found a very good explanation in Dave Amenta's blog. And since I can't explain it any better, I'm going to rip the explanation directly from the blog:
P/Invoke, or Pinvoke stands for Platform Invocation Services. PInvoke is a feature of the Microsoft .NET Framework that allows a developer to make calls to native code inside Dynamic Link Libraries (DLL’s). When Pinvoking, the .NET framework (or Common Language Routine) will load the DLL and handle the type conversions automatically. The most common use of P/Invoke is to use a feature of Windows that is only contained in the Win32 API. The API in Windows is extremely extensive and only some of the features are encapsulated in .NET libraries. For example, Form.Show(); is really a wrapper for the ShowWindow() API found in shell32.dll.
Please go and have a look at his blog for more details.
A lot of the PInvoke methods are now wrapped in the .NET framework, and it is no longer necessary to access them directly. But some functionality is not, and in those cases, it is good to know that the "old" way still works.
To achieve our objective and disable the close button on another window, we first need to find the windows handle.
In .NET, it's very easy to list the running process on the system and find their handles. Actually, in the old days you even had to use PInvoke for that, so here's an example of something that has been wrapped in the .NET framework to help developers.
In my attached sample app, I enumerate all the running processes, and I list those that have a window handle (because those are the ones with a UI window we can manipulate) in a ListView:
using System.Diagnostics;
Process[] processlist = Process.GetProcesses();
foreach (Process process in processlist)
{
//Add process to listview
}
The Process object contains the MainWindowHandle we need to manipulate the window.
Process
MainWindowHandle
Now, to disable the Close button, we don't actually manipulate the button itself. What we do is that we manipulate the window's System menu. The system menu is the one that pops up when you click the application icon in the upper left corner of the title bar.
This is the system menu from Notepad (sorry that it is in Swedish):
The last menu item on the system menu is "Close". If we disable that item (or remove it completely), we will disable the close button at the same time. That's actually quite nice, because we wouldn't want to disable the close button but leave users with the option to shut down the window using the Close menu item on the System menu.
When we have the Window handle, we need to use that to find the System menu handle. First we need to declare the PInvoke method needed:
[DllImport("user32.dll")]
static extern IntPtr GetSystemMenu(IntPtr hWnd, bool bRevert);
We can then use it to find the System menu handle:
IntPtr sysMenuPtr = GetSystemMenu(mainWindowPtr, false);
And with that, we can disable it. First we need to define two more PInvoke methods:
[DllImport("user32.dll")]
static extern bool EnableMenuItem(IntPtr hMenu, uint uIDEnableItem, uint uEnable);
[DllImport("user32.dll")]
static extern bool DrawMenuBar(IntPtr hWnd);
The first one sets the enabled/disabled state of the menu items, and the second refreshes the menu once changed.
EnableMenuItem takes some parameters. The first one, hMenu is the handle to the menu that should be changed.
EnableMenuItem
hMenu
The LAST parameter (to take that first), uEnable is NOT - like you could be tempted to believe - a mere boolean value of enabled or not. It is a flag variable you use to send information to the method about what you actually want to do.
uEnable
The second one, uIDEnableItem identifies the menuitem we want to change. It can either be a specific item identified by a constant id or it can be an item identified by it's index in the menu. In our case, we already know that we want to disable the "Close" item, so we can specify the item directly regardless of where in the menu it is.
uIDEnableItem
We do that by specifying it's constant id, SC_CLOSE and calling EnableMenuItem with the menu items id in uIDEnableItem, and as uEnable flag we send both MF_BYCOMMAND to let the method know that we have specified the menu items id instead of it's index AND MF_GRAYED to gray out and disable the item.
SC_CLOSE
MF_BYCOMMAND
MF_GRAYED
private const int MF_BYCOMMAND = 0x0;
private const int MF_BYPOSITION = 0x400;
private const int MF_REMOVE = 0x1000;
private const int MF_ENABLED = 0x0;
private const int MF_GRAYED = 0x1;
private const int MF_DISABLED = 0x2;
private const int SC_CLOSE = 0xF060;
To disable the item, we then call:
EnableMenuItem(CurrentSystemMenuHandle, SC_CLOSE, MF_BYCOMMAND | MF_GRAYED);
DrawMenuBar(CurrentSystemMenuHandle);
We COULD (if we wanted to) have disabled it by index instead. In that case, we would have called:
EnableMenuItem(CurrentSystemMenuHandle, 6, MF_BYPOSITION | MF_GRAYED);
Where 6 is the index of the Close item (even the divider has an index).
Of course, that won't work if someone has manipulated and rearranged the menu items...
Note: This will disable the Close menu item in the System menu AND the Close button in the upper right corner. But normally, an application will have another menu that might feature a close funtion. That one is NOT disabled. We wouldn't want to do that either, because that one provides the user with the option to close the application CORRECTLY.
File menu from Notepad where you can close the application correctly (Also in Swedish - sorry!):
I have done a demo application that can enable and disable the close functionality as well as REMOVE the option completely, should you want to do that. It also shows how to get the current state of the close menu item.
PInvoke can be used to manipulate other windows in a lot of interesting ways, their size, position, minimized and maximized property, Zorder and get them to stay on top or not.
In the demo app, I have also shown how to set a window topmost and remove it again.
When you work with PInvoke, this is a useful site that describes most of the methods:
The point of this article is to do exactly what I have described above. You are free to add new functionality, but please don't send me a lot of suggestions like "You should really add this and that functionality..." - It is only a demonstration of a few methods and I'm not going to add any other functionality to this article.
Version 1.00 - Initial. | http://www.codeproject.com/Articles/611358/How-to-use-PInvoke-to-disable-the-close-button-on | CC-MAIN-2014-52 | refinedweb | 1,274 | 60.04 |
Here’s a neat Python trick you might just find useful one day. Let’s look at how you can dynamically define classes, and create instances of them as required.
This trick makes use of Python’s object oriented programming (OOP) capabilities, so we’ll review those first.
Classes and objects
Python is an object-oriented language, meaning it lets you write code in the object oriented paradigm.
The key concept in this programming paradigm is classes. In Python, these are used to create objects which can have attributes.
Objects are specific instances of a class. A class is essentially a blueprint of what an object is and how it should behave.
Classes are defined with two types of attribute:
- Data attributes — variables available to a given instance of that class
- Methods — functions available to an instance of that class
The classic OOP example usually involves different types of animal, or food. Here, I’ve gone more practical with a simple data visualization theme.
First, define the class
BarChart.
class BarChart: def __init__(self, title, data): self.title = title self.data = data def plot(self): print("\n"+self.title) for k in self.data.keys(): print("-"*self.data[k]+" "+k)
The
__init__ method lets you set attributes upon instantiation. That is, when you create a new instance of
BarChart, you can pass arguments that provide the chart’s title and data.
This class also has a
plot() method. This prints a very basic bar chart to the console when it is called. It could feasibly do more interesting things in a real application.
Next, instantiate an instance of
BarChart:
data = {"a":4, "b":7, "c":8}bar = BarChart("A Simple Chart", data)
Now you can use the
bar object in the rest of your code:
bar.data['d'] = bar.plot()
A Simple Chart ---- a ------- b -------- c ----- d
This is great, because it allows you to define a class and create instances dynamically. You can spin up instances of other bar charts in one line of code.
new_data = {"x":1, "y":2, "z":3} bar2 = BarChart("Another Chart", new_data) bar2.plot()
Another Chart - x -- y --- z
Say you wanted to define several classes of chart. Inheritance lets you define classes which “inherit” properties from base classes.
For example, you could define a base
Chart class. Then you can define derived classes which inherit from the base.
class Chart: def __init__(self, title, data): self.title = title self.data = data def plot(self): pass
class BarChart(Chart): def plot(self): print("\n"+self.title) for k in self.data.keys(): print("-"*self.data[k]+" "+k)
class Scatter(Chart): def plot(self): points = zip(data['x'],data['y']) y = max(self.data['y'])+1 x = max(self.data['x'])+1 print("\n"+self.title) for i in range(y,-1,-1): line = str(i)+"|" for j in range(x): if (j,i) in points: line += "X" else: line += " " print(line)
Here, the
Chart class is a base class. The
BarChart and
Scatter classes inherit the
__init__() method from
Chart. But they have their own
plot() methods which override the one defined in
Chart.
Now you can create scatter chart objects as well.
data = {'x':[1,2,4,5], 'y':[1,2,3,4]} scatter = Scatter('Scatter Chart', data) scatter.plot()
Scatter Chart 4| X 3| X 2| X 1| X 0|
This approach lets you write more abstract code, giving your application greater flexibility. Having blueprints to create countless variations of the same general object will save you unnecessarily repeating lines of code. It can also make your application code easier to understand.
You can also import classes into future projects, if you want to reuse them at a later time.
Factory methods
Sometimes, you won’t know the specific class you want to implement before runtime. For example, perhaps the objects you create will depend on user input, or the results of another process with a variable outcome.
Factory methods offer a solution. These are methods that take a dynamic list of arguments and return an object. The arguments supplied determine the class of the object that is returned.
A simple example is illustrated below. This factory can return either a bar chart or a scatter plot object, depending on the
style argument it receives. A smarter factory method could even guess the best class to use, by looking at the structure of the
data argument.
def chart_factory(title, data, style): if style == "bar": return BarChart(title, data) if style == "scatter": return Scatter(title, data) else: raise Exception("Unrecognized chart style.")
chart = chart_factory("New Chart", data, "bar") chart.plot()
Factory methods are great when you know in advance which classes you want to return, and the conditions under which they are returned.
But what if you don’t even know this in advance?
Dynamic definitions
Python lets you define classes dynamically, and instantiate objects with them as required.
Why might you want to do this? The short answer is yet more abstraction.
Admittedly, needing to write code at this level of abstraction is generally a rare occurrence. As always when programming, you should consider if there is an easier solution.
However, there may be times when it genuinely proves useful to define classes dynamically. We’ll cover a possible use-case below.
You may be familiar with Python’s
type() function. With one argument, it simply returns the “type” of the object of the argument.
type(1) # <type 'int'> type('hello') # <type 'str'> type(True) # <type 'bool'>
But, with three arguments,
type() returns a whole new type object. This is equivalent to defining a new class.
NewClass = type('NewClass', (object,), {})
- The first argument is a string that gives the new class a name
- The next is a tuple, which contains any base classes the new class should inherit from
- The final argument is a dictionary of attributes specific to this class
When might you need to use something as abstract as this? Consider the following example.
Flask Table is a Python library that generates syntax for HTML tables. It can be installed via the pip package manager.
You can use Flask Table to define classes for each table you want to generate. You define a class that inherits from a base
Table class. Its attributes are column objects, which are instances of the
Col class.
from flask_table import Table, Col class MonthlyDownloads(Table): month = Col('Month') downloads = Col('Downloads') data = [{'month':'Jun', 'downloads':700}, {'month':'Jul', 'downloads':900}, {'month':'Aug', 'downloads':1600}, {'month':'Sep', 'downloads':1900}, {'month':'Oct', 'downloads':2200}] table = MonthlyDownloads(data)print(table.__html__())
You then create an instance of the class, passing in the data you want to display. The
__html__() method generates the required HTML.
Now, say you’re developing a tool that uses Flask Table to generate HTML tables based on a user-provided config file. You don’t know in advance how many columns the user wants to define — it could be one, it could be a hundred! How can your code define the right class for the job?
Dynamic class definition is useful here. For each class you wish to define, you can dynamically build the
attributes dictionary.
Say your user config is a CSV file, with the following structure:
Table1, column1, column2, column3 Table2, column1 Table3, column1, column2
You could read the CSV file line-by-line, using the first element of each row as the name of each table class. The remaining elements in that row would be used to define
Col objects for that table class. These are added to an
attributes dictionary, which is built up iteratively.
for row in csv_file: attributes = {} for column in row[1:]: attributes[column] = Col(column) globals()[row[0]] = type(row[0], (Table,), attributes)
The code above defines classes for each of the tables in the CSV config file. Each class is added to the
globals dictionary.
Of course, this is a relatively trivial example. FlaskTable is capable of generating much more sophisticated tables. A real life use-case would make better use of this! But, hopefully, you’ve seen how dynamic class definition might prove useful in some contexts.
So now you know…
If you are new to Python, then it is worth getting up to speed with classes and objects early on. Try implementing them in your next learning project. Or, browse open source projects on Github to see how other developers make use of them.
For those with a little more experience, it can be very rewarding to learn how things work “behind-the-scenes”. Browsing the official docs can be illuminating!
Have you ever found a use-case for dynamic class definition in Python? If so, it’d be great to share it in the responses below. | https://www.freecodecamp.org/news/dynamic-class-definition-in-python-3e6f7d20a381/ | CC-MAIN-2019-43 | refinedweb | 1,455 | 66.03 |
Hope you enjoyed the first few posts on JavaScript and Hardware! If you've missed the previous parts, you can read them here and here. We're not over by a long shot, so let's strap ourselves in!
Today, we're going to start looking at using modern JavaScript syntax and features, as well making sure we write consistent code. This will allow us to use the latest features and make sure we're writing our code neatly and precisely!
Setting Up EditorConfig
We're going to start off with making sure the way our code is typed is consistent across all editors. This is especially useful when you have multiple developers in the same project, especially if they use different editors, and make sure they keep to the same code rules. This is where EditorConfig comes in handy!
With EditorConfig, we can specify how code should be written, over all files and specific file types, with how many spaces or tabs should be in an indent, inserting new lines at the end of files, and more! Most editors now come with EditorConfig built in, but if yours doesn't, be sure to install EditorConfig for your Editor!
In our project folder, we can now write an
.editorconfig file and place in the following rules. This specifies that all files need to have spaces for indents and have to be 2 spaces per indent, and make newlines at the end of our code files, except with JSON files we'll be making later.
# root = true [*] indent_style = space indent_size = 2 end_of_line = lf charset = utf-8 trim_trailing_whitespace = true insert_final_newline = true [*.json] insert_final_newline = ignore
Now that we've specified some editor rules, let's improve our code from the previous lesson!
Using ES6 Code
The first thing we can do right away, is update our code using the ES6 syntax. ES6 is a massive improvement to JavaScript, as it includes features that makes the writing of JavaScript easier, more readable, and has powerful new methods.
First thing we can do is update how we write our variables. In our previous code, we only used
var to write our variables, but with ES6 we can write two different kinds of variables:
const and
let. Let's look at them!
const is used for variables that are constant and never change — you may have seen this before in languages like C++.
let, on the other hand, is used for variables that can change from their initial value.
const five = require('johnny-five'); // Set `lightOn` to true as a default since our LED will be on let lightOn = true; // Make a new Board Instance const board = new five.Board(); // When the board is connected, turn on the LED connected to pin 9 board.on('ready', function() { console.log('[johnny-five] Board is ready.'); });
In our code, we've updated our code to use JavaScript constants for Johnny-Five component objects, and we'll use
let for the
lightOn variable since it continuously changes.
Another thing we can update is any function statements — ES6 allows us the use the 'arrow' syntax. This makes the writing of functions simpler and more convenient.
// When the board is connected, turn on the LED connected to pin 9 board.on('ready', function() { console.log('[johnny-five] Board is ready.'); // Make a new Led object and connect it to pin 9 const led = new Led(9); // Make a new Button object assigned to pin 7 // We also need to say it is a pullup resistor! const pushButton = new Button({ pin: 7, isPullup: true, }); // Switch it on! led.on(); // If the button is pressed, toggle the LED on or off pushButton.on('down', () => { if (lightOn) { led.off(); lightOn = false; } else { led.on(); lightOn = true; } }); // REPL object so we can interact with our LED this.repl.inject({ // Control the LED via calling for the object led, // switchOn and switchOff functions to turn LED on and off using REPL switchOn: () => { if (lightOn) { console.log('[johnny-five] LED is already on!'); } else { led.on(); lightOn = true; } }, switchOff: () => { if (!lightOn) { console.log('[johnny-five] LED is already off!'); } else { led.stop().off(); lightOn = false; } }, }); // When the board is closing, stop any LED animations and turn it off this.on('exit', () => { led.stop().off(); console.log('[johnny-five] Bye Bye.'); }); });
There is one caveat though — using the arrow syntax does not give the function its own context, so you can't use
this! Because of...this...we can't apply the arrow syntax to the
Board object — it requires
function() for its
'ready' event, it needs the
Board context for specific things!
Getting ready for the future with Babel
Now, currently, our current ES6 code works in our Node.JS runtime fine, but it could do more! The problem is, since Node.JS uses Chrome's V8 engine for JavaScript, that means it's not entirely up to date with ES6's awesome and fancy fancy features. However, we can use them now using a great development tool called Babel!
Babel allows us to code with current and upcoming JavaScript features, and can also output our code to browser-ready code for extra support! This will be useful later when it comes to the browser, but right now, we need to use it for ES6's module syntax.
In your CLI, add babel using Yarn as a development dependency.
yarn add --dev babel-cli babel-preset-env
We've added two packages here — babel-cli and babel-preset-env. Before we use babel, we need to also add a
.babelrc file, which is needed to work with babel's API. In
.babelrc, we can tell it which preset to use with the following:
{ "presets": [ "env" ] }
Now that we've specified a preset for our Babel runtime, we can now use babel-cli! With babel-cli, we can compile our code into ES5 syntax, or run the Babel code from the CLI!
In our code, we'll change our first line of code to use the modules syntax.
import { Board, Led, Button } from 'johnny-five'; // Set `lightOn` to true as a default since our LED will be on let lightOn = true; // Make a new Board Instance const board = new Board(); // When the board is connected, turn on the LED connected to pin 9 board.on('ready', function() { console.log('[johnny-five] Board is ready.'); // Make a new Led object and connect it to pin 9 const led = new Led(9); // We also need to say it is a pullup resistor! const pushButton = new Button({ pin: 7, isPullup: true, }); });
Notice we've no longer stored the entirety of Johnny-Five into a variable, but we're importing specific objects into our file using the ES6
import syntax. Seem familiar to some of you who have used Python before?
Since we've imported specific objects, we can now replace lines of code. No longer do we need to use
new five.Board(), for example. We can just do
new Board() instead since we've imported that specific module!
Now, if we run the following in the CLI:
yarn babel-node index.js
Our Nodebot, with our updated ES6 code, should run without any problems! Neato!
Getting Into Node Scripts
We've made a lot of progress so far, but let's do something that will be useful for us later — adding a Node script. In your package.json file, you can add scripts that can then be run by Yarn instead of writing long commands using the CLI. This is especially useful for commands that can be very long and can be quite complicated!
Instead of writing
yarn babel-node index.js all the time, why don't we put it into a simple
start command?
"scripts": { "start": "babel-node index.js" }
Now all we need to do is run
yarn start, and it will run our babel command no problem! We'll be making more scripts later which will greatly benefit our development process.
Writing Consistent Code with ESLint
It helps a lot to write consistent code, and one way of achieving this is using a Linter. Linters use a specific set of rules when it comes to code, and it gives a guide on how to write them. For this, we're using ESLint, a very popular JavaScript Linter.
This will be useful for testing our code is correct and follows a set of rules for writing our code. This is especially helpful if the code will be released for production.
For this, we'll be using
eslint-airbnb-base, a set of ESLint rules for JavaScript instead of writing our own. It's worth noting
eslint-airbnb is the usual set of ESLint rules by Airbnb, but they contain rules for React and JSX. We only need
eslint-airbnb-base for our code. Simply install them by running the following command.
npm info "eslint-config-airbnb-base@latest" peerDependencies --json | command sed 's/[\{\},]//g ; s/: /@/g' | xargs yarn add --dev "eslint-config-airbnb-base@latest"
This will install the latest version of Airbnb's base ESLint rules, and all dependencies required for them. In your project, make a file called
.eslintrc.json, and this will contain our configuration of ESLint.
{ "extends": [ "airbnb-base" ] }
Now we can run ESLint to look through our code using a ruleset, by simply running
yarn
eslint. But uh-oh! Looks like we have a few errors!
These warnings and errors show problems in our code and need to be fixed! Some perfectly simple ones to fix, like making sure we have comma dangles in
return and making sure our file has a new line at the end, but others are a bit trickier since the code requires these features, like the
console.log()!
Not to worry, if we put this at the top of our file:
/* eslint-disable no-console */ import { Board, Led, Button } from 'johnny-five'; // Set `lightOn` to true as a default since our LED will be on let lightOn = true;
Then this will make our console error go away, especially since we need the
console.log() and we can do it just for this file. However, other features, such as using
function() in our code instead of the arrow syntax and padded blocks, may need to be all around our project. This is where we can specify rules of our own through
.eslintrc.json and override some of the ones set via airbnb-base.
{ "extends": [ "airbnb-base", ], "rules": { "func-names": "off", "space-before-function-paren": "off", "padded-blocks": "off" } }
We should now get no errors in our code! Nice and clean, and with consistent rules so we can write code that's tidy.
Type Checking with Flow
Another thing to add is type checking — with type checking, we can make sure our types are consistent, i.e. that what is expected is a string, then it must be a string, otherwise it will throw an error. For this, we'll be installing Flow.
Flow will check over the code we've written and make sure all the data types are correct and consistent throughout all of our code. The following command will install the Flow as well as plugins for Babel and ESLint.
yarn add --dev flow-bin babel-preset-flow babel-eslint eslint-plugin-flowtype
Once we've installed flow, we need to update our
.babelrc for it to work with Flow.
{ "presets": [ "env", "flow" ] }
We also need to update
.eslintrc.json to work with Flow as well.
{ "extends": [ "airbnb-base", "plugin:flowtype/recommended" ], "plugins": [ "flowtype" ], "rules": { "func-names": "off", "space-before-function-paren": "off", "padded-blocks": "off" } }
Finally, add an empty
.flowconfig file in your project — and Flow will be ready to go! You can then add an annotation to files you want flow to check — in this case, we'll add this to our Nodebot script.
// @flow /* eslint-disable no-console */ import { Board, Led, Button } from 'johnny-five';
Now simply run
yarn flow, and hopefully there will be no errors! We can also add a test script to our
package.json file to run ESLint and then run flow once it's done.
"scripts": { "start": "babel-node index.js", "test": "eslint ./ && flow" },
Well done! You've now added modern JavaScript and code testing tools to your Nodebot!
Overview of updating our code
We've covered a lot here, but it's worth reviewing what's been done here. Here, we've added tools like EditorConfig to have consistent writing to our code, updated our code with ES6 features, and added Babel to use new JavaScript features.
We've also added tools for testing, with ESLint for testing the consistency of our code following specific rules and Flow to make sure our code types are correct. This will be very useful when we add more code and in larger projects!
Next Time...
We're going to be making a Node Server and make a simple web interface for our Nodebot! We'll be covering the process of making a simple server, building a page with HTML, and using sockets. This is where it's going to get exciting, so don't miss out!
I bet you totally can't wait for the next part!
Are you really looking forward to learning more about JavaScript and Hardware? Why wait for it to be released publicly? Pledge to my Patreon and you'll be able to see it days before anybody else!
Alternatively, you can follow me on Twitter, like my page on Facebook, or subscribe to my mailing list to know when it's out! Also, if you want to help me out, you can donate to my PayPal, it helps me make these articles and code! See you next time! | https://www.hackster.io/IainIsCreative/javascript-with-hardware-part-two-using-modern-javascript-c6f818 | CC-MAIN-2018-43 | refinedweb | 2,283 | 72.76 |
Burst 🎆
Burst is a Swift and simple way to make elements in your iOS app burst.
Back in the day, Facebook Paper popularized a firework burst effect using CAEmitterLayers with buttons.
This library provides a firework effect using CAEmitterLayers contained in an easy-to-use customizable component, written in Swift.
If you enjoy this library, you may also like another CAEmitterLayer project, Twinkle.
5.0– Target your Podfile to the latest release or master
Quick Start
Burst is available and recommended for installation using the Cocoa dependency manager CocoaPods. You can also simply copy the
Burst.swift file into your Xcode project.
# CocoaPods pod "Burst", "~> 0.1.0" # Carthage github "piemonte/Burst" ~> 0.1.0 # SwiftPM let package = Package( dependencies: [ .Package(url: "", majorVersion: 0) ] )
Usage
The sample project provides an example of how to integrate
Burst, otherwise you can follow this example.
import Burst
// ... let button: BurstButton = BurstButton(frame: CGRect(x: 0, y: 0, width: 100, height: 100)) // ... extension ViewController { @objc func handleButton(_ button: BurstButton) { button.isSelected = !button.isSelected } }
Community
- Found a bug? Open an issue.
- Feature idea? Open an issue.
- Want to contribute? Submit a pull request.
Resources
- Core Animation Reference Collection
- Swift Evolution
- MCFireworksButton, Objective-C version
- Twinkle
- Twinkle for Android
- Shimmer
License
Burst is available under the MIT license, see the LICENSE file for more information.
Latest podspec
{ "name": "Burst", "version": "0.1.1", "license": "MIT", "summary": "Swift and easy way to make elements in your iOS or tvOS app burst", "homepage": "", "social_media_url": "", "authors": { "patrick piemonte": "[email protected]" }, "source": { "git": "", "tag": "0.1.1" }, "platforms": { "ios": "10.0", "tvos": "10.0" }, "source_files": "Sources/*.swift", "resources": "Sources/Resources/*.png", "requires_arc": true, "swift_versions": "5.0", "screenshots": "" }
Sun, 12 May 2019 10:17:11 +0000 | https://tryexcept.com/articles/cocoapod/burst | CC-MAIN-2019-22 | refinedweb | 286 | 53.27 |
This chapter describes how Oracle Transfer Pricing goes about calculating option costs. The chapter begins with an introduction to option costs and subsequently describes option cost theory and the calculation architecture.
This chapter covers the following topics:
The purpose of option cost calculations is to quantify the cost of optionality, in terms of a spread over the transfer rate, for a single instrument. The cash flows of an instrument with an optionality feature change under different interest rate environments and thus should be priced accordingly.
Consider a mortgage that may be prepaid by the borrower at any time without penalty. Here the lender has, in effect, granted the borrower an option to buy back the mortgage at par, even if interest rates have fallen in value. Thus, this option has a cost to the lender and should be priced accordingly.
Another example of an instrument with an optionality feature is an adjustable rate loan issued with rate caps (floors) which limit its maximum (minimum) periodic cash flows. These caps and floors constitute options.
When banks give such options to their borrowers, they raise the bank's cost of funding the loan and affect the underlying profit. Consequently, banks need to use the calculated cost of options given to their borrowers in conjunction with the transfer rate to analyze profitability.
Oracle Transfer Pricing uses the Monte Carlo technique to calculate the option cost. The application calculates and outputs two spreads, and the option cost is calculated indirectly as a difference between these two spreads.
Static spread
Option-adjusted spread (OAS)
The option cost is derived as follows:
option cost = static spread - OAS
The static spread is equal to the margin, and the OAS to the risk-adjusted margin of an instrument. Therefore, the option cost quantifies the loss or gain due to risk.
You can calculate option costs using the Transfer Pricing Process rule. See: Transfer Pricing Process Rules, Oracle Transfer Pricing User Guide.
This description of the option cost calculation architecture makes use of an example and assumes:
The instrument, taken in the example, pays K cash flows, each occurring at the end of the month.
Each month has the same duration in number of years, such as 1/12.
The discount factor calculation does not use the approximation for small option adjusted spread.
Related Topics
Overview of Transfer Pricing Option Cost
You can define neither the static nor the option-adjusted spread directly, as they are solutions of two different equations. Therefore, the system solves a simplified version of the equations. The static spread is the value ss that solves the following equation:
Here:
MV = market, book, or par value of the instrument
CF(k) = cash flow occurring at the end of month k along the forward rate scenario
f(j) = forward rate for month j
Delta T = length (in years) of the compounding period; hard-coded to a month, such as 1/12
In the Monte Carlo methodology, the option-adjusted spread is the value OAS that solves the following equation:
Here:
N = total number of Monte Carlo scenarios
CF(k, w) = cash flow occurring at the end of month k along scenario w
D (k, w, OAS) = stochastic discount factor at the end of month k along scenario w for a particular OAS
Note: Cash flows are calculated until maturity even if the instrument is adjustable. Otherwise the calculations would not catch the cost of caps or floors.
In real calculations, the formula for the stochastic discount factor is simplified.
Related Topics
Understanding Option Cost Calculation Architecture
In this example, the transfer pricing yield curve is the Treasury curve. It is flat at 5%, which means that the forward rate is equal to 1%. This example uses only two Monte Carlo scenarios:
Up scenario: One-year rate one year from now equal to 6%.
Down scenario: One-year rate one year from now equal to 4%.
The average of these two stochastic rates is equal to 5%.
The instrument record is two year adjustable, paying yearly, with simple amortization. Its rate is Treasury rate plus 2%, with a cap at 7.5%. Par value and market value are equal to $1.
For simplicity, this example assumes that the compounding period used for discounting is equal to a year, for example:
Delta t = 1
The static spread is the solution of the following equation:
1 = [0.07 / (1 + 0.05 + SS) ] + [(1 + 0.07) / (1 + 0.05 + SS)2]
The static spread is supposed to be equal to the margin. In this example:
static spread = coupon rate - forward rate =7%-5%=2%
Substituting this value (2% or .02) in the right side of the above equation yields:
This is equal to par, which proves that the static spread is equal to the margin.
The OAS is the solution of the following equation:
By trial and error you get a value of 1.88%.
To summarize:
option cost = static spread - OAS = 2%-1.88% = 12 basis points
Related Topics
Understanding Option Cost Calculation Architecture
The following graphic represents the option cost calculations process flow.
This exposition focuses only on the following steps:
Calculating forward rates
Calculating static spread
Calculating OAS
Related Topics
Understanding Option Cost Calculation Architecture
The cubic spline interpolation routine first calculates smoothed, continuously compounded zero-coupon yields Y(j) with maturity equal to the end of month j. The formula for the one-month annually compounded forward rate spanning month j + 1 is:
fj = exp [ (Yj) (j + 1) - Y(j) j ] -1
Related Topics
Option Cost Calculations Process Flow
You can calculate the static spread using the Newton-Raphson algorithm. If Newton-Raphson algorithm does not converge, which can happen if cash flows alternate in sign, you can revert to a brute search algorithm. However, this algorithm is much slower.
You can control the convergence speed of the algorithm by adjusting the value of the variable OptionCostSpeedFactor. This variable is defined through a profile option, Option Cost Speed Factor.
The default value is equal to one. A lower speed factor provides more accurate results. In all experiments, a speed factor equal to one results in a maximum error (on the static spread and OAS), which is lower than half a basis point.
To recap the Newton-Raphson algorithm, let x be the static spread. At each iteration m, the function F(m) is defined by the following equation:
Equation A
The algorithm is:
For performance reasons, the code utilizes a more complicated algorithm, albeit similar in spirit. This is the reason why the specific values for tol and MaxIterations, or details on the brute search are mentioned above.
Related Topics
Option Cost Calculations Process Flow
For fixed rate instruments, such as instruments having the same deterministic cash flows as the stochastic cash flows, the OAS is by definition equal to the static spread. This statement is true in the case of continuous compounding. For discrete compounding this approximation has a negligible impact on the accuracy of the results.
The OAS is also calculated with an optimized version of Newton-Raphson algorithm. See: Calculating Static Spread.
Note: While calculating OAS, the following substitution is made in the Newton-Raphson method: OAS = x(m)
Related Topics
Option Cost Calculations Process Flow
According to the option cost theory, when you select the market value of an instrument to equate the discounted stream of cash flows, the static spread is equal to margin and the OAS to the risk-adjusted margin of the instrument.
This exposition of option cost theory assumes that you have good knowledge of no arbitrage theory, and requires you to note these assumptions and definitions:
To acquire the instrument, the bank pays an initial amount V(0), the current market value.
The risk-free rate is denoted by r(t).
The instrument receives a cash flow rate equal to C(t), with
0 < = t < = T < = Maturity
The bank reinvests the cash flows in a money market account which, with the instrument, comprises the portfolio.
The total return on a portfolio is equal to the expected future value divided by the initial value of the investment.
The margin p on a portfolio is the difference between the rate of return (used to calculate the total return) and the risk free rate r.
The risk-adjusted expected future value of a portfolio is equal to its expected future value after hedging all diversifiable risks.
The total risk-adjusted return of a portfolio is equal to the risk-adjusted expected future value divided by the initial value of the investment.
The risk-adjusted margin m of a portfolio is the difference between the risk-adjusted rate of return (used to calculate the total risk-adjusted return) and the risk-free rate r.
More precisely,
Equation B
Related Topics
Overview of Transfer Pricing Option Cost
In a no-arbitrage economy with complete markets, the market value at time t of an instrument with cash flow rate C(t)is given by:
If expectation is taken with respect to the risk-neutral measure, the expected change in value is given by:
The variation in value is, therefore, equal to the expected value of the change dV plus the change in value of a martingale M in the risk-neutral measure:
dV(t) = Et[dV] + dM = rVdt - Cdt + dM
If I is the market value of the money market account in which cash flows are reinvested then:
Note that unlike V, this is a process of finite variation. By Ito's lemma:
dl = rIdt + Cdt
Let Sbe the market value of a portfolio composed of the instrument plus the money market account. We have:
dS = dV + dI = rSdt + dM
S(0) = V(0)
In other words, the portfolio, and not the instrument, earns the risk-free rate of return. An alternate representation of this process is:
dS/S = rdt + dN
Here N is another martingale in the risk-neutral measure. The expected value of the portfolio is then:
Here, <N, N> is the quadratic variation of N. This is equivalent to:
To define the martingale:
Z = eN-(1/2)<N,N>
This represents the relative risk of the portfolio with respect to the standard money market account, that is, the account where only an initial investment of V(0) is made. Then
In other words, the expected future value of the portfolio is equal to the expected future value of the money market account adjusted by the correlation between the standard money market account and the relative risk. Assuming complete and efficient markets, banks can fully hedge their balance sheet against this relative risk, which should be neglected to calculate the contribution of a particular portfolio to the profitability of the balance sheet. Therefore:
In this example, the risk-adjusted rate of return of the bank on its portfolio is equal to the risk-free rate of return.
Now suppose that another instrument offers cash flows C' > C.
Assuming complete and efficient markets, the market value of this instrument is:
The value of the corresponding portfolio is denoted by S' > S.
By analogy with the previous development, we have:
Again, the risk-adjusted rate of return of the bank on its portfolio is equal to the risk-free rate of return. Suppose now that markets are incomplete and inefficient. The bank pays the value V(0) and receives cash flows equal to C'. We have:
By definition of the total risk-adjusted return for Equation B, we have:
Equation C
Therefore, by analogy with the previous development,
dS' = (r + m)S' + dM
S(0) = V(0)
This can be decomposed into
Equation D
dV' = (r + m) V' dt - C' dt + dM'
Equation E
dS' = dV' + dI'
dI' = rI' dt + C' dt
Equation F
V' (0) - V(0)
The solution of Equation D and Equation F is:
By the law of large numbers, Equation D and Equation F result in:
OAS = m
In other words, the OAS is equal to the risk-adjusted margin.
Related Topics
Option Cost Theory
Static spread calculations are deterministic. Therefore, they are a special case of the equations in the previous section where all processes generally are equal to their expected value, and the margin p is substituted for the risk-adjusted margin m. The equivalent of Equation C is then:
Here f is the instantaneous forward rate.
The equivalent of Equation D and Equation F is then:
Equation G
dV' = (r + m)V' dt - C' dt
Equation H
dI' = rI' dt + C' dt
dS' = dV' + dI'
Equation I
V'(0) = V(0)
The solution of Equation G and Equation I is:
Equation J
Comparing Equation J and Equation A:
ss = p
In other words, the static spread is equal to the margin.
Related Topics
Option Cost Theory
The option cost calculation model is flexible and you can calibrate the calculations to your needs. See:
Nonunicity of the Static Spread
Calibrating the Accuracy of Option Cost Calculations
Related Topics
Overview of Transfer Pricing Option Cost
Nonunicity of the static spread means that sometimes more than one value can solve the static-spread equation. However, such cases extremely rare.
Take an instrument with a market value of $0.445495 for example. Suppose the instrument has two cash flows. The following table shows the value of the cash flows and the corresponding discount factors (assuming a static spread of zero).
The continuously compounded static spread solves the following equation:
0.9 Exp (-ss) -0.8 * 0.505025 Exp (-2ss) -0.445494 = 0
There are two possible solutions for the static spread:
static spread = 0.19%
static spread = $1.81
Related Topics
Option Cost Model Usage Hints
If you desire a better numerical precision than the default precision, you can take two actions:
Decrease the speed factor. See: Calculating Static Spread.
Increase the number of Monte Carlo scenarios.
Both actions increase the calculation time.
Related Topics
Option Cost Model Usage Hints
Copyright © 2006, 2009, Oracle and/or its affiliates. All rights reserved. | http://docs.oracle.com/cd/E18727_01/doc.121/e13528/T427784T427788.htm | CC-MAIN-2014-52 | refinedweb | 2,321 | 50.06 |
Apps are super-important for Ubuntu. Many of us have blogged about this in a more general sense, but I want to provide an update of what has been happening behind the scenes in the last few weeks.
Before I start, I want to reach out to you to be part of the upcoming Apps Sprint. Join us on #ubuntu-arb on irc.freenode.net from Monday, 2nd July to Wednesday, 4th July to learn more about Ubuntu apps, get involved in reviewing them and bringing more progress to this effort.
Getting apps into Ubuntu hasn’t been easy up until now, due to a number of circumstances. The review process takes long, some of the steps involved are cumbersome and the review queue has been filling up. There are reasons for this and there is lots of room for improvement.
Technical requirements
As apps are part of a separate repository, the Technical Board requires us to make some very specific namespace distinctions between “regular packages” and apps. This means that apps install into /opt/ and files which can’t go there (.desktop files, lens specific bits, etc.) have to include the package name to avoid possible file name clashes.
This looks pretty straight-forward and you could just rename files and move things around as part of the packaging, but quite often this means you have to make changes in the code as well (think of file locations, translation files, data directories or file look-ups). For a larger app this results in quite a bit of engineering work to make all the changes and make sure they work as intended.
At this point I want to credit the App Review Board (ARB) for some work they have been doing. They could easily just have said: “Rejected: Your app doesn’t do it right.”, but instead they helped app authors to get their app working. This was time-consuming, but a learning experience for everyone involved.
The good news is: quickly, which is our recommended tool to produce apps, has templates where everybody worked hard to get the templates and the code up to scratch, so that writing code for the extras repository gets easier.
Another piece of good news is: pkgme has progressed nicely and can help with the initial packaging of apps (useful if you don’t use quickly).
I very recently started working on a tool called arb-lint, which automates certain parts of the app review. This will make it possible to collect the knowledge of app policies into it, so new ARB members or app review helpers can easily find out what’s wrong with an app and how to fix it. You could even run it on your own app and find out what needs to be improved.
To sum this up: everybody knows that packaging and policies is quite boring to app authors. They just want to focus on producing great quality apps, they’re not interested in tweaking their build-system to adhere to all the policies. Don’t worry – this is understood. There’s still work to be done, but the tools are all progressing nicely.
App submission
During the 18 months the App Review Board has existed, the submission process has changed a number of times. The tool which is now being used is called myApps and a lot of handy improvements have gone into it in the last weeks and months.
One current problem is that some app authors submit tarballs of their apps, others provide bzr branches, others submit their app in a PPA. While we know how to use all of these tools, it makes the review process fairly inconsistent. This is why we came up with a service called apps-brancher, which downloads the app’s code, sticks it into a bzr branch, attempts to package it if necessary and pushes it to Launchpad.
Staffing
The current ARB members are all volunteers and working hard on apps and other places in Ubuntu. Some weeks ago the Ubuntu App Review Contributors team was set up, so that more active helpers can easily join the effort.
Summary
It is true. There is quite a backlog of apps. Some of them might be reviewed and approved quickly, others will need quite a bit of engineering to get into Ubuntu. Some might not be suitable for the extras repository at all.
As you have read above, there are numerous improvements in the works and there are very likely lots of other things which might result in a nice speed-up. Your help will be appreciated here!
The Ubuntu Apps Sprint
All of the above is why we want to invite you to the Ubuntu Apps Sprint from Monday to Wednesday (2nd-4th July). Join us in #ubuntu-arb on irc.freenode.net to:
- Improve quickly.
- Improve the apps-brancher.
- Improve arb-lint.
- Improve pkgme.
- Review and improve apps and get the queue under control.
- Learn from each other, hatch new plans and make apps just rock in Ubuntu.
Quite a number of experts from the ARB, from the quickly and pkgme teams and lots of others will be around to answer your questions. We hope you will get involved and help us out.
Apps will make Ubuntu even more beautiful. It’s just great to get to see so much creativity first. Contributing here is totally worthwhile.Read more | http://voices.canonical.com/tag/quantal/ | CC-MAIN-2016-44 | refinedweb | 900 | 71.04 |
DEBSOURCES
Skip Quicknav
sources / mnemosyne / 2.6.1+ds
# Mnemosyne: Optimized Flashcards and Research Project
Linux: Windows: []()
[]()
Mnemosyne is:
- a free, open-source, spaced-repetition flashcard program that helps you learn as efficiently as possible.
- a research project into the nature of long-term memory.
If you like, you can help out and upload anomynous data about your learning process (this feature is off by default).
Important features include:
- Bi-directional syncing between several devices
- Clients for Windows/Mac/Linux and Android
- Flashcards with rich content (images, video, audio)
- Powerful card types
- Flexible card browser and card selection
- Visualization to illustrate your learning process
- Extensive plugin architecture and external scripting
- Different learning schedulers
- Webserver for review through browser (does not implement any security features so far)
- Cramming scheduler to review cards without affecting the regular scheduer
- Core library that allows you to easily create your own front-end.
You can find a more detailed explanation of the features on the [webpage](), as well as the general [documentation]().
# Installation of the development version and hacking
If you just want to download the latest Mnemosyne release as a regular user, please see the [Download section]().
If you are interested in running and changing the latest code, please read on.
We use the git version control system and [Github]() to coordinate the development.
Please use a search engine to find out how to install git on your operating system.
If you are new to git and github, there are many tutorials available on the web.
For example, [this]() interactive tutorial.
## Working locally with the code
If you want to hack on Mnemosyne and propose your changes for merging later ('pull request'), first create an account on, or log into, Github.
Then, [fork]() the project on Github.
You now have your own copy of the Mnemosyne repository on Github.
To work with the code, you need to clone your personal Mnemosyne fork on Github fork to your local machine.
It's best to setup [ssh for Github](), but you don't have to.
Change to your working directory on the terminal and then clone your repository of Mnemosyne (in this example without ssh):
```
git clone<your-username>/mnemosyne.git
```
Let's also make it easy to track the official Mnemosyne repository:
```
git remote add upstream
```
It is best to create your own branches for your work:
```
git checkout -b <branch name>
```
Whenever you want, you can commit your changes:
```
git status
git add <files to add>
git commit -v
```
## Sharing your changes
At some point you may want to share your changes with everyone.
Before you do so, you should check make sure that you didn't introduce new test failures.
Then, you should check if changes were made to the original Mnemosyne repository on Github.
Your private fork on Github is not automatically updated with these changes.
You can get the most recent changes like this:
```
git fetch upstream
git checkout master
git merge upstream/master
```
If there are new changes, your repository now looks like this (each number symbolyses a commit):
```
your local master branch: ---1-2-3-4'-5'-6'-7'-8' (new changes from upstream)
|
your local feature branch: |-4-5-6 (your changes)
```
Before you push your branch, you should rebase it on master.
Rebasing takes all the changes in your branch (in the figure: 4-5-6) and tries to apply them on top of the master branch, so that we end up with a linear history:
```
your local master branch: ---1-2-3-4'-5'-6'-7'-8' (new changes from upstream)
|
your local feature branch: |-4-5-6 (your changes)
```
Rebase like this:
```
git checkout <branch name>
git rebase master
```
Follow the instructions (`git status` gives additional information).
Once you've successfully rebased your branch, push it to your Github account (we use `--force`, because we want to overwrite the existing branch on our private Github account):
```
git push origin --force <branch name>
```
To create a pull request for your changes, go to the Mnemosyne project page on Github and click on the pull request tab.
Click on 'New pull request' and follow the instructions.
Finally, some more background on the whole workflow can be found [here]().
## About the code base
To get an overview of how all the different bits of the library fit together, see the documentation in the code at `mnemosyne/libmnemosyne/docs/build/html/index.html`.
In order to keep the code looking uniform, please following the standard Python style guides [PEP8]() and [PEP257]().
## Running the development code
You can find instructions for Windows [here]().
The following instructions are valid for Linux and Mac (if you use homebrew or some other package manager).
### Runtime requirements
To start working on Mnemosyne, you need at least the following software.
- [Python]() 3.5 or later
- [PyQt]() 5.6 or later, including QtWebEngineWidgets.
- [Matplotlib]()
- [Easyinstall]()
- [cheroot]() 5 or later
- [Webob]() 1.4 or later
- [Pillow]()
- For Latex support: the `latex` and `dvipng` commands must be available (e.g., `TeXLive` on Linux, `MacTeX` on Mac, and `MikTeX` on Windows).
- For building the docs: [sphinx]()
- For running the tests: [nose]()
You can either run a development version of Mnemosyne by using your system-wide Python installation, or by using a virtual environment with virtualenv.
If your distribution provides and packages all necessary libraries in a recent enough version, using the system-wide Python install is probably easier and the recommended way.
### Using the system-wide python installation
First, install all dependencies with your distribution's package manager.
Then, run `make build-all-deps`, followed by `make` from the top-level mnemosyne directory.
This will generate all the needed auxiliary files and start Mnemosyne with a separate datadir under dot_mnemosyne2.
If you want to use mnemosyne interactively from within a python shell, run python from the top-level mnemosyne directory.
You can check if the correct local version was imported by running `import mnemosyne; print(mnemosyne.__file__)`.
### Using a local python installation
If your distribution does not provide all required libraries, or if the libraries are too old, create a virtual environment in the top-level directory (`virtualenv venv`), activate it (`source venv/bin/activate`) and install all the required dependencies with `pip install`.
Then, follow the steps of the previous paragraph.
### Running the test suite
You can always run the test suite:
```
make test
```
or:
```
python3 -m nose tests
```
Single tests can be run like this:
```
python3 -m nose tests/<file_name>.py:<class_name>:<method_name>
```
Nose captures `stdout` by default.
Use the `-s` switch if you want to print output during the test run.
You can increase the verbosity level with the `-v` switch.
Add `--pdb` to the command line to automatically drop into the debugger on errors and failures.
If you want to drop into the debugger before a failure, edit the test and add the following code at the exact spot where you want the debugger to be started:
```
from nose.tools import set_trace; set_trace()
```
# System-wide installation from source
For testing the development version it is not necessary to do a system-wide installation.
If you want to do so anyway, here are the instructions.
## Linux
Follow the installation instructions from above (install the dependencies, get the source code - either by cloning it from github, or by downloading and extracting the `.tar.gz` archive).
Then, run the following command from within the top-level directory of the repository (which is also the location of this `README.md` file):
```
sudo python setup.py install
```
Depending on your setup, you might need to replace `python` with `python3`. To test the installation, change to any other directory and run `mnemosyne`.
For example:
```
cd ~
mnemosyne
```
If you run into the issue of non-latin characters not displaying on statistic
plots, install ttf-mscorefonts-installer and regenerate the font cache of
matplotlib.
## Mac
- Download and install Homebrew (see)
- Open the Terminal.
- Make sure you are using the latest version of Homebrew:
```
brew update
```
- Patch the python3 formula so you'll get python 3.6 and not a later version (pyinstaller still requires python 3.6):
```
brew uninstall python3
brew edit python3
# replace the file with the contents of and save it
brew install python3
brew pin python3
```
- Patch the qt formula so you'll get qt 5.10.0 (matching PyQt5) and not a later version:
```
brew uninstall qt
brew edit qt
# replace the file with the contents of and save it
brew install qt
brew pin qt
```
- Install the remaining dependencies for Mnemosyne, using a python virtual environment to isolate python dependencies.
```
brew install mplayer
pip3 install virtualenv
virtualenv --python=python3 venv
source venv/bin/activate
pip install webob tornado matplotlib numpy sip pillow cheroot pyinstaller pyqt5==5.10
```
- Build it (while still using the python virtual environment):
```
export QT5DIR=/usr/local/opt/qt # help pyinstaller find the qt5 path
make clean
make macos
```
- Test the new application (back up your data directory first!):
```
open dist/Mnemosyne.app
```
- Optionally drag and drop this new app to /Applications. | https://sources.debian.org/src/mnemosyne/2.6.1+ds-1/README.md/ | CC-MAIN-2020-05 | refinedweb | 1,497 | 59.03 |
Hi there! Welcome to the first ever post on the Quirrel blog!
Quirrel is an open-source task queueing service that's easy to use and sparks joy.
In this tutorial, we're going to use it to create a water drinking reminder - because if you're anything like me, it's easy to forget sometimes.
To follow this tutorial, you should already know a little bit about Next.js.
If you’re totally new to it, check out the Next tutorial first.
Let's get started!
The first thing we're gonna do is to create a new Next.js project.
```bash
$ npx create-next-app water-reminder
```
Open up `water-reminder` in your favourite editor and run `npm run dev` to start up the development environment.
Take a look into `pages/index.js` and replace its content with the following:
```jsx
// pages/index.js
export default function Home() {
  return (
    <main>
      <h1>Water Drinking Reminder</h1>
      <p>I want to be reminded under the following e-mail:</p>
      <form
        onSubmit={(evt) => {
          evt.preventDefault();
          const formData = new FormData(evt.target);
          const email = formData.get("email");
          alert(email);
        }}
      >
        <input name="email" type="email" placeholder="E-Mail" />
        <button type="submit">Submit</button>
      </form>
    </main>
  );
}
```
It contains some markup and a simple form that lets you submit the e-mail address your reminders should be sent to.
At the moment, it will just alert the typed email.
In your browser, open up localhost:3000.
It should look similar to this:
## Submitting the form to the backend
Set up a new API Route by creating `pages/api/setupReminder.js` and adding the following code:
```js
// pages/api/setupReminder.js
export default async (req, res) => {
  const email = req.body;
  console.log(`I'll setup the reminder for ${email}.`);
  res.status(200).end();
};
```
Now instead of `alert`-ing the form value, let's post it to the newly created API route.
Go back to `pages/index.js` and replace the following:
```diff
 // pages/index.js
 onSubmit={(evt) => {
   evt.preventDefault();
   const formData = new FormData(evt.target);
   const email = formData.get("email");
-  alert(email);
+  fetch("/api/setupReminder", { method: "POST", body: email });
 }}
```
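The `fetch` call above is fire-and-forget: if the request fails, the user never finds out. As an optional hardening step (not required for the rest of the tutorial), you could await the response and surface the outcome. Here's a sketch:

```js
// Await the response and tell the user whether setting up the reminder worked.
async function submitEmail(email) {
  const res = await fetch("/api/setupReminder", {
    method: "POST",
    body: email,
  });
  alert(res.ok ? "Reminder set up!" : "Something went wrong, please try again.");
}
```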
Submitting the form will now cause the e-mail to be printed to your development console:
Now that we've hooked up the form with the API Route, let's get into Quirrel and E-Mail sending.
## Setting up Quirrel
Quirrel is a task queueing service.
It takes requests à la "call me back at `/api/queue` in 10 minutes", stores them, and makes sure to call back `/api/queue` as requested.
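In code, such a request boils down to a single method call on a queue object. Roughly like this (we'll build the real version for our reminder below):

```js
// "Call me back with this payload in 10 minutes."
await queue.enqueue(payload, { delay: 10 * 60 * 1000 });
```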
What's awesome about Quirrel is that it can run fully locally, for example in a Docker container.
That's what we're gonna set up for testing.
Paste the following into `docker-compose.yml`:
```yaml
# docker-compose.yml
version: "3.7"
services:
  quirrel:
    image: ghcr.io/quirrel-dev/quirrel
    environment:
      REDIS_URL: redis://redis
    ports:
      - "9181:9181"
  redis:
    image: redis
```
Now run `docker-compose up` to start Quirrel (you need Docker and Docker Compose installed for this to work).
Hooking up your Next.js application to Quirrel works by installing the Next.js client library:
```bash
$ npm install @quirrel/next
```
Now create a new queue by creating `pages/api/queues/reminder.js` and typing the following:
// pages/api/queues/reminder.js import { Queue } from "@quirrel/next"; export default Queue( "queues/reminder", // because it's reachable // under /api/queues/reminder async (recipient) => { console.log(`Sending an E-Mail to ${recipient}`); } );
Queue takes two arguments: The first one is it's API location, and the second one is the worker function.
Whenever a job is executed, the worker function is called.
To use our newly created Queue, simply import it from the API Route:
// pages/api/setupReminder.js + import reminderQueue from "./queues/reminder"; // 👆 don't forget this export default async (req, res) => { const email = req.body; - console.log(`I'll setup the reminder for ${email}.`); + await reminderQueue.enqueue( + email, + { + id: email, + delay: 30 * 60 * 60 * 1000, + repeat: { + every: 30 * 60 * 60 * 1000, // 30 minutes + times: 16, // 16 * 30min = 8h + }, + } + ); res.status(200).end(); };
Calling
.enqueue will schedule a new job.
The first argument is the job's payload while the second argument contains some options:
idprevents having multiple reminders for the same e-mail address
repeatmakes the job execute on twice an hour, for a duration of 8 hours
delayadds an initial delay of 30 minutes, so the first job isn't executed immediately
To verify that this works, open up the Quirrel Development UI at ui.quirrel.dev.
It will connect to your local Quirrel instance and show all pending jobs in the "Pending" tab:
If it doesn't connect, that may be because you're using Safari. Try a different browser instead.
Submitting your email to the form at
localhost:3000 will add a new job to the UI, and pressing "Invoke" will execute the job.
You'll now be able to see
Sending an E-Mail to XYZ in your development logs.
Because it's a repeated job, it will be re-scheduled immediately, until it's been executed for the 16th time.
Before we proceed with the last part of the tutorial: Stand up, go to the kitchen and grab a glass of water 💧
Let's hook up E-Mail!
Now that the Queue is working, let's hook up the final thing: E-Mail!
Run
npm install nodemailer and add your SMTP setup code to your reminder queue:
// pages/api/queues/reminder.js import { Queue } from "@quirrel/next"; + import { createTransport } from "nodemailer"; + const mail = createTransport({ + host: "smtp.ethereal.email", + port: 587, + auth: { + user: "randall.renner66@ethereal.email", + pass: "Dp5pzSVa52BJwypJQm", + }, + }); ...
If you don't have any SMTP credentials at hand, you can get some demo ones at ethereal.email.
Then simply switch out the
console.log call with a real email dispatch:
... export default Queue( "queues/reminder", async (recipient) => { - console.log(`Sending an E-Mail to ${recipient}`); + await mail.sendMail({ + to: recipient, + from: "waterreminder@quirrel.dev", + subject: "Remember to drink some water!", + text: "..." + }) } );
That's it! Now our app is fully working.
It may not be the best water reminder service ever, but it's your very own one.
Here are some ideas to improve it further:
- Make duration and interval of the reminder configurable
- Allow users to unsubscribe using a link in the email
- Add some styles, maybe using Tailwind CSS
Deploying this to production is easy using the managed Quirrel service.
Simply follow this guide: Deploying Quirrel to Vercel
Conclusion
We've built a working water reminder in little under an hour.
You can see the finished project here: quirrel-dev/water-reminder-demo
If you've got experience with well-known task queueing libraries like beanstalkd or SideKiq, you may have noticed how easy-to-use Quirrel is.
The highly integrated client libraries for popular frameworks and the managed solution available, Quirrel is a great choice for JAMStack users.
And if you want to host Quirrel yourself, the MIT License allows you to do so.
Discussion | https://dev.to/quirrel/building-a-water-drinking-reminder-with-next-js-and-quirrel-1ckj | CC-MAIN-2020-45 | refinedweb | 1,156 | 57.87 |
2012-12-06T01:38:40Z Ruby Issue Tracking System Ruby trunk - Feature #7519: Module Single Inheritance 2012-12-06T01:38:40Z matz (Yukihiro Matsumoto) matz@ruby.or.jp <ul><li><strong>Status</strong> changed from <i>Open</i> to <i>Rejected</i></li></ul><p>I think providing new inheritance system for modules is overkill for allowing module method inheritance.<br> It would make the role of modules in the language unclear.</p> <p>If I were you, I'd make a proposal like another version of #include, or adding optional (keyword) argument to #include.</p> <p>Matz.</p> Ruby trunk - Feature #7519: Module Single Inheritance 2012-12-06T03:21:12Z alexeymuranov (Alexey Muranov) <ul></ul><p>Maybe a solution would be to allow a second method table in modules, so that including a module would also add singleton methods to the base? I suggested it for classes here: <a class="issue tracker-2 status-1 priority-4 priority-default" title="Feature: A mechanism to include at once both instance-level and class-level methods from a module (Open)" href="">#7250</a>, but it can without any change work for modules. (It is different from just inheriting singleton methods like in class inheritance.)</p> <p>However i would see nothing wrong with inheriting any object from any other object:</p> <p>x = Object.new</p> <p>def x.foo<br> "Foo"<br> end</p> <p>object y < x<br> end</p> <p>y.foo # => "Foo"</p> <p>:)</p> <p>P.S. OOP is object-oriented, not class-oriented :).</p> Ruby trunk - Feature #7519: Module Single Inheritance 2012-12-12T13:10:18Z trans (Thomas Sawyer) <ul></ul><p>=begin</p> <blockquote> <p>I think providing new inheritance system for modules is overkill for allowing module method inheritance.<br> It would make the role of modules in the language unclear.</p> </blockquote> <p>It's clear to me -- to be a pain in the butt ;)</p> <p>I find the whole "def self.included(base); base.extend ClassMethods; end" to be about the worst anti-pattern I have ever seen. Modules are pretty well useless and the class method thing makes it that much worse. Even when I try to use them, in the end, they almost always end up getting factored out. (I'm not talking about namespaces, of course. For that they do their job.)</p> <p>I actually thought you might like this particular suggestion b/c it keeps a strong stance on Single Inheritance. I really don't think it would have any effect whatsoever on what people perceive as the role of modules. That has everything to do with the lack <code>new</code> and nothing else.</p> <blockquote> <p>If I were you, I'd make a proposal like another version of #include, or adding optional (keyword) argument to #include.</p> </blockquote> <p>I'm not so sure that is a good idea. A module should be an encapsulation of reusable behavior. It doesn't make sense to leave that to the "consumer". It would be like asking for a way to include a module but only including the methods that start with the letter <code>s</code>.</p> <p>It I were to suggest anything along these lines it would be that one could specify which class-methods are visible or not. e.g.</p> <p>module M<br> def self.a; "a"; end</p> <pre>visible def self.b; "b"; end </pre> <p>end</p> <p>class C<br> include M<br> end</p> <p>C.a #=> error<br> C.b #=> "b"</p> <p>However, I have my doubts that's really the best answer either. It adds more complexity to the language. And complexity is the enemy of productivity. 
I'd still tend to think it would be better if all class methods were visible, b/c one can easy tuck away methods that one did not want visible in another namespace. e.g.</p> <p>module M<br> module S<br> def self.a; "a"; end<br> end</p> <pre>def self.b; "b"; end </pre> <p>end</p> <p>class C<br> include M<br> end</p> <p>C.a #=> error<br> C.b #=> "b"</p> <p>It's a trade-off, of course, but the later is so much simpler it seems hard to justify any of the former language modifications.</p> <p>But that's actually OT. Whether modules can have a (single) inheritance chain like classes is a separate question. Personally I very much like the symmetry.<br> =end</p> | https://bugs.ruby-lang.org/issues/7519.atom | CC-MAIN-2019-22 | refinedweb | 762 | 57.87 |
When building web sites in ASP.NET 2.0 that use the membership features you're inevitably going to use the Profile to store some custom properties (you know the stuff; Address, email, Theme, etc). You're probably also going to have an 'update your settings' type page to allow users to edit their profile properties, so you code a page with a bunch of TextBox controls, setting their values from the Profile, then a button to update the profile from the entered data. It's just a bit tedious, especially if these controls are within a template, where you end up doing a ton of FindControl. Ugh.
So in an attempt to make the code easier I've created a ProfileDataSource control, which simply iterates through the custom Profile properties and exposes them as a data source. This allows you to use a DetailsView (or FormView) to provide the display/edit features.
The data source is pretty simple, and hasn't had much in the way of testing, but works fine. If you intend to use it I suggest a thorough test. There are things it does and doesn't do. It does take into account read only properties, so won't update those. It doesn't however, take into account the different between anonymous/authenticated properties. For example, you can bind to all properties and update them while an anonyous user even if those properties are not marked as allowAnonymous. The framework stops the property being updated, but the datasource doesn't. I decided not to impose that as a restriction.
You can get the code from here. There's a test page, along with a couple of ProfileDataSource controls. One is application specific and has the profile properties explicity defined, while the other is generic. I've included both just to show you how it can be done. Just place the .vb files in the Code directory, and register the namespace/tagprefix on the page, and use it like any other data source control.="Admin/SiteAdmin.aspx" roles="Administrator" /> <siteMapNode title="UserAdmin".
The SiteMap architecture of ASP.NET 2.0 allows roles to be defined for each menu item, thus restricting their view to only users who are in that role. This requires the securityTrimming attribute to be added to the siteMapProvider, but I'd never been able to get this to work, and assumed it was a just a simple bug in the beta.
I now learn that it's not a bug, and the solution is pretty simple. Danny Chen explains it in this forum post. Simple really.
I've been digging into CSS menus for a while, and when I received the first previews of ASP.NET 2.0 I wrote a really simple CSS menu. Since I'm doing a talk at ASP Connections on Navigation in ASP.NET 2.0. I'm building a Database SiteMap Provider and a new Menu control that's lightweight - small to render and no viewstate. I decided to modify my menu control to sit properly on top of the site map architecture; it's take a few days to get my head around what I really need to do. It's now working and in trying to pretty it up I came across this article about CSS Menus. A really sweet solution for CSS based hierrachical menus.
Spooky. Bob is talking about UDTs in SQL Server Management Console (SSMS - I've hijacked his acronym). I've had the very same problem, and couldn't work out why the SSMS couldn't see the UDT, but that using ToString() explicitly worked. I had to mail the PM for UDTs to get it answered. It hit me doubly as I also have a User Defined Aggregate for explicit aggregation of the UDT, and that didn't work in SSMS either. Same problem.
During this early testing phase when we're doing lots of build/deploy/test (call it iterative development, it sounds better) this is a royal pain. If you want to keep deploying to SQL Server you probably don't want the assembly in the gac. I suppose the answer is to spend more time up front designing and getting your code right, but that's not always the best way to learn. Well, not for me anyway.
Two books worth mentioning. A First Look at SQL Server 2005 for Developers is damn fine. A ton of excellent material on the new version of SQL Server (codename "Yukon"). I saw this in early draft and found it invaluable for some of the stuff I'm doing.
ASP.NET v. 2.0 - The Beta Version is an update to the First Look book, for beta 1 of ASP.NET 2.0. If you're thinking about getting into .NET 2.0, do not buy the old version as that was for the technical preview and there have been many changes. The new version has more material and a new set of samples: dowloadable or runnable online.
I'm now going to get back to work, after a week of doing bugger all. Still, I had a good excuse.
So, working with Yukon at the moment and wasn't having a very good week. Finally got some code working , but some of my conversions weren't giving the right values. This is GIS stuff, so I'm dealing with Latitude & Longitude, converting to decimal values, calculating distances etc. I couldn't work out why things weren't right, and spent hours debugging. Finally, with Alex's help, we realised I was using the wrong type - I should have been using decimal to preserve accuracy in calculations.
Now I understand rounding and the instrinsic problems of storing floating point numbers, but it's just so painful having to do lots of type conversions/casting just so you can get accurate numbers. I mean I'm only dealing with a few decimal places so you'd kinda expect things to be accurate, but oh no. As a good example, start a project in VS. Doesn't matter what type, but break into the debugger. View the immediate window and type 9.2-9 - what do you expect? By and large I'm a fairly optimistic guy, and although my maths skills are pretty poor, even I knew it should be 0.2. But I was wrong. Now call me a pedant. Call me stupid. Call me naive, but don't call me wrong for wanting to believe that such a simple calculation should give an incorrect answer. At what level should we expect rounding errors to occur?
On a side note (and perhaps not seriously, but then again perhaps I am serious), why is it that we have rounding errors at all? Why is there any need to store floating point numbers as actual floating point numbers? After all, they could be stored as integers, all calculations could be done on intergers and accuracy would be preserved. The decimal point is really only needed for display purposes. Of course, it would mean radical changes to every computing platform, but heck, there's no gain without large restructing of the world as we know it.
I've been holding off on getting a new laptop, but finally went for it - a Dell Inspiron 510m, excellent screen, big disk and lots of memory. And very nice it is to. Since the disk is big I'm going for 3 boot partitions: a stable one running .net 1.1, a .net 2.0 beta 1 partition, and a general test partition (for any other beta stuff that comes along), plus a large partition for data. I've isntalled the stable one and decided to clone it for the others to save some time. I've not used cloning software before, but decided to try Acronis TrueImage; it has a nice Copy Disc option. So it whirs away, I boot into the new partition, generate a new sid and everything looks fine.
However, I go to install VS.NET 2.0 and the default install directory is C:, which is my stable partition. I look at the environment variables and some of them point to C: still, as does almost everything with a full path that's stored in the registry. Hmm, not exactly what I had planned. This disc cloning is great if you don't want the new parition to have a different drive letter. So my options are:
Views people? What have others done?
[update] Of course, option 1 isn't available as it's the system partition, and thus can't be renamed. Sigh.
My friend Lou has finally joined the throngs of self-employed with her new graphic design service Frog Box. She designed my web site plus business cards, and has come up with some cool stuff for a new Al and Dave design, which we might eventually get time to implement. She's very talented and did get accepted as a storyboard designer for a new film, but turned it down because when you're starting out you can't afford to work for nothing (related news: their previous film, Dan had a hand in). It's funny but when you look at talented designers you realise how much nicer they can make sites look than most of us programmers.
I spent 3 days last week on a course at DevTrain, the company I'm going to start doing training for. This was the Web Apps with C# course, aimed at beginners. Now I'm not a beginner but I am going to train this course, so I sat in to see how the current trainer (and author of the course - they are all custom written) did it. It was an interesting time, as I was worried I might be bored. After all the material isn't new to me, and I used to be a trainer years ago - an MCT training VB, SQL, Exchange, NT, etc. It's always interesting to see other presenters. I see plenty at conferences but very few on courses. Actually none on courses, since I don't go on courses. But, you learn things about presenting just from watching others.
I wasn't the least bit bored. Now Steve didn't have an outlandish style, just fairly normal presenting with enough anecdotes to keep us entertained. Careful explanation plus real world examples. I found myself concentrating quite hard and enjoying it much more than I thought I would.
I learned two things.
if (lb.SelectedIndex >= 0)
while (lb.SelectedIndex >= 0)
Is it just me or does anyone else not like the new search features in the help system in Visual Studio 2005 and SQL Server 2005? What don't I like:
OK, rant over. Carry on. | http://aspadvice.com/blogs/dsussman/default.aspx | crawl-002 | refinedweb | 1,804 | 72.46 |
- 15 Mar, 2018 1 commit
Carried over from
- 14 Mar, 2018 16 commits
Breadcrumb on Admin Runner page Closes #43717 See merge request gitlab-org/gitlab-ce!17431
- Zeger-Jan van de Weg authored
Prior to this change, this method was called add_namespace, which broke the CRUD convention and made it harder to grep for what I was looking for. Given the change was a find and replace kind of fix, this was changed without opening an issue and on another feature branch. If more dynamic calls are made to add_namespace, these could've been missed which might lead to incorrect bahaviour. However, going through the commit log it seems thats not the case.
- 13 Mar, 2018 23 commits
- Robert Speicher authored
Revert "Merge branch 'sh-filter-secret-variables' into 'master'" See merge request gitlab-org/gitlab-ce!17733
-
- Robert Speicher authored
Upgrade GitLab Pages to v0.7.1 See merge request gitlab-org/gitlab-ce!17732
- Nick Thomas authored
- Clement Ho authored
Fix markdown table showing extra fake column v1 Closes #44024 See merge request gitlab-org/gitlab-ce!17669
- Mike Greiling authored
fix timescale prometheus charts overlapping Closes #43458 See merge request gitlab-org/gitlab-ce!17657
- Douwe Maan authored
Specify installation type for link See merge request gitlab-org/gitlab-ce!17713
- Douwe Maan authored
Resolve "lib/gitlab/git/gitlab_projects.rb does not respect Gitlab.config.git.bin_path" Closes #44161 See merge request gitlab-org/gitlab-ce!17693
Resolve "List Gitaly calls and arguments in the performance bar" Closes #43805 See merge request gitlab-org/gitlab-ce!17564
- Tim Zallmann authored
Add frontend security documentation See merge request gitlab-org/gitlab-ce!17622
Make commit pipeline accessible on file page Closes #44152 See merge request gitlab-org/gitlab-ce!17716
This is as important as SQL timings, and much more important most of the time than GC, Redis, or Sidekiq.
The same as the SQL queries, show the details of Gitaly calls in the performance bar, as a modal that can be opened in the same way.
- Fatih Acet authored
Resolve "Projects::MergeRequestsController#show is slow (implement skeleton loading)" Closes #35475 See merge request gitlab-org/gitlab-ce!15200
- Simon Knox authored
- Jacob Schatz authored
Resolve "Document webpack_bundle_tag replacement method" Closes #43720 and #42704 See merge request gitlab-org/gitlab-ce!17706
Resolve "Wrong button has the loading state when submitting a comment in issues" Closes #44149 See merge request gitlab-org/gitlab-ce!17698
- Andrey Maslennikov authored | https://foss.heptapod.net/heptapod/heptapod/-/commits/6a42d517b42241852b712eabfbe051d7a48f05e7 | CC-MAIN-2022-21 | refinedweb | 411 | 56.35 |
1. Debug the program with breakpoints in pycharm to understand the logic of each line of code
How to enable debug debugging:
if name = = 'main': (referenced in the figure below))
The meaning of setting breakpoints: breakpoint debugging is actually that you mark a breakpoint at a certain place in the code during the automatic operation of the program. When the program runs to the breakpoint you set, it will be interrupted. At this time, you can see all the program variables that have been run before
Common shortcut keys:
step over (F8 shortcut key): during single step execution, when a sub function is encountered in the function, it will not enter the sub function for single step execution, but stop the whole sub function after execution, that is, take the whole sub function as one step. The effect is the same as step into when there is no sub function. Simply put, the program code crosses the sub function, but the sub function will execute and will not enter.
Step into (F7 shortcut key): when stepping into execution, you will enter and continue to step into execution when encountering sub functions, and some will jump into the source code for execution.
step into my code (Alt+Shift+F7 shortcut): when stepping into execution, you will enter and continue stepping into the sub function, and will not enter the source code.
step out (Shift+F8 shortcut): if you enter a function body, you see two lines of code and don't want to see it, jump out of the current function body and return to the place where this function is called, that is, you can use this function.
Resume program(F9 shortcut): continue to resume the program and run directly to the next breakpoint.
The general operation steps are to set the breakpoint, debug, and then F8 step-by-step debugging. When you encounter the function you want to enter, F7 goes in, figure it out, shift + F8, skip the place you don't want to see, directly set the next breakpoint, and then F9
2. Full connection layer: nn.Linear usage (written clearly in official documents)
3. nn.Module use
To build a neural network under the pytorch framework, you need to define a class and inherit the torch.nn.Module module. The general structure is as follows:
class Model(nn.Module): def __init__(self): super().__init__() ... def... return def forward(self, inputdata): ... return
In essence, it is also a class. After inheriting the Module, it has some special properties. You can clearly explain it with a basic example code:
import torch import torch.nn as nn #The input model is a 3 * 5 matrix inputdata = torch.rand(3, 5) class Testnet(nn.Module): def __init__(self): super().__init__() self.fc = nn.Linear(5, 2) def forward(self, inputdata): out = self.fc(inputdata) return out test = Testnet() out = test(inputdata) print(out)
Output:
tensor([[-0.7191, 0.6140], [-0.1250, 0.7199], [-0.0656, 0.5232]], grad_fn=<AddmmBackward>)
4. Understanding of torch.rand() multidimensional tensor
torch.randn(2,3,4,5)#First, this is the 4th dimension, then 4 and 5 are the innermost layer, 3 is the outer layer, and 2 is the outermost layer. There are two (simple understanding: "[[[" has two large parts, each part has three '[[', there are four '[' in '[[', and the dimension is 4 * 5) tensor([[[[-0.5819, 1.0541, 1.2122, 0.1487, -1.3239], [ 1.1498, 0.1537, 1.3365, -0.5458, 2.4623], [-0.2419, 0.9619, -1.7176, 0.6234, -0.1420], [ 0.2708, 1.2968, 0.3590, 1.4835, -0.4068]], [[-1.5560, 1.2271, 0.1556, -0.7206, -3.6874], [-1.2283, -0.4955, -0.0591, 0.7332, -0.3467], [-1.0715, -0.8225, -0.3180, -0.9774, -0.6425], [ 0.0962, -0.4811, -1.2161, 0.6909, -0.4036]], [[ 1.9039, 0.0585, 0.5491, -0.3894, 0.0350], [-0.1628, 0.0697, -0.2491, 1.1777, 1.3530], [-0.3784, -0.0743, -0.6657, -0.5710, 0.2267], [-1.9573, 0.1118, 1.4209, 0.3095, -1.0523]]], [[[ 1.1964, 0.8547, -0.7742, -0.5260, -0.1902], [-0.2960, 0.7014, -0.1351, 1.3705, 0.9462], [-0.4928, 0.3687, -0.8138, -0.3793, 1.2148], [ 0.7936, 0.6168, -0.3903, 0.4030, -1.4236]], [[-0.5191, -1.3978, -0.7809, 0.1161, -0.5701], [ 1.7385, -0.8792, -0.7399, 0.4146, -0.2882], [ 1.6423, -0.2982, 0.5043, 0.8092, 1.5948], [ 1.6171, 0.2906, -0.2790, -0.4758, -1.4615]], [[-0.8722, 0.7420, 0.3168, 0.9529, -0.7665], [-0.4354, -0.4272, 0.7883, -2.2822, -0.2489], [-0.3527, -0.9323, 0.2115, 0.6318, 0.6811], [-0.6773, -0.3727, 0.2425, -1.0979, -0.7501]]]])
5. Meaning of torch.nn.softmax parameter | https://programmer.ink/think/september-deep-learning-summary.html | CC-MAIN-2021-39 | refinedweb | 808 | 67.49 |
# In-Memory Showdown: Redis vs. Tarantool

In this article, I am going to look at Redis versus Tarantool. At a first glance, they are quite alike — in-memory, NoSQL, key value. But we are going to look deeper. My goal is to find meaningful similarities and differences, I am not going to claim that one is better than the other.
There are three main parts to my story:
* We’ll find out what is an in-memory database, or IMDB. When and how are they better than disk solutions?
* Then, we’ll consider their architecture. What about their efficiency, reliability, and scaling?
* Then, we’ll delve into technical details. Data types, iterators, indexes, transactions, programming languages, replication, and connectors.
Feel free to scroll down to the most interesting part or even the summary comparison table at the very bottom and the article.
### Content
1. Introduction
* What is an in-memory database, or IMDB
* Why IMDBs are needed
* What is Redis
* What is Tarantool
2. Architecture
* Performance
* Reliability
* Scaling
* Data schema validation
3. Technical features
* Supported data types
* Data eviction
* Iteration with keys
* Secondary indexes
* Transactions
* Persistence
* Programming languages for stored procedures
* Replication
* Connectors from other programming languages
* When not to use Redis or Tarantool
* Ecosystem
* Redis advantages
* Tarantool advantages
4. Summary
5. References
1. Introduction
---------------
### What Is an in-Memory Database, or IMDB?
Redis and Tarantool are in-memory technologies. What is an in-memory database, or IMDB? This is a database that stores all the data in RAM. Its size is limited by the RAM capacity of the node. Although limiting the data amount, this greatly increases the speed.
In-memory DBs can store persist on disks. The node can be restarted without losing the information. Today, in-memory DBs can already be used as the main storage in production. For instance, Mail.ru Cloud Solutions uses Tarantool as the main DB to store metadata in their S3 compatible object repository.
In-memory DBs are also used for high-speed data access, capable of 10,000 requests per second. These can be the cases of large spikes in traffic on IMDB on the day of the Zack Snyder's Cut of Justice League release, Amazon a week before Christmas, or Uber on Friday night.
### Why in-Memory DBs Are Needed?
**Cache**. In-memory DBs are often used as a cache for disk databases. RAM is way faster than any disk (even an SSD). However, caches restart, crash, can be inaccessible through the network, suffer from memory shortage and other issues.
Eventually, caches learned to provide persistence, reservation, and sharding.
* **Persistence** means caches store data on disks. After restart, the status is restored without addressing the main storage. If we don’t do this, addressing a cold cache will take a really long time and can even result in the main DB crashing.
* **Reservation** means caches can replicate data. If one node crashes, the second one will receive the queries. The main storage won’t crash because of overloading, as the reserve node will be there.
* **Sharding** means that if hot data doesn't fit into a node's RAM, several nodes are used in parallel. This is horizontal scaling.
Sharding is a large-scale system. Reservation is a reliable system. Together with persistence, we get clustered data storage, which can be used to store terabytes of data and access it at remarkable speed, even at 1,000,000 RPS.
**OLTP** stands for Online Transaction Processing. In-memory solutions are fit for tasks of this type thanks to their architecture. OLTP comprises many short online transactions like INSERT, UPDATE, DELETE. The main thing with OLTP systems is fast procession of queries and ensuring data integrity. Efficiency is usually measured in RPS.
### What’s Redis?
* Redis is an in-memory data structure store.
* Redis is a key-value store.
* If you Google «database caching,» almost every article will mention Redis.
* Redis only offers primary key access and doesn’t support secondary indexing.
* Redis contains a Lua stored procedure engine.
### What's Tarantool?
* Tarantool is an in-memory computing platform.
* Tarantool is a key-value store that supports documents and a relational data model.
* It has been designed for hot data — MySQL caching in a social network, but gradually it became a fully-featured database.
* Tarantool can provide any number of indexes.
* Tarantool supports stored procedures in Lua, too.
2. Architecture
---------------
### Performance
The most popular questions about in-memory DBs are «How fast are they?» and «How many millions of RPS can we get from one core?». Let’s perform an easy synthetic test, approximating database settings as much as possible. The Go script will fill the storage with random keys having random values.
MacBook Pro 2,9 GHz Quad-Core Intel Core i7
Redis version=6.0.9, bits=64
Tarantool 2.6.2
#### Redis
File: redis\_test.go
Content:
redis\_test.go
```
package main
import (
"context"
"fmt"
"log"
"math/rand"
"testing"
"github.com/go-redis/redis"
)
func BenchmarkSetRandomRedisParallel(b *testing.B) {
client2 := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379", Password: "", DB: 0})
if _, err := client2.Ping(context.Background()).Result(); err != nil {
log.Fatal(err)
}
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
key := fmt.Sprintf("bench-%d", rand.Int31())
_, err := client2.Set(context.Background(), key, rand.Int31(), 0).Result()
if err != nil {
b.Fatal(err)
}
}
})
}
```
#### Tarantool
Command: tarantool
Tarantool initialization:
```
tarantool>
box.cfg{listen='127.0.0.1:3301', wal_mode='none', memtx_memory=2*1024*1024*1024}
box.schema.user.grant('guest', 'super', nil, nil, {if_not_exists=true,})
box.schema.space.create('kv', {if_not_exists=true,})
box.space.kv:create_index('pkey', {type='TREE', parts={{field=1, type='str'}},
if_not_exists=true,})
File: tarantool_test.go
Content:
package main
import (
"fmt"
"math/rand"
"testing"
"github.com/tarantool/go-tarantool"
)
type Tuple struct {
_msgpack struct{} `msgpack:",asArray"`
Key string
Value int32
}
func BenchmarkSetRandomTntParallel(b *testing.B) {
opts := tarantool.Opts{
User: "guest",
}
pconn2, err := tarantool.Connect("127.0.0.1:3301", opts)
if err != nil {
b.Fatal(err)
}
b.RunParallel(func(pb *testing.PB) {
var tuple Tuple
for pb.Next() {
tuple.Key = fmt.Sprintf("bench-%d", rand.Int31())
tuple.Value = rand.Int31()
_, err := pconn2.Replace("kv", tuple)
if err != nil {
b.Fatal(err)
}
}
})
}
```
**Launching**. To load databases to the maximum, let’s use more threads.
```
go test -cpu 12 -test.bench . -test.benchtime 10s
goos: darwin
goarch: amd64
BenchmarkSetRandomRedisParallel-12 929368 15839 ns/op
BenchmarkSetRandomTntParallel-12 972978 12749 ns/op
```
**Results**. Average duration of Redis query was 15 microseconds. For Tarantool — 12 microseconds. It means Redis efficiency is 63,135 RPS, and Tarantool — 78,437 RPS. The test does not demonstrate the speed, but the efficiency of in-memory DBs. You can tweak the benchmark in a way that your DB of choice wins.
### Reliability
Two basic methods are used for reliability in data storages:
* **Persistence** At a restart, DB will load the data from the disc without any queries to outside systems.
* **Replication** If one node crashes, there will be a copy at the other one. Replication can be asynchronous and synchronous,
Both Redis and Tarantool can do that. We’ll delve into some technical details further.
### Scaling
Scaling can be used:
* to reserve additional nodes that can replace each other in case one of crashes;
* in case the data doesn’t fit into a single node and has to be distributed among several ones.
#### Redis
Redis nodes can be interconnected by means of asynchronous replication. We’ll call such nodes a replica set. It is Redis Sentinel that manages the replica set. Redis Sentinel is one special process or several of them clustered, to monitor Redis nodes. They perform 4 primary tasks:
* Checking node status within the group — dead or alive.
* Notifying the system administrator if something goes wrong within the group.
* Automatic switching of the master.
* Config provider for external clients for them to know where to connect.
If data is to be sharded to several nodes, Redis offers an open-source version of Redis Cluster. It supports building a cluster from several replication groups. Data within the cluster is sharded over 16 384 slots. Slot ranges are determined among Redis nodes.
Nodes within the cluster communicate over a separate open port to know their neighbors’ statuses. To work with Redis Cluster, the app should use a special connector.
#### Tarantool
Tarantool also supports replication and sharding. The key tool of scalability management is Tarantool Cartridge. It unites nodes into replica sets. You can make one group of that kind and use it similarly to Redis Sentinel. Tarantool Cartridge can manage several replica sets and shard data across them. vshard library is used for sharding purposes.
#### Differences
**Administration**
* Scripts and commands in Redis Cluster.
* Web-interface or API in Tarantool Cartridge.
**Sharding buckets**
* Number of sharding buckets in Redis is fixed, equal to 16 384.
* Number of sharding buckets in Tarantool Cartridge (vshard) is customizable. It is set up once — when a cluster is created.
**Bucket rebalancing (resharding)**
* In Redis Cluster, setup and launching are manual,
* They are automatic in Tarantool Cartridge (vshard).
**Query routing**
* In Redis Cluster, queries are routed on the side of the client app.
* In Tarantool Cartridge, queries are routed on the cluster router nodes.
**Infrastructure**
* Tarantool Cartridge also contains:
### Data Schema Validation
In Redis, the primary data schema is key-value, but the values can contain different structures. You cannot set validation rules on the server side. We can’t indicate how certain data types should be used and what structure a value should have. The schema must be validated by a connector or a client app.
Tarantool supports data schema validation on the server side:
* using integrated validation box.space.format that covers only the top-level section of the fields;
* using an installed Avro schema extension.
3. Technical Features
---------------------
### What Kind of Data Types Can Be Stored?
In Redis, only a string can be a key. Redis supports the following data types:
* strings;
* string lists;
* unordered collections of strings;
* hashmaps or just key-value string pairs;
* ordered collections of strings;
* Bitmap and HyperLogLog.
Tarantool supports the following data types:
* Primitive
+ strings;
+ boolean (true or false);
+ integer;
+ with floating point;
+ with decimal floating point;
+ UUID.
* Complex
+ arrays;
+ hashmaps.
Redis data types are better tailored for event counters, including the unique ones, as well as for storage of small finished data marts.
Tarantool data types are better tailored for storage of objects and/or documents both in SQL and NoSQL DBMS.
### Data Eviction
Redis and Tarantool both have engines to limit the memory occupied. If a client attempts to add data after the limit has been reached, the databases will return an error. Both Redis and Tarantool will proceed with reading queries in that case, though.
Let’s see how «no longer required» data can be deleted. **Redis** comprises several data eviction engines:
* TTL — object eviction as soon as their lifetime expires;
* LRU — long used data eviction;
* RANDOM — random object eviction;
* LFU — rarely used data eviction.
All the engines can be set up either for the entire data amount or for the objects marked as evictable only.
In **Tarantool**, expirationd or indexpiration extensions can be used for eviction. Another option is creating your own background procedure that will be by-index implemented (e. g., with a timestamp) and will delete unnecessary data.
### Iteration With Keys
In **Redis**, this can be done by means of operators:
* SCAN;
* iteration with keys.
Transactions return pages with results. To get a new page, an ID of the previous one has to be sent. Transactions support by-template filtration. For this, MATCH parameter is used. Filtration takes place when the page is sent, so some of the pages might be blank. However, it doesn’t mean there are no more pages.
**Tarantool** offers a more flexible «iteration with keys» schema. Both direct and inverse iterations are possible, and you can additionally filter the values on the go. You can move to a certain key value and then check the consequent keys either in the ascending or descending order. The check direction can’t be changed on the spot.
For example:
```
results = {}
for _, tuple in box.space.pairs('key', 'GE') do
if tuple['value'] > 10 then
table.insert(results, tuple)
end
end
return results
```
### Secondary Indexes
#### Redis
Redis has no secondary indexes, but there are some ways to imitate them.
* The order number of the element can be used as a secondary key in ordered collections.
* Otherwise, hashmaps can be used with their key considered as the data index.
#### Tarantool
In Tarantool, a custom amount of secondary data indexes can be created.
* Secondary keys may contain several fields.
* Types HASH, TREE, RTREE, and BITSET can be used for secondary indexes.
* Secondary indexes may contain unique and non-unique keys.
* Locale settings can be used for any indexes, e. g., for register-independent string values.
* Secondary indexes can be based on fields with value arrays (sometimes referred as MultiIndexes).
#### Summary
Secondary keys and convenient iterators enable relational data storage model building in Tarantool. It is impossible to build such a model in Redis.
### Transactions
Transactions enable primitive execution of several operations. Both Redis and Tarantool support transactions. Transaction example from Redis:
```
> MULTI
OK
> INCR foo
QUEUED
> INCR bar
QUEUED
> EXEC
1) (integer) 1
2) (integer) 1
```
Transaction example from Tarantool
```
do
box.begin()
box.space.kv:update('foo', {{'+', 'value', 1}})
box.space.kv:update('bar', {{'+', 'value', 1}})
box.commit()
end
```
### Persistence
Data persistence is ensured by two engines — * in-memory data recording to a disk at specified intervals — snapshotting;
* successive write-ahead logging of all the incoming operations — transaction journal.
Both Redis and Tarantool have these persistence engines.
#### Redis
At specified intervals, Redis snapshots all in-memory data. By default, it is done every 60 seconds (customizable). Redis copies the current in-memory data using OS.fork and then stores data to the disk.
In case of an abnormal shutdown, Redis recovers its status from the most recent saving. If the last snapshot was made a long time ago, all the data received after the snapshot will be lost.
The transaction journal is used to store all the information arriving at the database. Every operation is logged in the on-disk journal. When Redis is started, it recovers its status from the snapshot and then adds the remaining operations from the journal.
* In Redis, a snapshot is called RDB (Redis DataBase).
* The transaction journal in Redis is called AOF (Append Only File).
#### Tarantool
* The persistence engine is derived from database architectures.
* It is comprehensive, with snapshotting and transaction journaling.
* This mechanism ensures reliable WAL-based replication.
Tarantool snapshots the current in-memory data at specified intervals and records every transaction to the journal.
* A snapshot in Tarantool is called a snap and can be made at any frequency.
* In Tarantool, the transaction journal is called WAL (Write Ahead Log).
Each of the engines can be switched off both in Redis and in Tarantool. Both engines should be on for reliable data storage. You can trade-off persistence and turn off snapshotting and journaling to ensure the highest operation speed possible.
#### Differences
Redis uses OS.fork for snapshotting. Tarantool uses an internal readview of all the data, and this is faster than fork.
By default, Redis has snapshotting only. Tarantool has both snapshotting and transaction journaling.
Redis stores and uses only one file for both snapshotting and transaction journaling. Tarantool stores two snapshot files by default (but this number is customizable) and a consistently enlarging unlimited number of transaction journals. If a snapshot file is damaged in Tarantool, it can use the previous one to load. In Redis, you need to set up backups.
Unlike Redis, snapshotting and journaling in Tarantool form a common engine for data display in the file system. It means that in Tarantool snapshot files and journals store all the metainfo on the transaction, which is who made it and when. It has the same format and is complementary.
#### Troubleshooting
If a journal file is damaged in Redis:
```
redis-check-aof --fix
```
If a journal file is damaged in Tarantool:
```
tarantool> box.cfg{force_recovery=true}
```
### Programming languages for stored procedures
Stored procedures are a code executed in the data section. Both Redis and Tarantool suggest using Lua for stored procedure creation. This language is quite simple. It was designed for those using programming for task solving in a specific area.
From a database developer point of view:
* Lua can be easily integrated into an existing app.
* It is easy to integrate with objects and processes of the app.
* Lua has dynamic typization and automatic memory management.
* This language has a garbage collector — incremental Mark&Sweep.
#### Differences
**Implementation**
* Redis is a plain vanilla implementation of PUC-Rio.
* Tarantool uses LuaJIT.
**Task timeout**
* Redis allows you to set a timeout after which execution of a stored procedure will end.
* In Tarantool stored procedures are compiled and executed faster, but no timeout can be set. To end a stored procedure, a user should make provisions to check the end flag.
**Runtime**
* Redis is single-tasked: it executes the tasks one by one.
* Tarantool uses cooperative multitasking. It executes the tasks one by one, but at the same time the task gives up IO operation management, in particular — directly by means of yield.
#### Summary
* In Redis, Lua is just about the stored procedures.
* In Tarantool, it is a cooperative runtime that supports communication with outside systems.
### Replication
Replication is an engine that enables object copying from one node to another. Replication can be asynchronous and synchronous.
* Asynchronous replication: after adding an object to one node we don’t wait for it to be replicated to the second node.
* Synchronous replication: after adding an object, we wait for it to be saved at the first and the second node.
Redis and Tarantool support asynchronous replication, whereas synchronous replication is only available in Tarantool.
In some cases, we need to wait for the object replication:
* in Redis, we use the wait command. It accepts only two parameters:
+ number of replicas an object has to obtain;
+ amount of time required for that to happen.
* In Tarantool, it can be done with a code fragment — Pseudocode:
```
while not timeout do
if box.info.lsn <= (box.info.replication[dst].downstream.vclock[box.info.id] or 0) then
break
end
fiber.sleep(0.1)
end
```
#### Synchronous Replication
Redis has no synchronous replication. Tarantool has it in versions from 2.6.
### Connectors From Other Programming Languages
Both Redis and Tarantool support connectors from popular programming languages:
* Go;
* Python;
* NodeJS;
* Java.
Complete lists:
* <https://redis.io/clients>
* <https://tarantool.io/ru/download/connectors>
### When Not to Use Redis or Tarantool
Both Redis and Tarantool are poorly tailored for OLAP tasks. Online Analytical Processing deals with historic or archive data. OLAP has relatively few transactions, queries are often complex and contain aggregation.
In both cases, data is stored line-by-line. This makes aggregation algorithms less effective as compared to column-oriented databases.
Redis and Tarantool use one-thread for data which makes parallelizing analytical queries impossible.
### Ecosystem
#### Redis
There are three categories of Redis modules:
* Enterprise;
* verified and certified for Enterprise and Open Source;
* unverified.
Enterprise modules:
* full-text search;
* storage and search by bloom-filters;
* time series storage.
Certified:
* storage of graphs and queries to them;
* storage of JSON and queries to it;
* storage of ML models and their operation.
All the modules sorted by the number of stars at Github:<https://redis.io/modules>
#### Tarantool
There are two categories of modules:
* Embedded:<https://www.tarantool.io/en/doc/latest/reference/>
* Enterprise: [https://www.tarantool.io/en/enterprise\_doc/rocksref/#closed-source-modules](https://www.tarantool.io/en/enterprise_doc/rocksref/#https://www.tarantool.io/en/enterprise_doc/rocksref/)
#### Redis Advantages
* It is easier to use.
* There is more info on the web, 20 000 questions on Stackoverflow (7 000 of these pending unanswered).
* The entry barrier is lower.
* There are more people experienced with Redis.
#### Tarantool Advantages
* Free developer support on Telegram.
* Secondary indexes available.
* Index iteration available.
* UI for cluster administration available.
* Application server with cooperative multitasking. It is similar to single-flow Go.
* Higher ceiling in production.
4. Summary
----------
Redis offers a great advanced cache, but it can’t be used as a primary storage. Tarantool is a multi-paradigm database that can be used as a primary storage. Tarantool supports:
* Relational storage model with SQL.
* Distributed NoSQL storage.
* Advanced cache creation.
* Making a queue broker.
Redis has a lower entry barrier. Tarantool has a higher ceiling in production.
| | | |
| --- | --- | --- |
| | Redis | Tarantool |
| Description | Advanced in-memory cache. | Multi-paradigm DBMS with an integrated application server. |
| Data model | Key-value | Key-value, documents, relational DBMS |
| Website | [redis.io](http://redis.io/) | [www.tarantool.io](http://www.tarantool.io/) |
| Documentation | [redis.io/documentation](http://redis.io/documentation) | [www.tarantool.io/ru/doc/latest](http://www.tarantool.io/ru/doc/latest/) |
| Developer | Salvatore Sanfilippo, Redis Labs | Mail.ru Group |
| Current release | 6.2 | 2.7.2 |
| License | The 3-Clause BSD License | The 2-Clause BSD License |
| Implementation language | C | C, C++ |
| Supported OS | BSD, Linux, MacOS, Win | BSD, Linux, MacOS |
| Data schema | Key-value | Flexible |
| Secondary indexes | No | Yes |
| SQL support | No | For one instance — ANSI SQL |
| Foreign keys | No | Yes, with SQL |
| Triggers | No | Yes |
| Transactions | Optimistic locking, primitive execution | ACID, read committed |
| Scaling | Sharding within a fixed range. | Sharding within an adjustable amount of virtual buckets. |
| Multitasking | Yes, server serialization | Yes, cooperative multitasking |
| Persistence | Snapshots and journaling. | Snapshots and journaling. |
| Consistency concept | Eventual Consistency Strong eventual consistency with CRDTs | Immediate Consistency |
| API | [RESP open protocol](https://redis.io/topics/protocol) | [Open binary protocol](http://www.tarantool.io/en/doc/latest/dev_guide/internals/box_protocol) (on MsgPack base) |
| Script language | Lua | Lua |
| Supported languages | C, C#, C++, Clojure, Crystal, D, Dart, Elixir, Erlang, Fancy, Go, Haskell, Haxe, Java, JavaScript (Node.js), Lisp, Lua, MatLab, Objective-C, OCaml, Pascal, Perl, PHP, Prolog, Pure Data, Python, R, Rebol, Ruby, Rust, Scala, Scheme, Smalltalk, Swift, Tcl, Visual Basic | C, C#, C++, Erlang, Go, Java, JavaScript, Lua, Perl, PHP, Python, Rust |
5. References
-------------
1. You can download Tarantool here [at the official website](https://www.tarantool.io/ru/download?utm_source=habr&utm_medium=articles&utm_campaign=2021).
2. Get help [in the Telegram chat](https://t.me/tarantool). | https://habr.com/ru/post/575772/ | null | null | 3,713 | 51.44 |
edited: 08/12/2014
In many large-scale projects, software developers often have to work with existing SQL Server databases with predefined tables and relationships. The problem is that some predefined databases have aspects that are awkward to deal with from the software side. As a software developer, my choice of database access tool is Microsoft's Entity Framework (EF), so I am motivated to see how EF can handle this.
Entity Framework 6 has a number of features to make it fairly straightforward to work with existing databases. In this article I'll detail the steps that I needed to take on the EF side in order to build a fully featured web application to work with the AdventureWorks database. I'll actually use the AdventureWorksLT2012 database, which is a cut-down version of the larger AdventureWorks OLTP database. I am using Microsoft's ASP.NET MVC5 (MVC) with the proprietary Kendo UI package for the UI/presentation layer, which I cover in the next article.
At the end, I also mention some other techniques that I didn’t need for AdventureWorks, but I have needed on other databases. The aim is to show how you can use EF with pre-existing databases, including ones that need direct access to T-SQL commands and/or Stored Procedures.
Creating the Entity Framework Classes from the existing database
Entity Framework has a well-documented approach, called reverse engineering, to create the EF Entity Classes and DbContext from an existing database. This produces data classes with various Data Annotations to set some of the properties, such as string length and nullability (see the example below, built around the Customer table), plus a DbContext with an OnModelCreating method to set up the various relationships.
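Here is a cut-down sketch of what the reverse engineering produces for the Customer table (most columns omitted for brevity, and the exact output depends on the template version):

```csharp
[Table("SalesLT.Customer")]
public partial class Customer
{
    public Customer()
    {
        CustomerAddresses = new HashSet<CustomerAddress>();
        SalesOrderHeaders = new HashSet<SalesOrderHeader>();
    }

    public int CustomerID { get; set; }

    [StringLength(8)]
    public string Title { get; set; }

    [Required]
    [StringLength(50)]
    public string FirstName { get; set; }

    [Required]
    [StringLength(50)]
    public string LastName { get; set; }

    //... remaining columns omitted ...

    public virtual ICollection<CustomerAddress> CustomerAddresses { get; set; }
    public virtual ICollection<SalesOrderHeader> SalesOrderHeaders { get; set; }
}
```

Note the Data Annotations on the string properties, and the virtual keyword on the relationships – more on that below.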
This does a good job of building the classes. Certainly it is very useful to have the Data Annotations because front-end systems like MVC use these for data validation during input. However I did have a couple of problems:
- The default code generation template includes the `virtual` keyword on all of the relationships. This enables lazy loading, which I do not want. (see section 1 below)
- The table SalesOrderDetail has two keys: one is the SalesOrderHeaderID and one is an identity, SalesOrderDetailID. EF failed on a create and I needed to fix this. (See section 3 below)
I will now describe how I fixed these issues.
1: Removing lazy loading by altering the scaffolding of the EF classes/DbContext
As I said earlier, the standard templates enable ‘lazy loading’. I have been corrected in my understanding of lazy loading by some readers. The documentation states that ‘Lazy loading is the process whereby an entity or collection of entities is automatically loaded from the database the first time that a property referring to the entity/entities is accessed’. The problem with this is it does not make for efficient SQL commands, as individual SQL SELECT commands are raised for each access to virtual relationships, which is not such a good idea for performance.
For that reason I do not use Lazy Loading, so I want to turn it off by removing the virtual keyword. Could I not just edit the generated classes? The problem is that you would then have to re-import the database and so lose all your edits, which you or your colleague might have forgotten about by then, and suddenly your whole web application slows down. No, the common rule with generated code is not to edit it. In this case the answer is to change the code that is generated during the creation of the classes and DbContext.
Note: You can turn off lazy loading via the EF Configuration class too, but I prefer to remove the virtual keyword as it ensures that lazy loading is definitely off.
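For completeness, that configuration route is a single setting made in the DbContext constructor, along these lines (the connection string name shown is just the one the generator would normally use):

```csharp
public AdventureWorksLt2012()
    : base("name=AdventureWorksLt2012")
{
    //belt and braces: stop EF lazy loading relationships,
    //even if any virtual keywords remain
    Configuration.LazyLoadingEnabled = false;
}
```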
2: Altering the code that Reverse Engineering produces
The generation of the EF classes and DbContext is done using some t4 templates, referred to as scaffolding. By default the reverse engineering of the database uses some internal scaffolding, but you can import the scaffolding and change it. There is a very clear explanation of how to import the scaffolding using NuGet, so I'm not going to repeat it.

Once you have installed the EntityFramework.CodeTemplates you will find two files called Context.cs.t4 and EntityType.cs.t4, which control how the DbContext and each entity class respectively are built. Even if you aren't familiar with t4 (a great tool) you can understand what it does – it's a code generator, and anything not surrounded by <# #> is standard text. I found the word ‘virtual’ in the EntityType.cs.t4 and deleted it. I also removed the word ‘virtual’ from the Context.cs.t4 file on the declaration of the DbSet<>.
You may want to alter the scaffolding more extensively, perhaps by adding a [Key] attribute on primary keys for some reason. All is possible, but you must dig into the .t4 code in more depth.
One warning about importing scaffolding – Visual Studio threw a nasty error message when I first tried to import using the EntityFramework.CodeTemplates scaffolding (see stackoverflow entry). It took a bit of finding, but it turns out that if you have Entity Framework Power Tools Beta 4 installed then they clash. If you have Entity Framework Power Tools installed then you need to disable it and restart Visual Studio before you can import/reverse engineer a database. I hope that gets fixed, as Entity Framework Power Tools is very useful.
Note: There are two other methods to reverse engineer an existing database:
EntityFramework Reverse POCO Code First Generator by Simon Hughes. This is Visual Studio extension recommended by the EF Guru, Julia Lerman, in one of her MSDN magazine articles. I haven’t tried it, but if Julia recommends it then it must be good.
Entity Framework Power Tools Beta 4 can also reverse engineer a database. It's quicker, only two clicks, but it's less controllable. I don't suggest you use this.
3: Fixing a problem with how the two keys are defined in the SalesOrderDetail table
The standard definition for the SalesOrderDetail table key parts is as follows:
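```csharp
[Key]
[Column(Order = 0)]
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public int SalesOrderID { get; set; }          //the key shared with SalesOrderHeader

[Key]
[Column(Order = 1)]
public int SalesOrderDetailID { get; set; }    //note: no Identity attribute generated
```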
You can see it marks the first as not database-generated, but it does not mark the second as an Identity key. This caused problems when I tried to create a new SalesOrderDetail so that I could add a line item to an order. I got a SQL error along these lines:
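```
Cannot insert explicit value for identity column in table 'SalesOrderDetail' when IDENTITY_INSERT is set to OFF.
```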
That confused me for a bit, as other two-key items had worked, such as CustomerAddress. I tried a few things, but as it looked like an EF error I tried telling EF that the SalesOrderDetailID was an Identity key by using the attribute [DatabaseGenerated(DatabaseGeneratedOption.Identity)].
That fixed it!
The best solution would be to edit the scaffolding again to always add that attribute to identity keys. That needed a bit of work and the demo was two days away, so in the meantime I added the needed attribute using the MetadataType attribute and a ‘buddy’ class. This is a generally useful feature, so I use this example to show you how to do this in the next section.
Adding new DataAnnotations to EF Generated classes
Being able to add attributes to properties in already generated classes is a generally useful thing to do. I needed it to fix the key problem (see section 3 above), but you might want to add some DataAnnotations to help the UI/presentation layer, such as marking properties with their datatype, e.g. [DataType(DataType.Date)]. The process for doing this is given in the Example section of this link to the MetadataType attribute. I will show you my example of adding the missing Identity attribute.
The process requires me to add a partial class in another file (see later for more on this) marked with the [MetadataType(typeof(SalesOrderDetailMetaData))] attribute, and then apply the needed attribute to the SalesOrderDetailID property in a new class, sometimes called a ‘buddy’ class. See below:
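```csharp
[MetadataType(typeof(SalesOrderDetailMetaData))]
public partial class SalesOrderDetail { }

public class SalesOrderDetailMetaData
{
    //supplies the Identity marking that the generated class was missing
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int SalesOrderDetailID { get; set; }
}
```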
The effect is to apply those attributes to the existing properties. That fixed my problem with EF creating new SalesOrderDetail entries properly, and I was away.
What happens when the database changes?
Having sorted the scaffolding as discussed above then just repeat step 1, ‘Creating the Entity Framework Classes from the existing database’. There are a few things you need to do before, during and after the re-import.
- You should remember/copy the name of the
DbContextso you use the same name when you re-import. That way it will recompile properly without major name changes.
- Because you are using the same name as the existing
DbContextyou must delete the previous
DbContextotherwise the re-importing process will fails. If its easier you can delete all the generated files as they are replaced anyway. That is why I suggest you put them in a separate directory with no other files added.
- When re-importing by default the process will add the connection string to your
App.Configfile again. I suggest you un-tick that otherwise you end up with lots of connection strings (minor point, but can be confusing).
- If you use source control (I really recommend you do) then a quick compare of the files to check what has changed is worthwhile.
Adding new properties or methods to the Entity classes
In my case I wanted to add some more properties or methods to the class? Clearly I can’t add properties that change the database – I would have to talk to the DBA to change the database definition and import the new database schema again. However in my case I wanted to add properties that accessed existing database properties to produce more useful output, or to have an intention revealing name, like
HasSalesOrder.
You can do this because the scaffolding produces ‘partial’ classes, which means I can have another file which adds to that class. To do this it must: have the same namespace as the generated classes
The class is declared as
public partial <same class name>.
I recommend you put them in a different folder to the generated files. That way they will not be overwritten by accident when you recreate the generated files (note: the namespace must be the original namespace, not that of the new folder). Below I give an example where I added to the customer class. Ignore for now the
IModifiedEntity interface (dealt with later in this article) and
[Computed] attribute, which I will cover in the next article.
Note that you almost certainly will want to add to the
DbContext class (I did – see section 4 below). This is also defined as a partial class so you can use the same approach. Which leads me on to…
Dealing with properties best dealt with at the Data Layer
In the AdventureWorks database there are two properties called
'ModifiedDate' and ‘
rowguid‘. In the AdventureWorks Lite database these were not generated in the database. Therefore the software needs to update
ModifiedDate on create or update and set the rowguid on create.
Many databases have properties like this and, if not handled by the database,they are best dealt with at Data/Infrastructure layer. With EF this can be done by providing a partial class and overriding the
SaveChanges() method to handle the specific issues your database needs. In the case of AdventureWorks I adding an
IModifiedEntity interface to each partial class that has
ModifiedDate and
rowguid property.
Then I added the code below to the
AdventureWorksLt2012 DbContext to provide the functionality required by this database.
The
IModifiedEntity interface is really simple:
Using SQL Store Procedures
Some databases rely on SQL Stored Procedures (SPs) for insert, update and delete of rows in a table. AdventureWorksLT2012 did not, but if you need to that EF 6 has added a neat way of linking to stored procedures. It’s not trivial, but you can find good information here on how to get EF to use SPs for Insert, Update and Delete operations.
Clearly if the database needs SPs for CUD (Create, Update and Delete) actions then you need to use them, and there are plenty of advantages in doing so. In the absence of stored procedures, it is easy from the software point of view to use EFs CUD actions and EFs CUD have some nice features. For instance, EF has an in-memory copy of the original values and uses this for working out what has changed. The benefit is that the EF updates are efficient – you update one property and only that cell in a row is updated. The more subtle benefit is tracking changes and handling SQL security, i.e. if you use SQL column-level security (Grant/Deny) then if that property is unchanged we do not trigger a security breach. This is a bit of an esoteric feature, but I have used it and it works well.
Other things you could do
This is all I had to do to get EF to work with an existing database, but there are other things I have had to use in the past. Here is a quick run through of other items:
Using Direct SQL commands
Sometimes it makes sense to bypass EF and use a SQL command, and EF has all the commands to allow you to do this. The EF documentation has a page on this here which gives a reasonable overview, but I recommend Julia Lerman’s book ‘Programming Entity Framework: DbContext’ which goes into this in more detail (note: this book is very useful but it covers an earlier version of EF so misses some of the latest commands like the use of SPs in Insert, Update and Delete).
For certain types of reads SQL makes a lot of sense. For instance in my GenericSecurity library I need to read the current SQL security setup (see below). I think you will agree it makes a lot of sense to do this with a direct SQL read rather than defining multiple data classes just to build the command.
For SQL commands such as create, update and delete is less obvious, but I have used it in some cases. For these you use the SqlCommand method, see example from Microsoft below:
Neither of these example had parameters, but if you did need any parameters then
SqlQuery and
SqlCommand methods can take parameters, which are checked to protect against a SQL injection attack. The Database.SqlQuery Method documentation shows this.
One warning on
SqlCommands. Once you have run a
SqlCommand then EF’s view of the database, some of which is held in memory, is out of date. If you are going to close/dispose of the
DbContext straight away then that isn’t a problem. However if the command is followed by other EF accesses, read or write, then you should use the EF ‘Reload’ command to get EF back in track. See my stackoverflow answer here for more on this.
SQL Transaction control
When using EF to do any database updates using the .
SaveChanged() function then all the changes are done in one transaction, i.e. if one fails then none of the updates are committed. However if you are using raw SQL updates, or a combination of EF and SQL updates, you may well need these to be done in one transaction. Thankfully EF version 6 introduced commands to allow you to control transactions.
I used these commands in my EF code to work with SQL security. I wanted to execute a set of SQL commands to set up SQL Security roles and grant/deny access, but if any one failed I wanted to roll back. The code to execute a sequence of sql commands and rollback if any single command fails is given below:
You can also use the same commands in a mixed SQL commands and EF commands. See this EF documentation for an example of that.
Conclusion
There were a few issues to sort out but all of them were fixable. Overall, getting EF to work with an existing database was fairly straightforward, once you know how. The problem I had with multiple keys (see section 1) was nasty, but now I, and you, know about it we can handle it in the future.
I think the AdventureWorks Lite database is complex enough to be a challenge: with lots of relationships, composite primary keys, computed columns, nullable properties etc. Therefore getting EF to work with AdventureWorks is a good test of EFs capability to work with existing SQL databases. While the AdventureWorks Lite database did not need any raw SQL queries or Stored Procedures other projects of mine have used these, and I have mentioned some of these features at the end of the article to complete the picture.
In fact version 6 of EF added a significance amount of extra features and commands to make mixed EF/SQL access very possible. The more I dig into things the more goodies I find in EF 6. For instance EF 6 brought in Retry logic for Azure, Handling transaction commit failures, SQL transaction control, improved sharing connections between SQL and EF, plus a number of other things. Have a good look around the EF documentation – there is a lot there.
So, no need to hold back on using Entity Framework on your next project that has to work with an existing SQL database. You can use it in a major role as I did, or now you have good connection sharing just use it for the simple CRUD cases that do not need heavy T-SQL methods.
My second article carried on this theme by looking at the challenges of displaying and updating this data at the user interface end. I talk about various methods to develop a good the user experience quickly while still keeping a reasonable database performance. | https://www.red-gate.com/simple-talk/dotnet/.net-framework/using-entity-framework-with-an-existing-database-data-access/ | CC-MAIN-2018-30 | refinedweb | 2,961 | 59.64 |
patch for drivers/net/irda/irport.c IRDA driver removes one call to
check_region using request_region instead. The patch also moves the call to
request_region to before the allocation of the driver instance.
I don't have this hardware so patch is not tested. This patch removes all
references to check_region in this driver.
Patch also available at the following URL:
This is patch number 38 in a series of check_region patches I am doing as
part of the kernel janitors project. Removal of check_region is one of the
items on the kernel janitors TODO list ()
- "get rid of check_region, use just request_region checking its return (2.2
request_region returned void) and now the driver init sequence is not to be
serialized anymore, so races are possible (look at cardbus/pcihotplug code)"
Best regards
William Stinson
--- linux-2.5.59/drivers/net/irda/irport.c 2003-01-31 22:17:39.000000000
+0100 +++ linux-local/drivers/net/irda/irport.c 2003-01-31
23:44:45.000000000 +0100 @@ -102,9 +102,6 @@
int i;
for (i=0; (io[i] < 2000) && (i < 4); i++) {
- int ioaddr = io[i];
- if (check_region(ioaddr, IO_EXTENT))
- continue;
if (irport_open(i, io[i], irq[i]) != NULL)
return 0;
}
@@ -142,6 +139,14 @@
IRDA_DEBUG(1, "%s()\n", __FUNCTION__);
+ /* Lock the port that we need */
+ ret = request_region(iobase, IO_EXTENT, driver_name);
+ if (!ret) {
+ IRDA_DEBUG(0, "%s(), can't get iobase of 0x%03x\n",
+ __FUNCTION__, iobase);
+ return NULL;
+ }
+
/*
* Allocate new instance of the driver
*/
@@ -149,6 +154,7 @@
if (!self) {
ERROR("%s(), can't allocate memory for "
"control block!\n", __FUNCTION__);
+ release_region(iobase, IO_EXTENT);
return NULL;
}
memset(self, 0, sizeof(struct irport_cb));
@@ -165,14 +171,6 @@
self->io.irq = irq;
self->io.fifo_size = 16;
- /* Lock the port that we need */
- ret = request_region(self->io.sir_base, self->io.sir_ext, driver_name);
- if (!ret) {
- IRDA_DEBUG(0, "%s(), can't get iobase of 0x%03x\n",
- __FUNCTION__, self->io.sir_base);
- return NULL;
- }
-
/* Initialize QoS for this device */
irda_init_max_qos_capabilies(&self->qos);
------------------------------------------------------- | https://sourceforge.net/p/irda/mailman/irda-users/?viewmonth=200302&viewday=2 | CC-MAIN-2018-17 | refinedweb | 327 | 59.3 |
The type of a data object in C determines the range and kind
of values an object can represent, the size of machine storage reserved
for an object, and the operations allowed on an object. Functions also
have types, and the function's return type and parameter types can be
specified in the function's declaration.
The following sections discuss these topics:
The selection of a data type for a given object or function is one of
the fundamental programming steps in any language. Each data object or
function in the program must have a data type, assigned either
explicitly or by default. (Chapter 4 discusses the assignment of a
data type to an object.) C offers a wide variety of types. This
diversity is a strong feature of C, but can be initially confusing.
To help avoid this confusion, remember that C has only a few basic
types. All other types are derived combinations of these basic types.
Some types can be specified in more than one way; for example,
short
and
short int
are the same type. (In this manual, the longest, most specific name is
always used.) Type is assigned to each object or function as part of
the declaration. Chapter 4 describes declarations in more
detail.
Table 3-1 lists the basic data types:
integral types (objects representing integers within a
specific range), floating-point types (objects representing
numbers with a significand part---a whole number plus a fractional
number---and an optional exponential part), and character
types (objects representing a printable character). Character
types are stored as integers.
In HP C, use of the
_Imaginary
keyword produces a warning, which is resolved by treating it as an
ordinary identifier.
The integral and floating-point types combined are called the
arithmetic types.
See Section 3.1 for information about the size and range of integral
and floating-point values.
A large variety of derived types can be created from the basic
types. Section 3.4 discusses the derived types.
Besides the basic and derived types, there are three keywords that
specify unique types:
void
,
enum
, and
typedef
:
There are also the type-qualifier keywords:
Using a qualifying keyword in the type declaration of an object results
in a qualified type. See Section 3.7 for general information
on type qualifiers.
With such a wide variety of types, operations in a program often need
to be performed on objects of different types, and parameters of one
type often need to be passed to functions expecting different parameter
types. Because C stores different kinds of values in different ways, a
conversion must be performed on at least one of the operands
or arguments to convert the type of one operand or argument to match
that of the other. You can perform conversions explicitly through
casting, or implicitly through the compiler. See Section 6.11
for more information on data-type conversions. See Section 2.7 for a
description of type compatibility.
See your platform-specific HP C documentation for a
description of any implementation-defined data types.
An object of a given data type is stored in a section of memory having
a discreet size. Objects of different data types require different
amounts of memory. Table 3-2 shows the size and range of the basic
data types.
Derived types can require more memory space.
See your platform-specific HP C documentation for the sizes of
implementation-defined data types.
In C, an integral type can declare:
The integral types are:
For HP C on OpenVMS systems, storage for
int
and
long
is identical. Similarly, storage of
signed int
and
signed long
is identical, and storage for
unsigned int
and
unsigned long
is identical.
For HP C on Tru64 UNIX systems, storage for the
int
data types is 32 bits, while storage for the
long int
data types is 64 bits.
The 64-bit integral types
signed long long int
and
unsigned long long int
, and their equivalents
signed __int64
and
unsigned __int64
are provided on Alpha and Itanium processors only. Note: the
__int64
and
long long int
data types (both signed and unsigned) can be used interchangeably,
except for use with pointer operations, in which case the pointer types
must be identical:
__int64 *p1;
__int64 *p2;
long long int *p3;
.
.
.
p1 = p2; // valid
p1 = p3; // invalid
For each of the signed integral types, there is a corresponding
unsigned integral type that uses the same amount of storage.
The
unsigned
keyword with the integral type modifies the way the integer value is
interpreted, which allows the storage of a larger range of positive
values. When using the
unsigned
keyword, the bits are interpreted differently to allow for the
increased positive range with the unsigned type (at the expense of the
negative range of values). For example:
signed short int x = 45000; /* ERROR -- value too large for short int */
unsigned short int y = 45000;/* This value is OK */
The range of values for the
signed short int
type is - 32,768 to 32,767. The range of values for the
unsigned short int
type is 0 to 65,535.
A computation involving unsigned operands can never overflow, because
any result outside the range of the
unsigned
type is reduced to fit the type by the rules of modulus arithmetic. If
the result cannot be represented by the resulting integer type, the
result is reduced modulo the number that is one greater than the
largest value that can be represented by the resulting
unsigned integer type. This means that the low-order bits are kept, and
the high-order bits of the mathematical result that do not fit in the
type of the result are discarded. For example:
unsigned short int z = (99 * 99999); /* Value of y after evaluation is 3965 */
HP C treats the plain
char
type as
signed
by default for compatibility with VAX C and many other C compilers.
However, a command-line option can control this, and a predefined macro
can be tested to determine the setting of the option in a given
compilation. On Alpha systems,
unsigned char
might offer some performance advantage for character-intensive
processing.
An unsigned integer of n bits is always interpreted in
straight unsigned binary notation, with possible values ranging from 0
to 2 n-1 .
The C99-specified
_Bool
data type is available in all modes of the compiler except VAX C,
common, and strict ANSI89 modes. A
_Bool
object occupies a single byte of storage and is treated as an
unsigned integer
, but its value can be only 0 or 1.
double a = .01;
int b = a;
_Bool c = a;
#define bool _Bool
#define true 1
#define false 0
#define __bool_true_false_are_defined 1
Character types are declared with the keyword
char
and are integral types. Using
char
objects for nonintegral operations is not recommended, as the results
are likely to be nonportable. An object declared as a
char
type can always store the largest member of the source character set.
Valid character types are:
The wide character type
wchar_t
is provided to represent characters not included in the ASCII character
set.
The
wchar_t
type is defined using the
typedef
keyword in the
<stddef.h>
header file. Wide characters used in constants or strings must be
preceded with an
L
. For example:
#include <stddef.h>
wchar_t a[6] = L"Hello";
All
char
objects are stored in 8 bits. All
wchar_t
objects are stored as
unsigned int
objects in 32 bits. The value of a given character is determined by the
character set being used. In this text, the ASCII character set is used
in all examples. See Appendix C for a complete list of ASCII
equivalents, in decimal, octal, and hexadecimal radixes.
To aid portability, declare
char
objects that will be used in arithmetic as
signed char
or
unsigned char
. For example:
signed char letter;
unsigned char symbol_1, symbol_2;
signed char alpha = 'A'; /* alpha is declared and initialized as 'A' */
Strings are arrays of characters terminated by the null
character (\0). Section 1.9.3 has more information on the syntactic
rules of using strings; Chapter 4 has information on declaring
string literals. | http://h41379.www4.hpe.com/commercial/c/docs/6180profile_006.html | CC-MAIN-2017-43 | refinedweb | 1,353 | 52.9 |
Step 3 : Add controller class
MVC controller returns many types of output to the view according to the data we need for the application. In this article we will learn about JsonResult type of MVC . So instead of going into the depth on the subject, let us start with its practical implementation.
To know more about the Action result types please refer my previous article
What is ActionResult ?
It is the type of output format which is shown to the client .
It is the type of output format which is shown to the client .
What is JsonResult ?
JsonResult is one of the type of MVC action result type which returns the data back to the view or the browser in the form of JSON (JavaScript Object notation format).
JsonResult is one of the type of MVC action result type which returns the data back to the view or the browser in the form of JSON (JavaScript Object notation format).
Step 1 : Create an MVC application
Step 2 : Create Model Model folder in the created MVC application, give the class name employee or as you wish and click OK.Employee.cs
public class Employee { public int Id { get; set; } public string Name { get; set; } public string City { get; set; } public string Address { get; set; } }
Right click on Controller folder in the created MVC application ,give the class name Home or as you
and click on OK
HomeControlle.cs
public class HomeController : Controller { // GET: Home public ActionResult Index() { return View(); } [HttpGet] public JsonResult EmpDetails() { //Creating List List<Employee> ObjEmp = new List<Employee>() { //Adding records to list new Employee {Id=1,Name="Vithal Wadje",City="Latur",Address="Kabansangvi" }, new Employee {Id=2,Name="Sudhir Wadje",City="Mumbai",Address="Kurla" } }; //return list as Json return Json(ObjEmp, JsonRequestBehavior.AllowGet); } }
In the above controller class JsonResult method EmpDetails we have added the records into the Generic list and returning it as JSON to avoid database query for same result.
Possible Error
If you return JSON method without setting JsonRequestBehavior property to AllowGet then the following error will occur.
Why error occurred ?
The GET request by default not allowed in JSON result so to allow GET request in JsonResult we need to set JsonRequestBehavior to AllowGet as in the following code snippet:
Json(ObjEmp, JsonRequestBehavior.AllowGet);Now run the application and the Json result output will be shown into the browser as in the following screenshot:
Step 4 : Add Partial view
Right click on Home folder inside the View folder in the created MVC application as
Give the name EmpDetails and click Add button.
To bind view using json we need JQuery and the following JQuery library to communicate to the controller from view:
The preceding JQuery library file version may be different (lower or higher). Now write the following code into the partial view created:
<script src="~/Scripts/jquery-1.10.2.min.js"></script>
<script src="~/Scripts/jquery-1.10.2.min.js"></script> <script> $(document).ready(function () { //Call EmpDetails jsonResult Method $.getJSON("Home/EmpDetails", function (json) { var tr; //Append each row to html table for (var i = 0; i < json.length; i++) { tr = $('<tr/>'); tr.append("<td>" + json[i].Id + "</td>"); tr.append("<td>" + json[i].Name + "</td>"); tr.append("<td>" + json[i].City + "</td>"); tr.append("<td>" + json[i].Address + "</td>"); $('table').append(tr); } }); }); </script> <table class="table table-bordered table-condensed table-hover table-striped"> <thead> <tr> <th>Id</th> <th>Name</th> <th>City</th> <th>Address</th> </tr> </thead> <tbody></tbody> </table>
Step 5 : Call partial view EmpDetails in Main Index view
Now call the partial view EmpDetails in Main Index view as using following code
Step 6 : Run the application
@{ ViewBag. @Html.Partial("EmpDetails"); </div>
From the preceding examples we have learned about JsonResult type using the scenario on how to bind view using JSON data in ASP.NET MVC.
Watch Video
Note:
- Since this is a demo, it might not be using proper standards, so improve it depending on your skills.
- Handle the exception in the database or text file as per you convenience, since in this article I have not implemented it.
I hope this article is useful for all the readers. If you have any suggestion please contact me. | https://www.compilemode.com/2015/10/json-result-type-in-mvc.html | CC-MAIN-2022-40 | refinedweb | 704 | 52.8 |
McFarland5,122 Points
Stage 4 "Delivering the MVP Java Objects Error - Bummer! Expected "firstName" to fail but it passed.
Not sure what to think about this challenge - help!
public class TeacherAssistant { public static String validatedFieldName(String fieldName) { char first; char second; first = fieldName.charAt(0) ; second = fieldName.charAt(1) ; boolean isValidFirstChar = (first == 'm'); boolean isValidSecondChar = (Character.isUpperCase(second)); try { if (isValidFirstChar && isValidSecondChar); } catch (IllegalArgumentException iae) { throw new IllegalArgumentException("IllegalExcepttion"); } // These things should be verified: // 1. Member fields must start with an 'm' // 2. The second letter in the field name must be uppercased to ensure camel-casing // NOTE: To check if something is not equal use the != symbol. eg: 3 != 4 return fieldName; } }
2 Answers
Craig DennisTreehouse Teacher
So close Kevin! Leave the trying and catching to the caller of the method (not your job). Regarding your if statement. ..You could have two separate if statements negate each value. Or you could surround the statement you built with parenthesis and negate it. I've included that below since that is a new concept. Code will execute in the parens and then you negate it's value.
if (! (isValidFirstChar && isValidSecondChar)) { throw new IllegalArgumentException("Bad method name"); }
I intended two separate if statements, but kudos for going above and beyond ;)
Kaleb Burnham4,877 Points
I found a much simpler way to complete this challenge, only requiring two additional lines. No need to create any variables.
public class TeacherAssistant { public static String validatedFieldName(String fieldName) { if (fieldName.charAt(0) != 'm' || !Character.isUpperCase(fieldName.charAt(1))) { throw new IllegalArgumentException("IllegalException"); } return fieldName; } }
Kevin McFarland5,122 Points
Kevin McFarland5,122 Points
Hi Craig,
I know that I tend to over think the challenges.
It makes sense now... thanks
Kevin | https://teamtreehouse.com/community/stage-4-delivering-the-mvp-java-objects-error-bummer-expected-firstname-to-fail-but-it-passed | CC-MAIN-2022-27 | refinedweb | 284 | 50.43 |
Hm ... what I've seen often is that "waiting for GPU" sometimes takes up a very long time, and/or that the grey bars inside Scout start to show a sawtooth pattern. But I can't remember having seen that "clear" takes so long.
Two things: What's your stage color? That's what Starling uses to clear the context. Make sure it's perfect black (0x0), that makes a difference on some hardware.
Next, you could try if turning off VSYNC makes any difference.
Add the following code to your game, ideally right at the beginning. That "stage" is the Flash stage!
stage.addEventListener(VsyncStateChangeAvailabilityEvent.VSYNC_STATE_CHANGE_AVAILABILITY,
function (e:VsyncStateChangeAvailabilityEvent):void
{
trace("vsync change available: " + e.available);
if (e.available) stage.vsyncEnabled = false;
trace("vsync is now " + (stage.vsyncEnabled ? "on" : "off"));
});
That requires a recent AIR version.
This is how I set stage color:
[SWF(frameRate="60", backgroundColor="0x000000")]
public class Main extends BaseMainClass
Is that ok?
I made a mistake this is also affecting Android as well but it affects iOS way more.
@hardcoremore, what was the mistake just incase anyone else runs into it and has done the same thing?
The mistake was that it also affects Android devices. I originally posted that only iOS devices are affected.
@Daniel,
I have tried code for disabling vsync but Context3D.clear still takes the same amount of time. Roughly 25% of all code time is Context3D.clear.
This is really depressing.
I have added bug report to Adobe Issue tracker here:
Just for the comparison this is how much Context3D.clear takes on Desktop PC:
And that is for more than 7 minutes of game play. It is barely measurable. Only 101 ms for 7 minutes of game play.
This difference is really HUGE.
One error you made in your tests is that you had both "ActionScript sampler" AND "Memory Allocation Tracking" turned on at the same time. The latter affects the former, so I recommend you only test ActionScript performance with memory allocation tracking turned off.
[After all, Scout says that memory allocation tracking has a high overhead!]
This doesn't have to be the root of the issue, but please try that again, so we're sure.
I have disabled Memory Allocation Tracking in Adobe Scout but this issue is still present. Take a look at the screenshot:
And here is the Adobe Scout profile:
But for example take a look at iPhone X Adobe Scout Profile:
Select frames 1751-5563. There you can see Context3D.clear accounts for 58% of all code time which is insane and on other frames it is normal. One thing I have noticed is that this behavior is changing over time but only on iPhone X. On iPhone 7 Plus it is present all the time. I do not know what triggers it.
I there anyone that has iOS game that can test on iOS device with Adobe Scout to see what time do you get for Context3D.clear method as called by Painter.clear(starling.rendering)?
I also noticed that this is something that is changing at run time. For example when I tested on iPhone X sometimes Context3D.clear takes too much time then when I die in game and when I run System.gc the Context3D.clear returns to normal.
This is affecting iOS way more than Android. On Android Waiting for GPU takes much more time than Context3D.clear(). On Android Context3D.clear() is pretty normal around 3% of all time.
I can't reproduce this in my projects, although there's rarely going on really much on the screen, so it's not a perfect test. In any case, "context.clear" only takes up very little time there.
One more thing I forgot to mention: you have deactivated "Starling.enableErrorChecking", right? That's also something that has a very negative impact.
What I *did* see in the past is that I've got long "waiting for the GPU" spikes which I never could explain 100%. One reason they might happen is descried here — perhaps reading that (really interesting!) article gives you another idea what you could do to solve this.
And one more thing: you made some changes to your Starling version recently, if I remember correctly. Have you tried if this happens with "vanilla" Starling (head revision or v2.3), too? Just to be sure it's not an unexpected side effect.
Starling.enableErrorChecking and only thing I changed in Starling is regarding to touch. I will try to make more tests to see if I can discover where the bottleneck is.
Daniel,
Further testing shows that this does does not have to to with game at all. On home screen Context3D.clear can take almost 80% of all time.
Take a look at this Adobe Scout profile:
This is recorded on iPhone 7 Plus. I have only displayed Stack Screen Navigator and you can see how much Context3D.clear takes time.
Also you can see how it one time works normal and other time takes too long to run. I do not know what triggers it. One time everything is normal and one time ti sky rockets to 80% of all code time. And this is just on Home Screen almost nothing is happening there.
On what iOS Device have you tested Daniel?
Further testing also shows that this is not happening at all on iPad. I can not reproduce this on iPad Air 2.
Context3D.clear is working slow only on iPhones. I have tested on iPhone 6s, iPhone 7 Plus and iPhone 10.
On iPad everything is working normal.
I feel as if you did not clearly answer Daniel's question about Starling.enableErrorChecking. Please confirm that your app does not set Starling.enableErrorChecking to true.
Yeah Josh,
It was a typo. I am 100% not using Starling.enableErrorChecking. E.g Starling.enableErrorChecking is set to false by default I never set it to true.
Just want to add Adobe Scout profile that I recorded on iPhone 7 Plus while being on Home Screen:
Maybe someone notice something interesting. Just want to emphasize that Context3D.clear is not behaving the same all the time. It changes from time to time like you can see in the Scout Profile. Once it changes it stays like that for some time. So at some point Context3D.clear works fine for 20 seconds and than works bad for like 30 seconds and that changes from time to time. I notice that this change can be triggered sometimes with calling System.gc() which triggers garbage collection in AIR apps.
What's interesting is that you always stay below the frame budget (i.e. the red line), even when "context.clear" takes very long. Do you actually see performance problems if you don't look at Scout?
I'm just bringing this up because it could also simply be a bug in Scout, or a wrong telemetry being sent by the runtime.
A quick check: you deactivated the "CPU usage" flag in Scout. If you activate that (and ideally, just that) and look at the CPU usage in each frame (in the summary tab) — does that show similar values as the frame time? | https://forum.starling-framework.org/d/20563-context3dclear-and-waiting-for-gpu-is-killing-performance-on-ios | CC-MAIN-2019-22 | refinedweb | 1,204 | 76.72 |
[spoiler title=”Lesson Video”]
Direct Download of Video (For mobile / offline viewing)(right click > save target as)
[/spoiler]
[spoiler title=”Lesson Source Code”]
#include <iostream> using namespace std; int addFive(int); int addFiveRef(int&); void addFiveToAll(int[]); int main(){ int x = 0; //Homework //Using an array of strings for days of the week and an array for days (your choice on type) in the months of the year, ( 31, 28,...) //Figure out the last day of a user input month for the year 2015, or 2016. //2016, July etc. /*cout << addFive(x) << endl; // What will this line output? addFive(x); //5 that floats and does nothing cout << x << endl; // What will this line output? cout << "Reference: " << endl; cout << addFiveRef(x) << endl; // What will this line output? addFiveRef(x); cout << x << endl; // What will this line output?*/ int myArray[5] = { 1, 3, 5, 7, 9 }; addFiveToAll(myArray); for (int i = 0; i < 5; i++){ cout << myArray[i] << endl; } system("pause"); return 0; } int addFive(int y){ y += 5; return y ; } int addFiveRef(int &y){ y += 5; return y; } void addFiveToAll(int anArray[]){ for (int i = 0; i < 5; i++){ anArray[i] += 5; } }
[/spoiler]
Homework: Using an array of strings for days of the week and an array for days (your choice on type) in the months of the year, ( 31, 28,…) Figure out the last day of a user input month for the year 2015, or 2016. Ex: 2015, July.
What is passing by reference?
Passing by reference is when you pass an argument into a function directly. The default way which you pass arguments into a function is called “Pass by value” in which a perfect copy of the variable you are passing is made, and the one you passed is not affected by any actions taken within the function. Pass by reference passes the variable itself into the function (via reference, which we will touch on later), and any alterations made to that variable throughout the course of the function are passed made directly to the variable itself and thus persist outside of the function even without assignment.
The only difference in syntax between passing by reference and passing by value is an ampersand (&) which is put before the variable name in the function’s argument list.
int addFive(int &myNumber){} // pass by reference
int addFive(int myNumber){} //pass by value | https://beginnerscpp.com/lesson-19-functions-return-types-pass-by-value-reference/ | CC-MAIN-2019-13 | refinedweb | 392 | 63.32 |
Bioinformatics Tools Programming in Python with Qt. Part 2.
In this article will focus purely on laying out the structure for our ‘DNA Engine’ project. We will start adding elements and programming them in our next article.
So here is the structure we had up and running last time (Test UI and a Test Class):
Segment 1: File structure.
Now that we have Python Virtual Environment, PyQT module and tools installed and tested, let’s create a new structure for our project.
First, let’s delete the following files from our project (we have used them to test our development environment):
- dna_engine.py
- dna_engine.ui
- main.py
And create three new files:
- application.py – this will be our entry point. The ‘main’ function.
- engine.py – main engine class that will handle and manage the core functionality (more on that as we progress).
- engine_ui.py – ui layout file.
Segment 2: Class Template and the ‘main’ function.
Before we create the initial ‘DNAEngine’ class structure, let’s practice by going back to the Qt Designer and creating a new window ‘MainWindow’:
Let’s rename our Window and drop a Button on to it and change its text also:
Now save this as a
engine_ui.ui file in the root of our ‘DNA_Engine’ folder. Open up your terminal and execute the command we learned in the previous article:
pyuic5 engine_ui.ui -o engine_ui.py
So the final file structure for this article is this:
Now, that we have the
ui file converted to a
py file, we are ready to create our class, that will include it. Open
engine.py file and type/add the following ‘skeleton’ class:
from engine_ui import Ui_MainWindow from PyQt5 import QtWidgets import sys class DNAEngine: def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.app = QtWidgets.QApplication(sys.argv) self.MainWindow = QtWidgets.QMainWindow() def setup(self): self.ui = Ui_MainWindow() self.ui.setupUi(self.MainWindow) def run(self): sys.exit(self.app.exec_()) def display(self): self.MainWindow.show()
My file/class structure and function names will look a lot like a typical Game/Games Engine structure as 70%+ of my programming experience comes from developing Game Engines.
On the line 1 we include our
ui file, and it has just one class, which is
Ui_MainWindow.
On the line 2 we include only one module from
PyQT5. We will include others as we start using them. This module will allow us to create a window and communicate with it. We also include a standard Python module
sys to enable our application/window to ‘talk’ to the operating system.
We call our class
DNAEngine, but if you are not interested in bioinformatics direction of this project, feel free to use any other names that fit your project.
First 4 class methods we will add are very descriptive:
__init__: initialize the class and create an instance of a window/widget.
setup: run any setup for UI and load any other configurations.
run: start an instance of a class by initializing the event loop.
show: show/render the UI.
If you are not familiar with Python’s
super() function, I suggest you read this Stack Overflow page.
Now let’s use our class for the first time to “Start Our Engine”. Open
application.py file and type/add the following code:
from engine import DNAEngine if __name__ == "__main__": Engine = DNAEngine() Engine.setup() Engine.display() Engine.run()
And this is it! We just import our
DNAEngine class from
engine.py file, and call all 4 methods from our class.
The goal here is to have a clean and module design, and easy to read and maintain code.
Now let’s run and test our new setup:
Make sure you are running the project out of the virtual environment we created in the last article.
Awesome! Now, that our project is configured and tested, we are ready to start designing and programming our first UI elements. This is exactly what we will start doing in the next article.
Links:
- PyQT5 + Designer installation and use on Windows Video.
- Alan D Moore’s amazing PyQT and Designer Video.
Also, not required but a highly recommended book by Alan D Moore:
Buy: US / UK
GitLab:
YouTube playlist for this series:
2 Comments
Guest · September 1, 2020 at 02:25
When is this series coming back! I’m eagerly following along!
rebelCoder · September 5, 2020 at 11:20
Hey! Thank you. This series will continue very soon! | https://rebelscience.club/2020/06/part-2-dna-engine-structure-dna-toolkit-import/ | CC-MAIN-2020-45 | refinedweb | 740 | 66.03 |
xml 3.7.0
Dart XML #
Dart XML is a lightweight library for parsing, traversing, querying, transforming and building XML documents.:xml/xml.dart' as xml;
Reading and Writing #
To read XML input use the top-level function
parse(String input):);
The resulting object is an instance of
XmlDocument. In case the document cannot be parsed, a
XmlParserException is thrown.
To write back the parsed XML document simply call
toString(), if you need more control
toXmlString(petty: true, indent: '\t'):
print(document.toString()); print(document.toXmlString(pretty: true, indent: '\t'));
Traversing and Querying #
Accessors allow to access nodes in the XML tree:
attributesreturns a list over the attributes of the current node.
childrenreturns a list over the children of the current node.
Both lists are mutable and support all common
List methods, such as
add(XmlNode),
addAll(Iterable<XmlNode>),
insert(int, XmlNode), and
insertAll(int, Iterable<XmlNode>). Trying to add a
null value or an unsupported node type throws an
XmlNodeTypeError error. Nodes that are already part of a tree are not automatically moved, you need to first create a copy as otherwise an
XmlParentError is thrown.
XmlDocumentFragment nodes are automatically expanded and copies of their children are added.
There are various methods to traverse the XML tree along its axes:
precedingreturns an iterable over nodes preceding the opening tag of the current node in document order.
descendantsreturns an iterable over the descendants of the current node in document order. This includes the attributes of the current node, its children, the grandchildren, and so on.
followingthe nodes following the closing tag of the current node in document order.
ancestorsreturns an iterable over the ancestor nodes of the current node, that is the parent, the grandparent, and so on. Note that this is the only iterable that traverses nodes in reverse document order.
For example, the
descendants iterator could be used to extract all textual contents from an XML tree:
var textual = document.descendants .where((node) => node is xml.XmlText && !node.text.trim().isEmpty) .join('\n'); print(textual);
Additionally, there are helpers to find elements with a specific tag:
findElements(String name)finds direct children of the current node with the provided tag
name.
findAllElements(String name)finds direct and indirect children of the current node with the provided tag
name.
For example, to find all the nodes with the <title> tag you could write:
var titles = document.findAllElements('title');
The above code returns a lazy iterator that recursively walks the XML document and yields all the element nodes with the requested tag name. To extract the textual contents call
text:
titles .map((node) => node.text) .forEach(print);
This prints Growing a Language and Learning XML.
Similarly, to compute the total price of all the books one could write the following expression:
var total = document.findAllElements('book') .map((node) => double.parse(node.findElements('price').single.text)) .reduce((a, b) => a + b); print(total);
Note that this first finds all the books, and then extracts the price to avoid counting the price tag that is included in the bookshelf.
Building #
To build a new XML document use an
XmlBuilder. The builder implements a small set of methods to build complete XML trees. To create the above bookshelf example one would write:
var builder = new xml.XmlBuilder(); builder.processing('xml', 'version="1.0"'); builder.element('bookshelf', nest: () { builder.element('book', nest: () { builder.element('title', nest: () { builder.attribute('lang', 'english'); builder.text('Growing a Language'); }); builder.element('price', nest: 29.99); }); builder.element('book', nest: () { builder.element('title', nest: () { builder.attribute('lang', 'english'); builder.text('Learning XML'); }); builder.element('price', nest: 39.95); }); builder.element('price', nest: 132.00); }); var bookshelfXml = builder.build();
Note the
element method. It is quite sophisticated and supports many different optional named arguments:
- The most common is the
nest:argument which is used to insert contents into the element. In most cases this will be a function that calls more methods on the builder to define attributes, declare namespaces and add child elements. However, the argument can also be a string or an arbitrary Dart object that is converted to a string and added as a text node.
- While attributes can be defined from within the element, for simplicity there is also an argument
attributes:that takes a map to define simple name-value pairs.
- Furthermore we can provide an URI as the namespace of the element using
namespace:and declare new namespace prefixes using
namespaces:. For details see the documentation of the method.
The builder pattern allows you to easily extract repeated parts into specific methods. In the example above, one could put the part that writes a book into a separate method as follows:
buildBook(xml.XmlBuilder builder, String title, String language, num price) { builder.element('book', nest: () { builder.element('title', nest: () { builder.attribute('lang', 'english'); builder.text(title); }); builder.element('price', nest: price); }); }
Misc #
Examples #
There are numerous packages depending on this package:
- image decodes, encodes and processes image formats.
- StageXL is a 2D rendering engine.
- Extensible Resource Descriptors is a library to read Extensible Resource Descriptors.
- xml2json is an XML to JSON conversion package.
- spreadsheet_decoder is a library for decoding and updating spreadsheets for ODS and XLSX files.
- and many more ...
Supports #
- Standard well-formed XML (and HTML).
- Reading documents using an event based API (SAX).
- Decodes and encodes commonly used character entities.
- Querying, traversing, and mutating API using Dart principles.
- Building XML trees using a builder API.
Limitations #
- Doesn't validate namespace declarations.
- Doesn't validate schema declarations.
- Doesn't parse and enforce DTD.
History #
This library started as an example of the PetitParser library. To my own surprise various people started to use it to read XML files. In April 2014 I was asked to replace the original dart-xml library from John Evans.
License #
The MIT License, see LICENSE.
Changelog #
3.7.0 #
- Update to PetitParser 3.0.0.
- Dart 2.7 compatibility and requirement.
3.6.0 #
- Entity decoding and encoding is now configurable with an
XmlEntityMapping. All operations that read or write XML can now (optionally) be configured with an entity mapper.
- The default entity mapping used only maps XML entities, as opposed to all HTML entities as in previous versions. To get the old behavior use
XmlDefaultEntityMapping.html5.
- Made
XmlParserErrora
FormatExceptionto follow typical Dart exception style.
- Add an example demonstrating the interaction with HTTP APIs.
3.5.0 #
- Dart 2.3 compatibility and requirement.
- Turn various abstract classes into proper mixins.
- Numerous documentation improvements and code optimizations.
- Add an event parser example.
3.4.0 #
- Dart 2.2 compatibility and requirement.
- Take advantage of PetitParser fast-parse mode:
- 15-30% faster DOM parsing, and
- 15-50% faster event parsing.
- Improve error messages and reporting.
3.3.0 #
- New event based parsing in
xml_events:
- Lazy event parsing from a XML string into an
Iterableof
XmlEvent.
- Async converters between streams of XML,
XmlEventand
XmlNode.
- Clean up package structure by moving internal packages into the
src/subtree.
- Remove the experimental SAX parser, the event parser allows more flexible streaming XML consumption.
3.2.4 #
- Remove unnecessary whitespace when printing self-closing tags.
- Remember if an element is self-closing for stable printing.
3.2.0 #
- Migrated to PetitParser 2.0
3.1.0 #
- Drop Dart 1.0 compatibility
- Cleanup, optimization and improved documentation
- Add experimental support for SAX parsing
3.0.0 #
- Mutable DOM
- Cleaned up documentation
- Dart 2.0 strong mode compatibility
- Reformatted using dartfmt
2.6.0 #
- Fix CDATA encoding
- Migrate to micro libraries
- Fixed linter issues
2.5.0 #
- Generic Method syntax with Dart 1.21
2.4.5 #
- Do no longer use ArgumentErrors, but instead use proper exceptions.
2.4.4 #
- Fixed attribute escaping
- Preserve single and double quotes
2.4.3 #
- Improved documentation
2.4.2 #
- Use enum as the node type
2.4.1 #
- Fixed attribute escaping
2.4.0 #
- Fixed linter issues
- Cleanup node hierarchy
2.3.2 #
- Improved documentation
2.3.1 #
- Improved test coverage
2.3.0 #
- Improved comments
- Optimize namespaces
2.2.2 #
- Formatted source
2.2.1 #
- Cleanup pretty printing
2.2.0 #
- Improved comments
Dart XML Examples #
This package contains examples to illustrate the use of Dart XML. A tutorial and full documentation is contained in the package description and API documentation.
ip_lookup #
This example performs an API call to ip-api.com to search for IP and domain meta-data. If no query is provided the current IP address will be used. Various options can be changed over the command line arguments.
dart example/ip_api.dart --help dart example/ip_api.dart --fields=query,city,country
xml_flatten #
This example contains a command-line application that flattens an XML documents from the file-system into a list of events that are printed to the console. For example:
dart example/xml_flatten.dart example/books.xml
xml_pp #
This example contains a command-line application that reads XML documents from the file-system and pretty prints the formatted document to the console.
dart example/xml_pp.dart example/books.xml
xml_grep #
This example contains a command-line application that reads XML documents from the file-system and prints matching tags to the console. For example:
dart example/xml_grep.dart -t title example/books.xml
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: xml: ^3:xml/xml.dart';
We analyzed this package on Mar 26, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.7.1
- pana: 0.13.6 | https://pub.dev/packages/xml | CC-MAIN-2020-16 | refinedweb | 1,585 | 52.46 |
Hottest Forum Q&A on CodeGuru - October 27th
Introduction:
Lots of hot topics are covered in the Discussion Forums on CodeGuru. If you missed the forums this week, you missed some interesting ways to solve a problem. Some of the hot topics this week include:
- How can I get the current traffic bandwidth of my computer?
- How can I run one executable inside another's client window space?
- How do I serialize a CMap?
- How do I initialize the size of a char*[]?
- What is the best way to create a CEdit variable?
George2 wants to know the current bandwidth that his system his receiving or sending.
Hello, everyone! I want to get the current bandwidth (input and output respectively) of a Windows system, for example, 20.2kB for input and 30.1kB for output. Where can I find some sample codes?
Well, the solution is to use the PDH Interface. This interface provides the necessary information to get the total bandwidth. Here is some sample code:
#include <windows.h> #include <conio.h> #include <stdio.h> #include <pdh.h> #pragma comment(lib,"pdh.lib") int main(int argc, char* argv[]) { PDH_STATUS pdhStatus = ERROR_SUCCESS; LPTSTR szCounterListBuffer = NULL; DWORD dwCounterListSize = 0; LPTSTR szInstanceListBuffer = NULL; DWORD dwInstanceListSize = 0; HQUERY hQuery; HCOUNTER hCounter; DWORD dwType = 0; PDH_FMT_COUNTERVALUE value; char szCounter[256] = {0}; pdhStatus = PdhEnumObjectItems (NULL,NULL, "Network Interface", szCounterListBuffer, &dwCounterListSize, szInstanceListBuffer, &dwInstanceListSize, PERF_DETAIL_WIZARD, 0); szCounterListBuffer = (LPTSTR)malloc ((dwCounterListSize * sizeof (char))); szInstanceListBuffer = (LPTSTR)malloc ((dwInstanceListSize * sizeof (char))); if(!szCounterListBuffer || !szInstanceListBuffer) { printf ("unable to allocate buffer\n"); return 1; } pdhStatus = PdhEnumObjectItems (NULL,NULL, "Network Interface", szCounterListBuffer, &dwCounterListSize, szInstanceListBuffer, &dwInstanceListSize, PERF_DETAIL_WIZARD, 0); if(pdhStatus == ERROR_SUCCESS) { sprintf(szCounter,"\\Network Interface(%s) \\Current Bandwidth", szInstanceListBuffer); } else { printf ("unable to allocate buffer\n"); return 1; } if( PdhOpenQuery(NULL ,0 ,&hQuery)) { printf("PdhOpenQuery failed\n"); return 1; } if( PdhAddCounter(hQuery,szCounter,NULL,&hCounter)) { printf("PdhAddCounter failed\n"); return 1; } while(!_kbhit()) { if(PdhCollectQueryData(hQuery)) { printf("PdhCollectQueryData failed\n"); break; } if( PdhGetFormattedCounterValue(hCounter, PDH_FMT_LONG , &dwType, &value)) { printf("PdhGetFormattedCounterValue failed\n"); break; } printf("Current Bandwidth : %ld\r",value.longValue); Sleep(1000); } if (szCounterListBuffer != NULL) free (szCounterListBuffer); if (szInstanceListBuffer != NULL) free (szInstanceListBuffer); PdhRemoveCounter(hCounter); PdhCloseQuery(hQuery); return 0; }
scott meyer asked a very interesting question and I was curious whether or not this is possible. He needs to run an application in another application's window..
Do you know any possibilities about how to do this?
OReubens sees the possiblities and explains it in all, a bit like how MSDev displays the output of the compiler/linker.
If Fred is a GUI program, it's still somewhat possible, although it'll require LOTS of work becausee you'll have to trap all window creation and window messages, and pass them along from wilma to fred. This is recommended only is probably not going to be worth it.
martenbengtsson needs to store and load a CMap. He wants to use SerializeElements, but is not sure about it. Here is his comment.
I am trying to save and load a CMap. I thought I could just call the SerializeElements. But everybody seams to overload the SerializeElements why is that necessesary? I have not seen one single example of a call to the function. SerializeElements takes the size of the CMap as input argument. But how do I know the size of the CMap when I load a CMap from a file?
void CGsdoc_b1Doc::Serialize(CArchive& ar) { int size = m_UserMap.GetCount(); if (ar.IsStoring()) { // TODO: add storing code here SerializeElements ( ar, &m_UserMap , size); }
The trick is to override the SerializeElements in the Document implementaion. The SerializeElements function is called like the above in the Serialize function in the Document. It goes something like this:
void AFXAPI SerializeElements ( CArchive& ar, CUser* pNewUser, int nCount ) { for ( int i = 0; i < nCount; i++, pNewUser++ ) { // Serialize each CPerson object pNewUser->Serialize( ar ); } }
Also take a look at the MSDN article, which explains the topic pretty well.
waverdr9 is working on a shell application that accepts a char *arg[]. He has set up the size as char *arg[20] but unfortunaly it always returns the size of 0.
I am writing a program that acts as a simple shell. Basically, my program outputs a prompt, then waits for the user to input a command. My program then handles that command and ouputs a new prompt. This process continues until the user quits. The specifics of the program are not relevant though. The only question I have is with the variable char *arg[] inside of my Info struct. I am not sure how to initialize it's size. I tried just saying char *arg[20], but that ended up giving it a size of 0 for some reason. Can anyone correct this problem in my code for me? Can anyone tell me how to initialize the size of a char*[]? I have the entire program below, or you can download the attachment of it. Download
waverdr9 has also put the cpp in his thread. Unfortunaly, it's too large to show it here in the column. I suggest that you download the file. The problem is that if you only input one word (ex: "ls"), it works. But if you input more than one word, it goes out of bounds of the array. So, the problem is that there you have not allocated any space for the char*. There is no new[] or malloc to create this space. Initialize the char* to something like this and it will work.
argv[0] = new char [SomeLength]; //... delete [] argv[0];
dineshns wants to know the correct way to create a member varaible for CEdit. Although this might be a simple question, I think such questions are very important.
Assume that a dialog Box has a Edit Control. To access the edit control we should define a member variable. What is the best way of definning Member variable?
CEdit* m_pedtName; // Pointer
OR
CEdit m_edtName;
The solution is simple. Use the second one. But why?
Pointers always introduce the possibility of memory leaks, invalid addreses, and such. There is basically no need to use them for the controls.
There might be really rare cases where you would use a pointer to controls for a good reason, in my eyes. Besides that, if you assign these member variables via the class wizard....then you only have one choice, anyway.
I will work with this for a while and put it on the plugin list when there are no bugs...
R and iTerm work flawlessly. Stata is a little annoying because you first have to save the code in a file and then open the .do file with Stata. Doing this, however, changes the Stata working directory. Currently, I call a second file which just runs "cd '$job'" to change the working directory back to the global $job. So if you define such a global, Stata always changes back. Suggestions for alternative implementations are welcome! You might also have to change 'StataMP' to 'StataSE' depending on the version of Stata you have.
Key bindings
{ "keys": ["super+e"], "command": "send_selection" },
{ "keys": ["super+shift+e"], "command": "send_selection_iterm" }
Send-Selection.py (copy to /Packages/User)
import sublime, sublime_plugin
import os, subprocess, string

#
# to test strings:
# self.view.insert(edit, 0, cmd)

class SendSelectionCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        # get selection
        selection = ""
        for region in self.view.sel():
            selection += self.view.substr(region) + "\n"
        # only proceed if selection is not empty
        if selection != "\n":
            extension = os.path.splitext(self.view.file_name())[1]
            # R file
            if extension.lower() == ".r":
                # define osascript command
                cmd = """osascript -e 'tell app "R" to activate' """
                cmd += """-e 'tell app "R" to cmd \"""" + string.replace(selection, "\"", "\\\"") + """\"' """
                # run and reactivate Sublime Text 2
                os.system(cmd)
                subprocess.Popen("""osascript -e 'tell app "Sublime Text 2" to activate' """, shell=True)
            # Stata file
            if extension.lower() == ".do":
                # define location of do file
                file = sublime.packages_path() + "/User/sublime2stata.do"
                file_restore = sublime.packages_path() + "/User/sublime2stata_restore.do"
                # copy selection into file
                os.system("echo '" + selection + "' > '" + file + "'")
                # restore stata working directory from global $job
                os.system("""echo 'qui cd \"$job\"' > '""" + file_restore + "'")
                # define osascript command and run
                cmd_run = """osascript -e 'tell app "StataMP" to activate' """
                cmd_run += """-e 'tell app "StataMP" to open POSIX file \"""" + file + """\"' """
                cmd_run += """-e 'tell app "StataMP" to open POSIX file \"""" + file_restore + """\"' """
                os.system(cmd_run)
                # os.system("rm '" + file + "'")
                # os.system("rm '" + file_restore + "'")
                subprocess.Popen("""osascript -e 'tell app "Sublime Text 2" to activate' """, shell=True)

class SendSelectionItermCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        selection = ""
        for region in self.view.sel():
            selection += self.view.substr(region) + "\n"
        if selection != "\n":
            # define location of shell script with selection as argument
            script = "sh '" + sublime.packages_path() + "/User/send-selection-iterm.sh' \"" + string.replace(selection, "\"", "\\\"") + "\""
            # run shell script and reactivate Sublime Text 2
            os.system(script)
            subprocess.Popen("""osascript -e 'tell app "Sublime Text 2" to activate' """, shell=True)
shell script for iTerm: send-selection-iterm.sh (copy to /Packages/User)
#!/bin/sh
PASTE=$(echo "$@" | sed 's/\/\//*/g' | sed 's/\"/\\\"/g' | sed 's/ //g' | sed -n '1h
1!{
# if the sought-after regex is not found, append the pattern space to hold space
/\/\*.*\*\// !H
# copy hold space into pattern space
g
# if the regex is found, then...
/\/\*.*\*\// {
# the regular expression
s/\/\*.*\*\// /g
# print
p
# read the next line into the pattern space
n
# copy the pattern space into the hold space
h
}
# copy pattern buffer into hold buffer
h
}
# if the last line then print
$p
')
osascript << END
tell application "iTerm"
activate
tell current terminal
tell current session
write text "$PASTE"
end tell
end tell
end tell
END
#tell application "Sublime Text 2"
# activate
#end tell
#END | http://www.sublimetext.com/forum/viewtopic.php?p=15994 | CC-MAIN-2015-40 | refinedweb | 546 | 50.63 |
Special Report -- Mutual Funds
Mutual Funds: What's Wrong
To keep pulling in investors, managers must lower costs and deliver the goods
Investors are losing their appetite for mutual funds. Despite a fifth knock-out year in a row for the stock market and an apparent snapback in fund performance, people are putting far fewer dollars into mutual funds than they did a few years ago. Sure the numbers are huge--equity funds took in an estimated $170 billion last year. But that's 30% below that of two years ago (chart, page 69).
Stock market awareness has never been higher, and the baby boom generation's need to gather assets for retirement has never been greater. Yet investors are seeking alternatives to mutual funds. Cheap online trading and the lure of quick riches draw some. Wealthier individuals, many of whom amassed their fortunes with the help of funds, now have enough money to hire investment managers to run customized stock portfolios. And many large employers, which adopted mutual funds for their 401(k) plans earlier in the decade, are increasingly casting them aside.
What's wrong with mutual funds? In part, they charge too much for their service, they often stick investors with unwanted tax bills, and the fund companies themselves have flooded the marketplace with products that are often difficult to differentiate and understand. "Investors are confused," says Lawrence J. Lasser, CEO of Putnam Investments Inc. "There are too many funds." But most of all, it's a question of performance. In the greatest bull market in history, the funds have too often failed to deliver the goods.
The mutual-fund industry, which controls more than $6 trillion in equity, bond, and money-market funds, is in no danger of withering away. Funds meet the needs of millions with no interest in online trading or only modest sums to invest. Still, the stocks of mutual-fund companies are underperforming the stock market, and that's a signal that there are problems in the industry. "Mutual funds used to be the investment of choice," says A. Michael Lipper, chairman of Lipper Inc., a fund data and research company. "Now, it's more the investment people use when they have no other choices."
Funds could win back investors if they improved their returns. Sure, over the past five years, the average fund earned 19% a year, a rate at which $10,000 grows into nearly $24,000. But relative to the Standard & Poor's 500-stock-index funds, which don't pick stocks but just buy those in the index, that's woeful. An index-fund investor would have $35,000--a 28.5% compounded average annual return. In all, just 7% of all equity funds beat the S&P over the past five years. Even subtracting foreign and specialty funds, the numbers don't get much better. Just 15% of U.S. diversified funds beat the index.

SKEWERED AVERAGE. Mutual-fund performance has brightened of late, and 1999 was the first time since 1993 that the funds beat the S&P, which earned a 21% total return. Thanks to a boom in small and mid-cap stocks, U.S. diversified funds clocked a 27.7% return for the year. Add in the triple-digit returns earned by many technology and some international funds, and the all-equity average jumps to 31.2%.
But the average is deceptive. The median fund return--half of them did better, half worse--is just 20.9%, which suggests the huge returns of a few funds is skewing the average upward. In most years, the median and average returns are no more than a few percentage points apart.
So why have returns not been better? The most common explanation: The stock market swings back and forth between the large-cap stocks that dominate the index and smaller and mid-cap stocks favored by the mutual funds. The problem was that in the latter half of the 1990s, the swing to large-cap was more prolonged and extreme than in previous cycles.
It's a lame excuse. Fund managers failed to pick up on how the economy and the stock market were changing and how the pricing pressures of a disinflationary environment and increasing globalization favored the very largest companies.

MISSED THE BOAT. Five years ago, only about 2% of the mutual funds had portfolios whose median market capitalization was greater than that of the S&P 500, says Don Phillips, CEO of Morningstar Inc. "So when big-cap stocks led the way, the funds didn't own enough of them." Or even if they did have big stocks, they did not own them in proportion to the stocks' weight in the index.
Those fund managers who caught the big-cap wave--and the subsequent surge in technology stocks, which reflected the information revolution transforming the economy--made bundles for shareholders. Getting those two moves right propelled Janus Funds from a midsize firm to one of the industry's largest (box).
Janus was clearly in the minority. "The fund industry blew the technology boom, too," says Phillips. "You'd be surprised how many fund managers tell us they don't buy technology because it's too risky or they don't understand it. It's like saying: `I don't understand the age in which I live."'
Changing behavior isn't easy. Most funds sell a style of management, a stock selection process, or a discipline that's worked well over time. When it stops working--and that's what happened to scores of value funds over the past few years--the managers usually stick to their systems and wait.
But what if the economy and the stock market have changed in a way that makes the manager's system obsolete? That's a tough call to make, but fund managements need to consider it. "People tend to dismiss what doesn't fit into their models," says Mark Finn, an investment consultant hired by the Lindner Funds last year to overhaul an investment process that was failing. "But you also have to ask if the model is missing something." Finn says he didn't abandon Lindner's tradition of value investing, but "we had to make it more sensitive to current conditions." Robert C. Doll, chief investment officer of Merrill Lynch Asset Management, says following a discipline is important, "but so is bending it when necessary. In a growth-oriented market like we've had, I'm more willing to pay a higher price for a company with good growth prospects."
All told, there's a good case to be made that the funds and the fund industry have been too timid, and funds have under-performed because of it. Not that risk-aversion is a sin. But smart investing is also knowing when to rev up and when to dial back on risk.
Within mutual funds, managers tend to use risk controls, such as limiting the amount of the fund that can be devoted to a particular sector such as technology. If this one fund were your only holding, that may make sense. But most investors today hold multiple funds, and what really counts is not the risk of one fund but how the funds in a portfolio fit together.

BETTING BIG. The truth is risk-aversion may be more a business decision for the fund management company than an investment decision. Fund managers are compensated based on the assets under management, so the incentive is to hold on to what they've got. Sure, if a big bet pays off, the assets will grow and money will cascade into the fund. Missing a big bet will have the opposite effect.
"Fund companies have a basic conflict of interest with fund investors," says Mark P. Hurley, CEO of Undiscovered Managers Funds. "The companies' have an economic incentive to gather and retain assets. Shareholders' interest is maximizing performance."
The interests of the two need to be drawn closer. One way is to put funds on a "fulcrum" fee: The funds win a higher management fee if they exceed a benchmark but give up a portion if they don't. Fidelity, Vanguard and a few others have funds with these incentives, but for the most part they have not been widely adopted by the industry.
And if an incentive fee is good for the fund managers, why not for the independent directors who are paid to represent shareholder interests on boards? Right now, directors are not even required to own shares in the funds they oversee, although an industry task force on directors did recommend last year that they do so.

TOO MANY FUNDS. Another way to improve returns is to shut a fund's doors to new investors before it gets too large to manage effectively. This is especially critical for funds investing in small companies, where too much money crimps returns. At what point does size start to slow the fund? It could be as little as $100 million or $200 million for small-cap funds and several billion dollars for large-cap.
Many funds need to close down--period. John Rekenthaler, research director for Morningstar, estimates that at least half of all equity funds are below $50 million in assets, probably not profitable for their sponsors, and with little likelihood of ever getting much bigger. Liquidating or merging them would go a long way toward cleaning up the clutter of funds that daze and confuse prospective investors and would lower overhead for the firms.
That would give fund companies latitude to lower expenses for shareholders, who by and large haven't benefited much from the economies of scale that go with the sixfold increase in assets over the past decade (chart). The average expense ratio for equity funds is 1.55%, up from 1.45% a decade ago, according to Morningstar. Bond fund expenses have shot up from 0.84% to 1.08%.
Why so high? For equity funds, at least, it's the bull market. With prospects for 50% or 100% returns, who's paying attention? Yet the problem in paying higher expenses is that it does not gain investors anything. "In most industries, you pay a higher price to get higher quality," says Charles A. Trczinka, a finance professor at the State University of New York at Buffalo. "That's not so here." In fact, some fund families with the better returns are also those with lower than average expenses, such as Fidelity, Janus, American, and Vanguard, the company with the lowest cost. Expenses weigh even more heavily on bond funds, where the potential return is in the single digits. No wonder inflows to bond funds have slowed to a trickle over the past five years.

AT WHAT COST? Another reason for rising expense ratios is 12(b)-1 fees. These fees, which show up in the expense ratio, pay for the fund's advertising or distribution costs or to compensate those who sell the fund. These fees can be as little as 0.25% or as high as 1% per year--and they come right out of the fund's assets. For funds that sell through brokers and other intermediaries, these fees supplement or supplant the sales charges, or "loads," which have fallen by more than half over the past 20 years. So the broker's compensation is increasingly coming out of the fund, where in the past the shareholder paid it up front.
This shift distorts the expense ratios, according to the Investment Company Institute, the funds' trade association. So the ICI takes a different tack. It rolls sales charges and expenses together to calculate the "cost" of fund ownership. The ICI also weights the expenses by the size of the fund, so the 0.18% expense ratio for the $104 billion Vanguard 500 Index counts 1,000 times more than the 1.78% expense ratio of the $104 million Phoenix-Engemann Small-Midcap Growth Fund A.
With the ICI's approach, the funds' costs look like they're coming down. The cost of ownership for equity funds dropped from 2.25% in 1980 to 1.35% last year. But it's "consumers who are lowering their costs, not the producers," says John C. Bogle, founder and former chairman of Vanguard Group, a critic of the high costs in the fund industry. Investors do that by putting their money in lower-cost funds, like Vanguard index funds. "There's no price competition among fund companies," says Bogle, who blames this situation, in part, on fund directors who put the interests of fund management ahead of those of the shareholders they represent.
But even using ICI numbers, the drop in the cost of investing in funds looks puny considering how sharply costs on the Street have fallen. For example, technology now enables individuals to trade stocks online for as little as a penny per share. A decade ago, even discounters were charging about 15 cents, and full-service firms, anywhere from 30 cents to 50 cents. It may be unrealistic to think the funds could cut their service costs by that much, since an online transaction is an automated service--and investment management, except perhaps for index funds, is not. But surely, the funds could use technology to give investors a break.
Certainly, separate account management is no less labor-intensive than mutual funds, and the cost of that service is dropping. That has enabled investors with six-figure portfolios to switch from mutual funds to their own customized portfolio of stocks for little or no additional cost.

TAX MAN COMETH. As an alternative to mutual funds, the separately managed account becomes especially attractive in January. That's when the inevitable 1099s arrive in the mail, notifying investors and the Internal Revenue Service of the taxable distributions the fund made the year before--and on which tax will be due. The gains are triggered when managers take profits--a process over which the fund shareholder has no control. Over the past five years, taxes have effectively cost fund shareholders about 2.3 percentage points a year, about 10% of the return (chart). With separate accounts, portfolio managers can work with clients on timing gains to minimize taxes.
Mutual funds have no choice about distributing their gains, but critics argue they don't do enough to lower them. One way is through better bookkeeping. Just as a tax-savvy investor would reduce the size of a profitable block of stock by selling his highest-cost shares, fund companies need to do the same. Yet not all funds follow the "highest in, first out" accounting. They should. Joel S. Dickson, a tax specialist at Vanguard, says studies show that HIFO accounting can save shareholders as much as 1% a year in costs.
Another way to cut the tax bill is to trade less often. The average equity fund has a turnover rate of 90%, which means that a $1 billion fund does about $900 million worth of trades each year. Besides triggering gains, trading incurs commission costs of about 5 cents a share--that's much higher than most individuals pay for online brokerage. Why so high? In part, by paying higher commissions, investment management companies can also use some of the commission--called "soft dollars"--to pay for research, data services, or even newspapers and magazines. Think of it this way: It's sort of like frequent-flier miles for the fund managers, except they use the shareholders' money.

SHAKING THE MARKET. Commissions are not the only costs involved in trading--there's also what is commonly called "market impact cost," an additional cost incurred when the fund's buy or sell order itself changes the price of the stock. For example, if a mutual fund wants to sell a large block of stock, it may have to accept a lower price in order to do the trade. Likewise, if the fund is shopping for a large block of shares, it will have to pay more for it.
Market impact cost is not something you find in a fund's financial statement. But it can be detected through analysis of the trading records. The cost to shareholders can be high--1% to 5% a year, depending on the size and liquidity of the stocks that a fund trades and the style of trading. Funds that invest in small-cap stocks have higher impact costs than those that buy large-caps, says Nicolo Torre, a managing director at BARRA Inc., an investment consulting firm. And funds that practice a momentum strategy--buying what's hot--usually end up paying more than value investors that buy and hold. The only way to cut market impact costs, he says, is through fewer trades. "Ten percent of the trades reflect 90% of the market impact cost," says Torre. "Halve the number of trades, and you can significantly lower the fund's market impact cost."
Trading behavior is not only dependent on the portfolio manager's investment decisions. Sometimes shareholders rushing in or running out will force the manager's hand. Certainly, that's the case with Internet funds, which likely bear a huge market impact cost for buying thinly traded Internet stocks in a hurry. Of course, the returns have been so large that the cost is hardly noticed. However, if the Internet stocks tank and the funds are hit with a wave of redemptions, the impact cost will be huge--and without any gains to offset them.

MAKING PATIENCE PAY. Mutual funds have become more sensitive in recent years to traders who dart in and out of the funds. In some cases, the traders have been booted out, and in others, redemption fees have been instituted to discourage short-term switching. Industrywide, redemption rates are on the rise.
Slowing down the trading would also enable funds to be more fully invested and keep less cash in the till for redemptions. That cash is a drag on performance, since it earns less than it would if it were invested in stocks. Over the long haul, lower cash levels should improve returns.
Mutual-fund companies are quick to point their finger at such bad practices of investors as excessive trading in and out of funds. The fund companies could do more to encourage good investor behavior, however. One idea: Create a separate class of shares for investors who have been in the fund for, say, five or ten years, a move that would allow the fund company to give the shareholders a break on expenses. Or, asks SUNY's Trczinka, what about giving a special dividend--payable in shares only--to long-term investors on the anniversary of their investments?
That could help improve the relationships between fund companies and their investors. But it's not a substitute for what really needs to be done: cutting costs and raising investor returns.

By Jeffrey M. Laderman in New York, with Amy Barrett in Philadelphia
OpenMesh's proprietary OM format allows storing and restoring custom properties along with the standard properties.
For it we have to use named custom properties like the following one
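(a sketch; the handle and property names are illustrative):

  OpenMesh::VPropHandleT<float> vprop_float;      // handle for a per-vertex float
  mesh.add_property(vprop_float, "vprop_float");  // register it under a name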
Here we registered a float property for the vertices of the mesh under the name "vprop_float". The name of a property that we want to make persistent must follow a few rules:
"v:",
"h:",
"e:",
"f:"and
"m:"are reserved.
If we stick to these rules we are fine. Furthermore, we have to consider that the names are handled case-sensitively.
To actually make a custom property persistent we have to set the persistent flag in the property with
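a call along the following lines (a sketch, reusing the handle registered above):

  mesh.property(vprop_float).set_persistent(true);  // mark the property persistent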
Now we can use IO::write_mesh() to write the mesh to a file on disk. The custom properties are added after the standard properties in the file, with the name and its binary size. These two pieces of information are evaluated when reading the file again. To successfully restore the custom properties, the mesh must have registered named properties with equal names (case-sensitive compare). Additionally, when reading the data, the number of bytes read for a property must match the provided number in the file. If the OM reader did not find a suitable named property, it will simply skip it. If the number of bytes does not match, the complete restore will be terminated and IO::read_mesh() will return false. And if the data cannot be restored because the appropriate restore method is not available, the exception std::logic_error() will be thrown.
Since we now know the behaviour, we need to know what kind of data we can store. Without any further effort, simply using named properties and setting the persistent flag, we can store the following types. For further reading we call these types basic types. Apparently we cannot store non-basic types, which are
However, there is a way to store custom types (else we could not store std::string). Let's start with a simpler custom type. For instance, we have a struct MyData like this
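(a sketch; the member names are illustrative):

  struct MyData
  {
    int             ival;
    bool            bval;
    double          dval;
    OpenMesh::Vec4f vec4fval;  // vector of 4 floats
  };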
Here we keep an int, bool, double value and a vector of 4 floats, which are all basic types. Then we need to specialize the template struct OpenMesh::IO::binary<> within the namespace OpenMesh::IO.
Remember not to use long double, (unsigned) long and size_t as basic types because of inconsistencies between 32/64bit architectures.
Herein we have to implement the following set of static member variables and functions:
The flag is_streamable has to be set to true, else the data cannot be stored at all.
The size_of methods
Since the size of the custom data can be static, which means we know the size at compile time, or dynamic, which means the size is known only at runtime, we have to provide the two size_of() methods.
The first declaration is for the static case, while the second is for the dynamic case. Though the static case is more simple, it is not straightforward. We cannot simply use sizeof() to determine the data size, because it will return the number of bytes it needs in memory (possibly with 32-bit alignment). Instead we need the binary size, hence we have to add up the single elements in the struct.
Actually we would need to sum up the single elements of the vector, but in this case we know the result for sure (4 floats make 16 bytes, which is 32-bit aligned, therefore sizeof() returns the wanted size). But keep in mind that this is a potential location for errors when writing custom binary support.
The second declaration is for the dynamic case, where the custom data contains pointers or references. This static member must properly count the data by dissolving the pointers/references, if this data has to be stored as well. In the dynamic setting the static variant cannot return the size, therefore it must return IO::UnknownSize.
In this case the dynamic variant simply returns the size by calling the static variant, as the sizes are identical for both cases.
The store and restore methods
For the dynamic case as for the static case, we have to make up a scheme for how we would store the data. One option is to store the length of the data and then the data itself. For instance, the type std::string is implemented this way. (We store the length first in a 16-bit word (=> max. length 65536), then the characters follow. Hence size_of() returns 2 bytes for the length plus the actual length of the value v.) Since MyData contains only basic types, we can implement the necessary methods store and restore by simply breaking up the data into the basic types, using the pre-defined store/restore methods for them:
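A sketch of the two methods, assuming the MyData members named above:

  static size_t store(std::ostream& _os, const value_type& _v, bool _swap = false)
  {
    size_t bytes = 0;
    bytes += IO::store(_os, _v.ival,     _swap);  // break the struct up ...
    bytes += IO::store(_os, _v.bval,     _swap);
    bytes += IO::store(_os, _v.dval,     _swap);
    bytes += IO::store(_os, _v.vec4fval, _swap);  // ... into its basic members
    return bytes;
  }

  static size_t restore(std::istream& _is, value_type& _v, bool _swap = false)
  {
    size_t bytes = 0;
    bytes += IO::restore(_is, _v.ival,     _swap);
    bytes += IO::restore(_is, _v.bval,     _swap);
    bytes += IO::restore(_is, _v.dval,     _swap);
    bytes += IO::restore(_is, _v.vec4fval, _swap);
    return bytes;
  }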
It's very important that the store/restore methods count the written/read bytes correctly and return the value. On error both functions must return 0.
A more complex situation is given with the following property
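(a sketch; the handle name is illustrative):

  // a mesh property whose value type contains a container
  OpenMesh::MPropHandleT< std::map<std::string, int> > mmap;
  mesh.add_property(mmap, "mmap");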
In this case the data contains a container, a map from strings to integer numbers. If we want to store this as well, we need to make up a scheme for how the map will be stored in a sequential layout. First we store the number of elements in the map. Then, since the map has an iterator, we simply iterate over all elements and store each pair (key/value). This procedure is the same for the size_of(), store(), and restore() methods. For example, the size_of() methods look like this
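(a sketch, under the assumption that value_type is the map type registered above):

  static size_t size_of(const value_type& _v)
  {
    // one count plus the size of every (key, value) pair
    size_t bytes = IO::size_of( (unsigned int)_v.size() );
    for (value_type::const_iterator it = _v.begin(); it != _v.end(); ++it)
      bytes += IO::size_of(it->first) + IO::size_of(it->second);
    return bytes;
  }

  static size_t size_of(void)
  {
    return IO::UnknownSize;  // size is only known at runtime
  }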
The implementation of store() and restore() follows a similar pattern.
The given example program does the following steps
Since the example is a little bit longer than usual, the source is in several files. The main program is in persistence.cc, the cube generator in generate_cube.hh, stats.hh provides little tools to display information about the mesh and the properties, the file fill_props.hh provides the test data, and int2roman.hh/.cc is used in fill_props.hh. All necessary parts are in persistence.cc, which is displayed in full length below. For the other files please have a look in the directory OpenMesh/Doc/Tutorial/09-persistence/.
How to make the WPF Canvas mouse click event work?
The problem with Canvas is that when you click on it, you don’t actually get the click event to occur unless you have a background that is not white.
One trick if you want white is to use white -1 or #FFFFFE or possibly Transparent (unless the parent is not white). So no one can tell it isn’t white, because it is as close to white as can be without actually being white.
Now your click event can occur.
Also you need to make the Canvas focusable.
Example 1 – Getting a Canvas to take keyboard focus from a TextBox with a mouse click
Here is how you make this happen.
- First create a new WPF Project.
- Add a Canvas and clear the any sizing.
- Change the Canvas Background to #FFFFFE.
- Set the Canvas to be Focusable.
- Add a TextBox in the Canvas.
- Create a mouse down event for the Canvas.
MainWindow.xaml
<Window x:Class="TextBoxInCanvas.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Grid Name="MainGrid">
        <Canvas Name="canvas1" Focusable="True" Background="#FFFFFE"
                MouseDown="canvas1_MouseDown">
            <!-- The Window attributes and Canvas.Left/Canvas.Top values are
                 illustrative; the originals were garbled in this snippet. -->
            <TextBox Height="23" Name="textBox1" Width="120" IsEnabled="True"
                     Canvas.Left="100" Canvas.Top="100"
                     PreviewKeyDown="textBox1_PreviewKeyDown" />
        </Canvas>
    </Grid>
</Window>
MainWindow.xaml.cs
using System.Windows;
using System.Windows.Input;

namespace TextBoxInCanvas
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private void canvas1_MouseDown(object sender, MouseButtonEventArgs e)
        {
            Keyboard.Focus(canvas1);
        }

        private void textBox1_PreviewKeyDown(object sender, KeyEventArgs e)
        {
            if (Key.Enter == e.Key)
                Keyboard.Focus(canvas1);
        }
    }
}
Now your click event occurs when the Canvas is clicked and keyboard focus is taken from the TextBox. | https://www.wpfsharp.com/2011/08/23/how-to-make-the-wpf-canvas-mouse-click-event-work/ | CC-MAIN-2021-31 | refinedweb | 269 | 59.9 |
Imagine we have a small CSV file:
name,enroll_time
robin,2021-01-15 09:50:33
tony,2021-01-14 01:50:33
jaime,2021-01-13 00:50:33
tyrion,2021-2-15 13:22:17
bran,2022-3-16 14:00:01
Let’s try to load it into DataFrame of Pandas and upload it to a table of BigQuery:
import pandas as pd
from google.cloud import bigquery

df = pd.read_csv("test.csv", parse_dates=["enroll_time"], index_col=0)

schema = []
schema.append(bigquery.SchemaField("name", "STRING"))
schema.append(bigquery.SchemaField("enroll_time", "DATE"))
job_config = bigquery.LoadJobConfig(schema=schema)

bq_client = bigquery.Client()
table = "project.dataset.test_table"
job = bq_client.load_table_from_dataframe(
    df, table, job_config=job_config
)
job.result()
But it reports error:
File "pyarrow/array.pxi", line 176, in pyarrow.lib.array File "pyarrow/array.pxi", line 85, in pyarrow.lib._ndarray_to_array File "pyarrow/error.pxi", line 81, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Casting from timestamp[ns] to date32[day] would lose data: 1610704233000000000
Seems the BigQuery library couldn't recognize the 1610704233000000000 as nanoseconds. Then I tried to divide the 1610704233000000000 by 1e9, but that also failed.
Actually, what we need to do is just use TIMESTAMP instead of DATE as the type of the column enroll_time:
schema.append(bigquery.SchemaField("name", "STRING"))
schema.append(bigquery.SchemaField("enroll_time", "TIMESTAMP"))
and the BigQuery library could recognize the column even with nanosecond units.
Hi. This is Tom Ball. I am a Principal Researcher at Microsoft Research, where I manage the Software Reliability Research group in the Research in Software Engineering area.
On behalf of the CHESS team, I am happy to announce our first DevLabs pre-release of the CHESS tools (build 0.1.30106.5) for finding subtle concurrency errors in multithreaded single-process Windows and .NET programs.
CHESS is specifically designed for concurrency unit-testing and requires that you provide a set of test functions, each testing a particular concurrency scenario in the program. CHESS exhaustively enumerates all thread schedules of a test function by systematically inserting preemptions (unplanned interruptions of a thread) at various points in a program’s execution.
CHESS is realized as a test host for Visual Studio Team System 2008, as well as a set of command-line tools for analyzing .NET and unmanaged code. CHESS also includes a simple graphical user interface for exploring error traces of concurrent programs called Concurrency Explorer.
This post gives a glimpse of the Visual Studio Team System 2008 integration. Later posts will describe the command line tools. You can find out more about CHESS at our home page and MSDN forum.
Exploring Thread Schedules with CHESS
We provide a test host ([HostType(“Chess”)]) for Visual Studio Team System 2008 that runs managed unit tests under control of CHESS. Let’s take a quick look at a test of a bank Account class that is supposed to be thread-safe (you can find the code for Account at the end of this post):
[TestClass]
public class TestBank
{
    [TestMethod]
    public void WithdrawAndDepositConcurrently()
    {
        var account = new Account(10);
        var child = new Thread(
            o => { (o as Account).Withdraw(2); }
        );
        child.Start(account);
        account.Deposit(1);
        child.Join();
        Assert.AreEqual<int>(9, account.Read());
    }
}
The attributes [TestClass] and [TestMethod] tell Visual Studio that the class TestBank and method WithdrawAndDepositConcurrently are test code. The body of the method creates an Account instance with $10. It then creates a child thread that will withdraw $2 from the account. The main thread starts the child thread and concurrently deposits $1 in the same account and then waits for the child thread to complete. Of course, regardless of the thread schedule, we expect the account to contain $9 at the end, as asserted in the final statement. We ran this test and got the following output:
Should we be satisfied that the test passed? Our answer is “definitely not!” This is because this test has no control over which thread schedule executes. To test the code with CHESS, we simply attribute the WithdrawAndDepositConcurrently method with the HostType attribute, as shown below:
[TestMethod]
[HostType("Chess")]
public void WithdrawAndDepositConcurrently()
{
Running the test again, we see the following:
We ended up with $8 instead of $9 – not good! If we run the test again, we get exactly the same result. This is because CHESS explores thread schedules in a deterministic order. That is, CHESS does not randomly perturb the thread scheduling but instead systematically explores the thread schedules.
Reproducing a Buggy Thread Schedule with CHESS
For long running tests, the number of schedules that CHESS explores can be enormous: CHESS may explore thousands if not tens of thousands of schedules before finding an error. When CHESS does find an error, it records an ASCII representation of the thread schedule that led to the bug. With this schedule, you can use CHESS to immediately reproduce the bug without waiting through the many bug-free schedules CHESS explored to find the bug. To access the CHESS repro, we double click on the test in the above pane to see:
The “Error Message” section details the nature of the error. CHESS uses “Standard Console Output” section to give you information about the test and how to reproduce the error. The section “Standard Console Error” section contains a set of attributes that will help you reproduce the error with CHESS. We click on the link to copy this section’s content to the clipboard and then paste the contents of the clipboard before the method WithdrawAndDepositConcurrently, so the code looks like:
Note that the ChessScheduleString is in a region so you can hide it; it is not intended to be human-readable. The string contains the schedule of events in the thread schedule that caused the assertion violation. There are two new attributes to direct CHESS. The first (“ChessMode”) tells CHESS to reproduce the execution directed by the CHESS schedule string. (The other mode of CHESS is the default exploration mode in which CHESS enumerates the thread schedules.)
Debugging with CHESS
Now we will run the test under the control of the debugger, with CHESS controlling the schedule, to find the source of assertion violation. When debugging, the second attribute (“ChessBreak”) is active. This directive tells CHESS to break before each thread preemption (recall that a preemption is an unexpected context switch). CHESS has a vocabulary of concurrency primitives that you can use when debugging. As shown below, the first breakpoint takes place just before the main thread is about to acquire a lock on the Account in order to perform the deposit:
This is the spot of the first preemption which transfers control from the main thread to the child thread. We now hit F10 to jump to the next preemption, which takes place in the child thread:
The second breakpoint takes place just after the child thread has read the value of the Account into the local variable “temp” (which has value 10) but just before the child thread is about to acquire a lock on the Account in order to perform the withdrawal. The error in the code is immediately obvious, as the comment explains: there is a window of time after the read of the Account’s balance but before the withdrawal in which another thread can interrupt the child thread.
We press F10 a few times to see that control returns to the main thread which performs the deposit (raising the balance to 11 dollars):
The main thread then blocks in the call child.Join(), waiting for the child thread to continue:
We press F10 a few more times until the assignment statement of the child thread is highlighted, as shown below. Hovering over the variable “balance”, we see that the current balance is 11, reflecting the deposit of the parent thread:
Hovering over the local variable “temp”, we see that its value is the old/stale value of balance (10):
Oops! Running to completion, we will witness that the assertion fails (because Withdraw will subtract 2 from 10 to get 8). The complete code of the buggy Account class is:
public class Account
{
    private int balance;

    public Account(int amount)
    {
        balance = amount;
    }

    public void Withdraw(int amount)
    {
        int temp = Read();
        // oops, temp could become stale if we are
        // preempted here
        lock (this)
        {
            balance = temp - amount;
        }
    }

    public int Read()
    {
        int temp;
        lock (this)
        {
            temp = balance;
        }
        return temp;
    }

    public void Deposit(int amount)
    {
        lock (this)
        {
            balance = balance + amount;
        }
    }
}
Don’t Stress, Test with CHESS!
Please download CHESS, try it out on your code and send us comments via our forum. Enjoy!
– Tom Ball for the CHESS Team
From: Larry McVoy <lm@bitmover.com> To: linux-kernel@vger.kernel.org Subject: linux-2.5 activity by directory Date: Tue, 23 Apr 2002 12:36:37 -0700 This one shows which directories had deltas over the specified time period. Let me know if this is useful and/or if there should be a graphical barchart of this sort of thing available. == Directory activity in the last week == 50 15.02% include/asm-x86_64 36 10.81% arch/x86_64/kernel 26 7.81% drivers/ide 22 6.61% drivers/isdn/i4l 12 3.60% arch/arm/boot/compressed 11 3.30% include/linux 9 2.70% arch/arm/kernel 8 2.40% drivers/char 7 2.10% kernel 5 1.50% arch/alpha/kernel 4 1.20% mm 3 0.90% drivers/isdn 2 0.60% arch/arm/nwfpe 1 0.30% arch/sparc == Directory activity in the last 2 weeks == 50 8.65% include/asm-x86_64 36 6.23% arch/x86_64/kernel 30 5.19% drivers/ide 29 5.02% include/linux 22 3.81% drivers/isdn/i4l 19 3.29% arch/ia64/kernel 17 2.94% drivers/char 16 2.77% include/asm-ia64 15 2.60% arch/i386/kernel 12 2.08% drivers/usb/host 11 1.90% mm 10 1.73% arch/arm/kernel 8 1.38% drivers/net 7 1.21% kernel 6 1.04% fs/reiserfs 5 0.87% include/asm-alpha 4 0.69% fs/nfsd 3 0.52% drivers/isdn 2 0.35% drivers/isdn/eicon 1 0.17% arch/sparc == Directory activity in the last month == 57 3.48% include/linux 55 3.36% drivers/net 51 3.12% drivers/ide 50 3.05% include/asm-x86_64 48 2.93% fs/nls 40 2.44% fs/jfs 38 2.32% drivers/char 36 2.20% arch/x86_64/kernel 31 1.89% drivers/usb/media 30 1.83% drivers/scsi 29 1.77% include/asm-i386 27 1.65% arch/alpha/kernel 26 1.59% drivers/ieee1394 25 1.53% arch/ppc64/kernel 24 1.47% drivers/usb/host 23 1.41% arch/ia64/kernel 22 1.34% net/ipv4/netfilter 18 1.10% mm 17 1.04% include/asm-ppc64 15 0.92% drivers/acpi/include 14 0.86% drivers/media/radio 13 0.79% BitKeeper/deleted 12 0.73% include/asm-arm 11 0.67% drivers/net/hamradio 10 0.61% drivers/net/arcnet 9 0.55% drivers/usb/class 8 0.49% include/linux/netfilter_ipv4 7 0.43% arch/ppc/kernel 6 0.37% drivers/net/wan 5 0.31% fs/ext2 4 0.24% fs/ext3 3 0.18% drivers/isdn/eicon 2 0.12% arch/sparc/kernel 1 0.06% Documentation/video4linux/bttv == Directory activity in the last 2 months == 131 4.02% BitKeeper/deleted 116 3.56% include/linux 79 2.42% drivers/net 58 1.78% drivers/ide 50 1.53% include/asm-ia64 48 1.47% fs/nls 46 1.41% Documentation/sound/oss 43 1.32% drivers/scsi 41 1.26% fs/jfs 37 1.13% arch/ppc64/kernel 36 1.10% arch/x86_64/kernel 34 1.04% arch/ia64/sn/io 33 1.01% arch/i386/kernel 32 0.98% arch/sparc64/kernel 31 0.95% drivers/usb/media 30 0.92% arch/arm/mm 29 0.89% arch/alpha/kernel 28 0.86% drivers/media/video 27 0.83% drivers/acpi/include 26 0.80% drivers/ieee1394 25 0.77% drivers/net/wan 24 0.74% drivers/usb/host 23 0.71% net/ipv4 22 0.67% drivers/isdn/i4l 21 0.64% drivers/acpi 20 0.61% drivers/block 19 0.58% arch/arm/kernel 18 0.55% drivers/input/joystick 17 0.52% include/asm-sparc64 16 0.49% fs/reiserfs 15 0.46% sound/core/seq 14 0.43% drivers/acpi/namespace 13 0.40% arch/sparc/kernel 12 0.37% drivers/net/hamradio 11 0.34% Documentation 10 0.31% drivers/net/arcnet 9 0.28% sound/isa 8 0.25% fs/ext2 7 0.21% net/sunrpc 6 0.18% include/asm-arm/arch-shark 5 0.15% Documentation/video4linux/bttv 4 0.12% drivers/isdn/eicon 3 0.09% drivers/isdn 2 0.06% arch/sparc 1 0.03% arch/arm/tools - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at Please read the FAQ at | http://lwn.net/2002/0425/a/bk-directory.php3 | crawl-003 | refinedweb | 748 | 59.26 |
Today we feature an in-depth interview with three members of FreeBSD‘s Core (Wes Peters, Greg Lehey and M. Warner Losh) and also a major FreeBSD developer (Scott Long). It is a long read, but we touch a number of hot issues, from the Java port to corporate backing, the Linux competition, the 5.x branch and how it stacks up against the other Unices, UFS2, the possible XFree86 fork, SCO and its Unix IP situation, even re-unification of the BSDs. If you are into (any) Unix, this interview is a must read.
1. What is the status of the Java 1.4.x port to FreeBSD? How has its absence impacted FreeBSD’s market penetration? (Editor’s Note: Java patchset 3 for BSD was just released)
Scott Long: Several months ago the FreeBSD Foundation funded a contract to bring Java 1.4.1 to FreeBSD. Unfortunately, the process of gaining
certification from Sun is quite lengthy, and the money available for
the contract ran out before it was complete. Still, the work that was
done is quite impressive. Most users have reported that it is
relatively bug-free for common applications like tomcat, and some have
also reported that it is measurably faster than the Linux version. It
is even in production use by a very large internet portal company. The FreeBSD Foundation is currently working to raise funds to complete the contract and have it certified by Sun.
Wes Peters: The current status has been answered well by Scott Long.
As for the market penetration, the only possible answer is “we don’t
know,” at least partly because we don’t have a marketing department. I
know of a few embedded development firms who use FreeBSD and Java
successfully, but cannot comment on how they use it or on their
performance needs, etc. I and a number of other developers are very
much looking forward to being able to distribute Java 1.4.x in binary,
but in the meantime the source distribution works well.
Developments in FreeBSD 5.x may have a strong positive effect on the
performance of Java threads once we have time to sort out the
interactions between the JVM and the new threading capabilities found
in FreeBSD 5, but this work will be completed after the 5.1 release.
Greg ‘groggy’ Lehey: It’s interesting that this is your first question: I would have
considered it relatively uninteresting.
M. Warner Losh: I find this answer a little rude.
Greg ‘groggy’ Lehey: Scott has described the status. As others have said, it’s difficult
to assess the impact, but I would suspect that Sun’s current licensing
strategy would have more of an effect on the use of Java under
FreeBSD: it’s a real pain just getting the software. Possibly Linux
users are more accustomed to jumping through hoops to get software
installed, but FreeBSD users expect to be able to type ‘make install’
and have things done automatically. Sun’s licensing conditions make
this impossible.
2. A few years ago, companies like WindRiver/BSDi were helping out the FreeBSD project in many ways, including PR, handling relationships with other companies regarding drivers, etc. Now that the FreeBSD project is completely autonomous, how do you handle these issues? PR, tech specs for drivers that might require NDAs (e.g. an ATi/nVidia relationship) etc…
Scott Long: The loss of corporate backing from BSDi has slowed FreeBSD down without
a doubt. Without a central focus point anymore, FreeBSD has relied on a
more distributed set of backers. This includes NAI Labs, Yahoo!, The
Weather Channel, and Apple, among others. They have provided employment
for key developers, helped coordinate NDA deals with other companies,
and donated server space and bandwidth to the project. Our experience
with PR issues is also growing over time and we hope to make a good PR
splash with the 5.1 release.
Wes Peters: Scott also answered this quite well. I want to note that FreeBSD was
not ever a “division of” BSDi, or Wind River, nor was it ever a product
of either of those companies. It is inaccurate to say that FreeBSD is
*now* completely autonomous; it always was. I hope your article
reflects this point.
BSDi (and Walnut Creek CD-ROM before it) were quite helpful to the
FreeBSD Project in many ways; it’s not clear (to me) that Wind River
ever helped in any meaningful way.
Greg ‘groggy’ Lehey: This is an interesting perception. We never felt more or less
autonomous. Yes, different groups have supported us; before WindRiver
it was Walnut Creek CDROM, and now FreeBSD mall, which you could
consider a successor to Walnut Creek, is doing the same thing. There
are also many others.
M. Warner Losh: FreeBSD has grown beyond the one company that nurtured it in the old
days. FreeBSD gets much of its development done via different kinds
of funding, from both the private and public sectors.
Greg ‘groggy’ Lehey: The FreeBSD Foundation handles these issues. You might like to get in touch with them. See here for further details.
I note that my reply to this question contradicts Scott’s. Perceptions obviously play a role here.
M. Warner Losh : I disagree with Greg here. Most of the time when there are NDA
issues, individuals will enter into agreements with the companies in
question, or it will be done through their current employer. We’ve
had a number of drivers contributed by people who had inside access to
information. Some of these were done on an individual basis (much of
my work on the wi driver for Prism II, 2.5 and 3 chipsets was based on
an NDA that I have on file with Intersil, for example). I know that
the nVidia stuff was done under contract with one of the developers,
but the FF wasn’t in the loop on that.
Greg ‘groggy’ Lehey: However, a lot of people are motivated more than by money to work on FreeBSD. It is their hobby or passion. They find an itch to scratch
using FreeBSD and FreeBSD benefits.
3. FreeBSD’s ever present “competitor,” GNU/Linux, started winning the crowds with a first wave of hype around 1999, while now many try to convince us that Linux can perform well in the desktop space as well as in the server space. How does the FreeBSD project see the whole situation and how do you feel about a sub-project of “FreeBSD on the desktop?”
Scott Long: GNU/Linux actually got its first PR win with the USL lawsuit in the
mid-1990’s. That drove an unbelievable amount of momentum away from BSD
and towards Linux. In light of that I think that it's a testament to
the quality of BSD in general that FreeBSD, NetBSD, and OpenBSD have
remained viable and interesting.
I think that Mac OS X has really set the bar for what Unix can do on the
desktop. FreeBSD is just as capable as Linux as a desktop OS, but I
think that OS X has reminded us that making a desktop OS with mass
appeal is a huge task and that FreeBSD should still concentrate on its
other strengths as a server OS.
Wes Peters: Most FreeBSD users use FreeBSD on their desktops daily; I have for just
about ten years now. I don’t know that we have the same drive our
friends over in the Linux camp have to rule the world, we just want to
make a system that works well for our needs.
To some extent the BSD world in general has already conquered the
desktop in the form of Mac OS X. It’s a very good product; it has all
of the wonderful strengths of BSD and UNIX underneath, and has an
unparalleled user interface and world class applications on top. To
many in the BSD world, OS X freed us from any need to become the
desktop to the masses; we can concentrate on making a really good
technical workstation for users that are comfortable with the X Window
System, window managers, and such, and let Apple pick up those who
specialize in something other than computers for a living.
I’ve been a part of the FreeBSD Community right from the start; I downloaded the 1.0 distribution onto floppies the night it was released. In the ensuing ten years the issue of making FreeBSD the operating system of choice for everyone has rarely come up, and when it has it’s been mostly ignored.
This doesn’t mean I don’t think it’s suitable to be a commercial operating system. Whatever pretty face your Linux distributor throws on top of Linux will run just as well on FreeBSD. The graphical installer might make a bit of difference, but the key to becoming a commercial operating system is not to have a nice graphical installer but rather to get IBM, Dell, HP, and Gateway to pre-install your OS on their hardware. Without the kind of financial backing that RedHat provides for Linux, that’s not likely to happen to FreeBSD anytime soon. It’s only just barely happened with Linux, in terms of shipping volume. Better operating systems than Linux or Windows have died on the cross of getting support from just one vendor, BeOS being the most recent visible victim.
Greg ‘groggy’ Lehey: There are a couple of issues here:
1. Linux and FreeBSD both separate the operating system from
applications software, including the concept of a “desktop”. The
applications layer on Linux is usually identical to that on
FreeBSD, so from that aspect you should expect to see no
difference.
2. What is a “desktop”? There has been a lot of effort in the Linux
space to duplicating Microsoft functionality; see OpenOffice for a
good example. FreeBSD also supports OpenOffice. The real
question, though, is whether we’re doing anybody a favour by
copying Microsoft. Like Wes Peters, I have been using BSD on the
desktop for well over ten years. I find the current crop of
“desktop” software incredibly difficult and frustrating to use. I
am forced to do it from time to time, but it’s both limited and
limiting in its approach. The BSD community should be working
towards a better alternative, not playing copycat.
As regards ease of use on the desktop, consider: recently, the
Australian UNIX User Group (AUUG), of which I
am currently president, participated in a seminar by the Australian Government. We supplied all delegates with a CD-ROM of OpenOffice for a number of
platforms, including FreeBSD, Linux and Microsoft. It proved to be
easiest to install the FreeBSD version of OpenOffice. Linux required
significantly more work.
"Geeks and developers don't mind extra complexity or unpolished desktops or different toolkits that all look different and inconsistent."
I contend that geeks and developers would also prefer a consistent and
tidy approach. The question is, why do so many choose not to use the
current “desktop” software?
I most certainly see KDE and Gnome as issues. On the face of it they
should make life easier. On several occasions I have attempted to
adopt one or the other. The real issue is this term “desktop”. Both
KDE and Gnome give you a set of tools, some of them good, which fit
together. They don’t make it particularly easy to do things that the
developers didn’t think of.
I recently investigated desktops in some detail for my book “The
Complete FreeBSD“), which will be on the bookshelves in the next few weeks. I had intended to describe only one desktop, and spent some time trying to decide whether it should be KDE or Gnome. For whatever reason–exhaustion
may be part of it–I chose KDE.
At the same time I rebuilt an old machine and installed KDE and
OpenOffice on it. I had two intentions here: first, a neighbour
needed a newer computer, and secondly I wanted to be able to describe
first-hand how to use the software. The machine wasn’t very fast (233
MHz AMD K6, 96 MB RAM), and the results were painful: KDE needs more
memory, and preferably a faster CPU.
Just to get the thing to run at any speed, I installed fvwm2 and
discovered that, apart from flashy graphics, it wasn’t missing too
much. My neighbour is completely non-technical, and I gave her the
choice of which to use. She chose fvwm2. As a result, I added a
section on fvwm2 to the book, as an alternative to KDE.
I could go on on this topic for hours, but that’s probably enough.
4. FreeBSD 5.0 has come out, and while this was mostly a “preview” of sorts, many were unhappy with the instability and slowness the 5.0 release offered compared with the 4.x branch. With Linux getting many advances in its kernel due to help from engineers working at big commercial companies like IBM, Red Hat and SGI, how do you feel your roadmap is holding up against the competition? Do you believe that a (mostly) commercial engineering-free project can pull out advancements faster than the Solaris or Linux teams can today?
Scott Long: The major focus for FreeBSD 5.x has been reworking the SMP capabilities
of the system. This task has been huge and is largely the cause for the
delays that 5.0 experienced.
While a lot more development money may be going into Linux right now,
FreeBSD is helped by the 20+ years of development and maturity that the
BSD base brings. Companies like NAI Labs also greatly help out by
funding projects in the enterprise, stability, and security spaces, so
FreeBSD keeps on advancing and setting the bar for others to follow.
Wes Peters: It’s hard to understand how they could be unhappy with something they
had been warned about for months before the release.
It’s not clear that Linux and FreeBSD are in competition with each
other, other than in editorial opinion pages. We have clear evidence
that in many cases they are complimentary to each other, and numerous
clear cases of cooperation, especially in the application world.. Paid Linux
developers are paid to develop what their employers want, not what is
best for the Linux system at this moment in time. The involvement of
so many different entities is pulling Linux in many directions, it
remains to be seen if the commercial success will make it a better
system.
This certainly happens to some extent in the FreeBSD world; some of my
own contributions to FreeBSD are for features my employer(s) have
requested. The difference is on the emphasis.
Greg ‘groggy’ Lehey: While we expected this, I haven’t heard any concrete reports. We
warned people about this issue, so it’s hard to understand why they
should be disappointed, unless they didn’t want to believe us.
I personally also think the slowness and instability are exaggerated.
I’ve been using both release 4 and release 5 on my personal desktop
systems for a couple of years now, and I don’t notice significant
differences in stability or performance.
M. Warner Losh : I’ve done benchmarks that show that 5 is slower than 4 in a number of
areas, but the biggest one is gcc. gcc 3.2 is a lot slower than 2.95,
but it produces better code more of the time. That’s one area where
the system will feel slower to developers. Interactive performance is
about the same on my laptop booted 4 as it is in 5.
Some people that are trying 5.0-current will notice things are slower
because more debug options are turned on by default. We tried to
clear most of them for the release, but maybe one or two snuck
through.
Greg ‘groggy’ Lehey: Until recently my day job was working on Linux with one of those [commercial] companies, and I spent a lot of time looking in the Linux kernel.
Yes, it’s getting better, but I think it will be some time yet before
it overtakes FreeBSD. I’m certainly very happy that I no longer have
to work on Linux.
[Do you believe that a (mostly) commercial engineering-free project can pull out advancements faster than the Solaris or Linux teams can today?]
No. Agreed, that’s a distinct disadvantage.
M. Warner Losh : It makes things riskier in a lot of ways. There’s a lot more chance
that projects go awry for the strangest of reasons. When there’s money
involved, the project will get done, but the quality may or may not be
high. Such is the nature of the power relationship between employer
and employee, and work for open source is no different than work for
other areas. When it is done out of passion, it generally
turns out better, people tweak it more, but it has a higher risk of
not being finished. And timelines tend to be more predictable in the
compensated realm than in the uncompensated. So having big money
behind you is a mixed blessing.
5. Strictly technically speaking, what are the biggest advantages of FreeBSD against Solaris, Linux, IRIX and AIX today, and where does it still lack compared with these Unix alternatives?
FreeBSD has traditionally excelled with its VM, SCSI, and network
subsystems. The lack of a journaled filesystem is seen by some as a
shortcoming, but UFS2 with softupdates and background checking solves
most of the problems that journaling filesystems attempt to solve. This
particular area is very polarized, though, and it should be noted that
efforts are also underway to port the SGI XFS to FreeBSD.
Wes Peters: Against IRIX and AIX, it’s a no-brainer. Those systems are staring at
an open grave. IBM and SGI have obviously switched horses; their
marketing releases are attempts to placate their current customers
while they come up with a migration plan better than “buy Sun now.”
One of the biggest advantages of FreeBSD over Solaris that may not be
mentioned by others you are interviewing is the FreeBSD Ports system.
There are currently more than 8,000 applications, development tools,
and other software packages that you can install on a FreeBSD system by
simply asking for them. The amount of work that has been done in
creating and maintaining this system is incredible, and it is one of
the crown jewels of the FreeBSD Project.
Against Linux, the situation is different. One of the problems in the
Linux world is perhaps too much choice. With nearly 200 different
distributions of Linux, each going in different directions, it’s hard
to know where to go to get what you need. The applications can be
difficult to find; you’ll come across an app that looks like exactly
what you need, only to find that the developer hates whatever distro
you’re using and only makes packages for a different distro that is
almost but not quite compatible with what you’ve got.
Yes, these are technical issues. Having two major packaging formats doesn’t help, either. Our backwards compatibility has remained quite good as
well; a number of commercial vendors have FreeBSD 3.x and even 2.x
binaries they still sell today because they don’t need to change the
executables.
Greg ‘groggy’ Lehey: I can’t think of a way to answer that question in a few sentences.
Firstly you need to distinguish between UNIX and Linux; under the hood
the UNIX kernels and FreeBSD are very similar, while Linux is very
different. A few thoughts:
compared to UNIX:
* FreeBSD has much tidier source code and fewer crocks. Hardly
anybody sees UNIX source code, and keeping it tidy is expensive, so
big companies don’t go to great lengths to do so. FreeBSD code is
kept tidy by a number of beginning kernel coders who pride
themselves on the neatness of the result.
* At the other end of the scale, we haven’t made as much progress as
we would have liked with pervasive kernel changes such as SMP and
kernel threads. This is probably a direct result of the open
source development methodology, which is not as rigorously
structured as in closed source projects.
M. Warner Losh : I’d agree on the threading issues, but much progress is being made
there, but disagree with you on the SMP area.
Greg ‘groggy’ Lehey: compared to Linux:
* FreeBSD is a tidier system. You “know what you have”. If I say “I
am running FreeBSD 4.8-RELEASE”, I define exactly the system I am
running. Currently, there is no way to say something similar about
Linux. You can say “I am running Red Hat 8.2”, but if you have
built a new kernel, that no longer applies. You would at least
have to say something like “… with kernel 2.4.20”. Every time
you add software to a Linux system, you have (to make) a choice of
where to get it from. With FreeBSD, you install nearly all
software from a known place (the Ports Collection) in a defined
manner. This makes bug fixing much easier.
* Independently of the issue of how Linux gathers its system
components, you have the choice of distribution. The tools in one
distribution may be completely different from those in another, and
the configuration files might also be completely different.
* [….]
* Linux is probably ahead in the issue of SMP support. Numerous
people are working on such projects. But where do you see the
results? Most such work that I know of is “bleeding edge” and
hasn’t yet found its way into Linux distributions.
6. Which are the hardest parts during the resolution of the architectural or coding bugs in the 5.x branch? How many people are working in the 5.x branch these days?
Scott Long: The hardest problems have come from mediating two solutions to the same
problem that take competing directions. We formed a Technical Review Board last fall that is chartered to resolve these kinds of disputes and help set future technical direction. Luckily, these problems rarely happen, and the issues they bring up are valuable and help us grow.
Wes Peters: Keeping all the various development projects moving and keeping them
from bumping into each other. This is no different than any other software
engineering project involving hundreds of developers all over the
globe. In point of fact, it’s quite a bit easier than any of my
experiences with similar size development teams in the commercial
world.
I believe this is because everyone in FreeBSD wants to be there, and
wants the system to grow and succeed by our standards, rather than
being there just because it pays the bills, but I don’t have proof of
this.
Since FreeBSD 5 is such a new system in many ways, everyone who wasn’t
intimately involved in the underlying changes comes to 5.x as a mostly
new system. We try to ameliorate this with documentation on the new
and changed APIs; the documentation group is one of our big strengths.
Greg ‘groggy’ Lehey: Resolution of bugs is seldom an issue. A bigger one is the question
of direction. In open source projects, there’s a tendency for the
person with the greatest amount of drive or spare time to get to
choose the solution. It may not be the best solution, but others
don’t have the time or energy to fight to get their version accepted.
This is one of the backgrounds for our Technical Review Board, which
should help stabilize the direction of the project.
M. Warner Losh : Actually, I’d say that the hardest parts of getting things right is
more getting the programmers to play nice together. I sit on both the
TRB and on the core team. Oftentimes technical disputes are really
personality conflicts using the technical issues as proxies.
Typically they reach a fairly advanced state of dysfunction before the
core team is brought into the loop, and it takes a lot of time and
effort to resolve the issues. I’ve spent 20x the time on the “play
nice” aspects of the project than I have on the technical aspects.
Usually after individuals start to play nice, they resolve the
technical problems. I know of only one case where the trb had to get
in the middle of a dispute and actively work on a solution that both
parties could agree on. All the other times people were able to work
it out, or the role of the TRB was to pick A or B as being better. In
contrast, the core team has mediated something like 10 or 15 developer
disputes in the past year.
Greg ‘groggy’ Lehey: We don’t distinguish between people who work on the 5.x branch and
other parts of the src tree. We’ve seen 152 different people commit
code to the src tree in the last 12 months, 102 in the last 3 months,
63 in the last month, and 13 in the last two days.
7. Are there any plans on offering a graphical installer for FreeBSD and maybe some graphical or Curses-driven front-ends for most of the main services (e.g. for NAT, dhcp, firewalling etc)?
Scott Long: The ‘libh’ project was meant to provide a modern replacement to the
venerable ‘sysinstall’, but work on it seems to be stalled. There has
been talk over the years of different FreeBSD vendors and resellers
stepping into this area, but nothing yet has happened publicly.
Wes Peters: This is an issue that always generates a lot of discussion but little
code. There is a long-running project aimed at creating the
underpinnings for an entirely new installer, in the form of a library
of code that handles the really hard parts of doing things like
updating system configuration files in a way that can be isolated to a
single installation, and backed out. We anticipate that as this
project matures it will grow into an actual installer program, but
don’t have a timeline on such a development.
I don’t know of any currently running projects to develop a graphical
installer along the lines of what RedHat or Mandrake have for Linux.
Nobody who wants one badly enough and has the skills to do so has
materialized yet. It’s not completely clear that such an installer is
critical to the ongoing success of FreeBSD either; we are asked more
often for installers that can be run remotely over a network, or over a
serial console.
Greg ‘groggy’ Lehey: There have been various plans, but none have been overly successful.
Currently people are enhancing sysinstall (which is curses-based) to perform some of these functions.
I also investigated such tools in some detail while updating “The
Complete FreeBSD”. I came to the conclusion that such tools are
frequently counterproductive. They give people the impression of
control when in fact they’re frequently just capable of updating files
without understanding the effects. This is particularly the case with
firewalling. If people can’t read a configuration file and use an
editor, why should they be more successful with less powerful tools?
Yes, lots of people ask for this kind of tool, but I’m reminded of the
punch line “It’s just what I asked for, but not what I want”.
8. Are there any plans to optimize FreeBSD by default for the i586 or i686 architectures only as opposed to plain i386? How is your port to PPC coming along? Are there any plans for a working SPARC port? Most importantly, what about support for the AMD Opteron and Intel Itanium?
Scott Long: FreeBSD 5.0 stopped support for the 80386 in the default installation
due to the pessimization it brings to the kernel. The ‘make world’ command
makes it very easy to rebuild the entire OS, so we leave it up to users
to decide what optimizations they want to enable.
FreeBSD 5.0 introduced support for the sparc64 platform. With used
UltraSparc systems cheaply and easily available, this has been a hit.
There are no plans for supporting the older sparc32 platform as it is
quite dated in comparison.
AMD64 platform support is quickly gaining momentum. Peter Wemm from
Yahoo! has a native 64-bit amd64 kernel booting right now, and it is
only a matter of time before it is running useful userland applications.
FreeBSD 5.0 also introduced support for the ia64 platform, though it only
supported the Itanium1 systems. Itanium2 support has progressed
significantly since then.
Wes Peters: i586 or i686 only? No, but FreeBSD already has a number of
optimizations that are specific to each class of Intel (and AMD)
processor.
Even more than that, parts of the kernel support functionality that only
works on 586 or 686 processors. FreeBSD 5.1 will ship with support for
PAE, the Physical Address Extensions in the 686-class processors. This
means you can put more than 4GB physical RAM in a machine and make use
of it. Each process still has a 4GB virtual address space, but you can
have more than one 4GB process resident in memory without paging to
external storage. This is very important for databases, for instance,
where you may need to store large index tables in memory.
This doesn’t mean we’re going to remove support for 386- or 486-class
processors, these optimizations are produced as you build the kernel or
the rest of the system. In the BSD world, rebuilding the system is not
seen as a barrier; in my home workstation it takes 30 minutes or so, on
a garden-variety Athlon XP 2000+ system.
Note that some of gcc’s optimizations produce code that fails; you do
have to be careful when trying to optimize some kernel functions. We
try to focus on correct and elegant code rather than relying on
esoteric compiler optimizations to achieve performance. At the lowest
levels, understanding the interactions between critical code segments,
the data structures they reference, locking issues, and cache
interactions lends enough complexity without worrying about what the
compiler might be doing behind your back as well.
The SPARC64 and IA64 (Itanium) ports have been running stably for months
now and were included in the 5.0 release. We even have clusters of
machines building binary packages from the ports system for these
architectures. x86-64 (Opteron) boots and runs but is still in the
early stages of development. It is in the hands of one of our most
experienced and capable developers (a fellow Core Team member) and I
expect it to progress rapidly.
The PowerPC port has been progressing slowly, only a single developer is
working on it. He has gotten the kernel booting on at least one of his
development systems. You can keep up to date with his progress at the
Daily Daemon News site. ;^)
Greg ‘groggy’ Lehey: Yes, we’ve been doing this for some time, and we’ll continue to do so.
[How is your port to PPC coming along?]
Slowly. PPC is not a priority for the FreeBSD project: if you want BSD on PPC, buy a Mac. MacOS X is a BSD operating system, and it’s not clear what advantages a FreeBSD implementation on this hardware would have for normal users.
Yes, we have a working SPARC port.
M. Warner Losh : The Sparc64 port is one of the tier 1 platforms in FreeBSD 5.x. A
tier one platform has full support, and everything is expected to work
on that platform. In addition, full releases are built by our release
engineering team.
9. Are there any talks with Intel to port their compiler and tools to native FreeBSD? How about Rational’s Purify? (commercial companies developing for or under FreeBSD might be in great need of these tools).
Scott Long: Intel’s ‘icc’ and Rational’s Purify both run great under FreeBSD’s Linux
emulation layer. This layer provides a Linux-like kernel and userland
environment for Linux applications to run in. Linux games like Quake 3,
Return To Castle Wolfenstein, and NeverWinter Nights also run flawlessly
on FreeBSD via this layer. Some say they even perform better than
running on native Linux.
Wes Peters: Not that I know of, in either case. The Intel compiler for Linux runs
adequately on FreeBSD and can be used to compile C source files into
object files that are linked into FreeBSD executables. While the Intel
compiler produces object files that are in some cases quite a bit
faster on Pentium 4 processors, making a compiler that can’t just plug
into the Linux native development tools was a curious step.
If they would release the source code to the compiler we would have a
port to FreeBSD in short order at no expense to Intel, but I don’t
foresee that happening anytime soon.
10. What is the official position of the FreeBSD Project on a possible fork of the XFree86 codebase?
Scott Long: Until a release comes from the new fork, we have no official position.
FreeBSD 5.0 will ship with XFree86 4.3 as it is the latest stable release of X.
Wes Peters: We would have to consider that as a group. None of what I’ve written
here is an official position of the FreeBSD Project, these are my
ideas. We don’t do position papers, as the Core Team is really just
the 9 people tasked with keeping the project under control, not a true
board of directors. The direction in FreeBSD is controlled by the
people that contribute to the project.
As for my own opinion, forks can be good or bad. When the Apache team
forked their codebase, it hardly raised a whimper; ditto for Samba.
They were both done for the best of positive reasons, to rearchitect a
system where the developers thought it was badly needed. We basically
do this every time we create a new FreeBSD development branch, so
FreeBSD has forked development at least 5 times already.
That said, if XFree86 forks because one of the developers can’t get
along with the rest, it rests on his shoulders to go forth and make a
success of his project, whatever it will be. OpenBSD essentially
started this way and the OpenBSD project has made many valuable
contributions to the computing and internet society in general, and to
FreeBSD in particular.
Greg ‘groggy’ Lehey: The FreeBSD Project does not have an official position on forks of
other projects. In any case of a fork in a project from which we
import software, we will evaluate the results and may end up
supporting one or both forks. In the case of the XFree86 project, it
would be difficult but not impossible to support both.
M. Warner Losh : I think most of the community is taking a wait and see attitude.
FreeBSD has traditionally picked the best available technology for
inclusion in the base system. At times FreeBSD has purposely lagged
the latest release of X11 or gcc because newer versions had too many
issues for too much of our user base. I suspect in the future we’ll
continue this tradition and base our releases on the best technology
available, possibly giving our users the choice to use the one that
best fits their needs. However, any fruit from this code fork is
months away so it would be premature to make any judgements.
11. Suppose a complete fantasy world… how would you feel about a “re-unite” between the big three BSDs, FreeBSD, OpenBSD and NetBSD, under a common umbrella/project where each project would merge its best features to the common code?
Scott Long: This has been discussed many times over the years. Each BSD has its
speciality, philosophy, and core developer personalities. The
competition and cooperation among the three has been very productive
over the years, and I expect it to remain that way. Several FreeBSD
developers are also OpenBSD and NetBSD developers, so development is
more united than it might appear on the surface. As long as each
provides a niche and has developer and user support, there is no
urgency to merge.
Wes Peters: The first task would be to determine if this fantasy land is paradise or
just a branch of hell. ;^)
Your question again assumes this would be a positive output; I’m not
certain of that. I think the best of the 3 projects, and many good
ideas from Linux, are already shared. The level of cooperation has
grown steadily greater over the years.
The differing focus of each of the 3 groups leads them not only to
different solutions, but also to different problems. When one of the
other projects discovers a similar problem, they have “prior art” to
consider in formulating their own solution. In many cases, the code
and ideas are shared, in some cases new solutions are attempted. The
reasons for this can vary from the original solution not fitting well
into the second system to wanting to create an independent solution to
see if anything can be learned from the experience, or a better
solution found.
Greg ‘groggy’ Lehey: I think a complete reunification would be a bad idea. Many of us are
also contributors to the other projects, so it’s not a case of NIH.
On the other hand, we see:
* Maintaining a big software project is difficult at a personal
level. A lot of the core team’s work involves settling disputes
between developers. If we were to merge, we would almost double
the size of the development team, and I would expect at least a
fourfold increase in such disputes.
* I mentioned above the “strongest developer decides how to implement
a feature” problem. One solution to this problem is to have
multiple projects. NetBSD implements something one way, OpenBSD
does it a different way, and FreeBSD does it a third way. At some
later date we can then compare the success of the three approaches
and then adopt the one we find best. This has happened a number of
times in the course of the projects. If we were to merge, we would
lose this advantage.
* What advantage would there be? There are dozens of different
versions of Linux, many of them with user interfaces which differ
more from each other than the BSDs do. There appears to be an
advantage in diversity.
On the other hand, it is a good idea to maintain more consistency
between the projects. We could do more to maintain a consistent user
interface, for example. The problem there is not so much a matter of
cooperation as a matter of the time it takes.
M. Warner Losh : I have lots to say about reunification. It sounds good on paper, but
the people in the various projects make this hard. FreeBSD, NetBSD
and OpenBSD all smell different. Some people prefer one smell over
the others. To make them all smell the same would be difficult, and
I’m not sure completely desirable. The healthy competition between
the groups helps foster innovation, and the code bases are close
enough that people can pick and choose from the other project’s work.
I agree that large chunks of userland could be common, but we lack the
tools to make that happen. A number of attempts to make this happen
have ended in failure for a variety of reasons.
12. A lot of people are asking us about the differences between UFS2 and XFS/Reiser/JFS and NTFS. What are the strong points of UFS2 against these other modern file systems of this generation and which are its weak points,
technically-speaking?
Wes Peters: From my viewpoint, the strongest point of UFS2 is that it is based on
code that is known good, and has been working in production for more
than 25 years now. Some of the developers working on UFS2 in FreeBSD
are younger than UFS. This is not to imply that UFS2 was gifted to us
from on high, perfect in every way, but the path has been far less
rocky than I expected.
I attribute a lot of the development effort thrown into XFS, ReiserFS,
JFS, and others in Linux to the weak feature set of ext2. FreeBSD saw
a lot less interest in such “advanced” filesystems because UFS +
softupdates was already working in FreeBSD by the time ReiserFS became
usable in Linux, and UFS + softupdates was “good enough” for most
needs.
Others here who are more knowledgeable about filesystems can give you a
more detailed, feature-by-feature comparison if that’s what you’re
looking for.
13. SCO went after IBM, now they seem to go after Linux, while they hinted that Mac OS X also uses their Unix IP. This does raise an eyebrow, as MacOSX is partly based on FreeBSD, 4.4BSD and Mach3… How does this situation affect the FreeBSD Project? Is FreeBSD using “clean” code, or is some remaining SysV code still part of your project? Additionally, FreeBSD ships with Linux emulation libraries. Does this part of the Linux code in FreeBSD include any claimed SCO IP?
Greg ‘groggy’ Lehey: Technically, not at all. It’s not clear what SCO’s motives are, but I consider their claims completely unfounded on all points. The Linux source
code is available to any user, and SCO themselves ship Linux source
code, so it’s difficult to understand how SCO can make these claims
without pointing to a single instance to substantiate the claims.
It’s also interesting to note that over the last few years SCO has
been attempting to release more and more source code under open
licenses. I was involved in an attempt to release sar a few years
back, but nobody in the BSD communities was interested enough. I get
the impression that new management has moved in without understanding
the obligations and commitments that SCO has made in the past.
Note also that SCO’s claims that IBM is stealing their SMP technology
are ridiculous. SCO never had any useful SMP technology, and the
implementation in Linux both predates IBM’s involvement, and is also
completely different from the SCO implementation.
There is some code in FreeBSD which was derived from System V. It was
released specifically for this purpose, and there had never been any
dispute about it. The “BSD Wars” of 1992 to 1994 were about code
imported from Research UNIX, not System V. SCO (then called Caldera)
released all Research UNIX code under a BSD style license in January
2002, so there is no way they could complain about this.
M. Warner Losh : The code was *NOT* derived from System V, but rather from Unix 6th and 7th edition, as well as 32V. Only the copyrights were similar to
those used in System V source files. The code in question was merely
blessed by USL and acknowledged as originating there by the Regents. Read here.
The settlement restricts further use and distribution of Net2; 4.4lite was substituted for it.
In any event, those files with USL copyrights on them have specific
permission to be distributed by the Regents of the University of
California to settle these lawsuits, with an additional agreement that
Novell (and its successors) would not sue anybody basing systems on
4.4lite.
FreeBSD 2.0 was based on a new port from 4.4lite. It contains no code from
the net2 releases that isn’t in the 4.4 lite release. FreeBSD 1.x did
include code that was subject to that lawsuit, but since the FreeBSD
has not made that code available for years, I’d think that we’d be
safe from any IP claims.
Greg ‘groggy’ Lehey: I do have some concern about the way in which Caldera released the
software. The current litigation against IBM so completely
contradicts the release last year that I can only assume that the
people involved don’t know about each other. We (in this case the
UNIX Heritage Society) have asked SCO to put up
information about the release on their own web site, but so far they have
not done so. A copy of the original is here. You may quote this URL if you wish.
[Linux emulation libraries threat] I don’t believe so, but as I say, SCO’s complaint was very vague.
FreeBSD simply uses existing Linux libraries for the emulator, so I
can’t see any reason why the FreeBSD project should be held
responsible for the content.
M. Warner Losh : SCO’s claims are based on bad action by IBM. They make a copyright
claim against IBM that is approximately: IBM derived AIX from System
V. IBM took parts of AIX and put them into Linux. Therefore, since
AIX is derived from System V, they put our IP into Linux.
The comments that they made about the Mac OS X sources are from a
position of ignorance. All files in the Mac OS or FreeBSD source
trees that have USL copyrights are specifically covered under an
agreement to settle the 1992 lawsuit between the University of
California Regents and Novell (the folks that purchased USL while the
lawsuit was going on). That agreement specifically stated that Novell,
and its successors, would not sue anybody who based their systems on
4.4lite. FreeBSD is based on 4.4lite, and is therefore immunized
against such legal action based on copyright claims. UCB, for their
part, removed certain files, rewrote others and added the copyright
notices to still others. FreeBSD has no code that infringes upon the
SCO group’s intellectual property.
There never was any System V code in any BSD. Ever. The IP claims
that USL made in its 1992 suit were based on the inclusion of sixth and
seventh editions and 32V. While these were the forerunners to System
V and System III code bases, they are not specifically System V or
System III. Furthermore, SCO released, under its Ancient UNIX
program, all sources that predated System III and System V to be
freely distributed under a BSD-like license. These specifically
included 6th edition, 7th edition and 32V.
IBM has never, to my knowledge, contributed significant work to the
FreeBSD project. Since SCO’s IP claims appear to be based on IBM’s contributions to Linux, FreeBSD should be safe from those claims as well.
Linux’s libraries are completely free of SCO intellectual property as
well. They are based on glibc, which has been written from scratch
over the past 15 years or so. Other libraries are similarly written
from scratch, or are based on code bases with well known lineages (for
example, the X11 libraries). Therefore, FreeBSD is safe on this
front.
Were we to include ibcs shared libraries that are necessary to run
ibcs emulation, we might be vulnerable to an ordinary copyright
claim. However, we do not, so we are safe from that aspect of the
claims that have been reported in the press.
Some, not connected with SCO as far as I can tell, have alleged that
SCO is making patent claims based on its Unix intellectual
property. Since most of the key concepts in Unix were invented before
software patents, and also many years ago, the patents have either
expired, been placed into the public domain, or were never issued. It
is unlikely that SCO could prevail on claims in this area as well. A
careful reading of SCO’s statements show that they refer only to Unix
IP, and copyright law to justify their suit against IBM. Even if that
weren’t the case, FreeBSD is safe here as well, as far as we can tell.
Finally, the FreeBSD core team has not been contacted by SCO
representatives directly. We have seen press reports, but they are not
sufficiently specific for us to know what, exactly, would be alleged
should SCO contact us. In addition, SCO’s own web site has only
talked about copyrighted code being transferred from IBM’s AIX into
Linux. Since there is no code that originated in AIX in FreeBSD, we
can only assume that we’re safe from such claims. Our belief is that
we’re very safe from these actions, for the reasons I’ve outlined
above. However, in the absence of specific allegations against us, we
cannot, with certainty, say one way or the other.
Now THIS is what I call a good article, not the usual “Jack’s failure at installing DISTROHERE-VERSIONHERE”.
on a great OS. I wish more Linuxheads would try it.
My only gripe with FreeBSD is that it refuses to be installed in extended partitions, and 3D support for my ATI Radeon 7500 isn’t as good.
It’s as well supported under the DRI with FreeBSD as it is under Linux.
Adam
Couldn’t get it installed; I went through the install process and it appeared to install packages, but it wouldn’t boot.
I have some problems setting up Voodoo5 with FreeBSD 4.8 in 3D accelerated mode. “Load dri/glx/666/blahblah” is all set in my XF86Config file (XFree86 4.3.0 used), I have the agp module ON in the kernel (voodoos don’t use agp anyway), but I don’t get 3D acceleration; GL apps run in software mode (mesa/glut is installed). I have installed ‘driglide’ via the ports system btw (not DRM though – do I really need to mess with that?). I have also read the relevant pages, but still no joy. Any pointers or tricks?
Eugenia,
What does /var/log/XFree86.0.log say about Direct Rendering?
If it says it’s enabled, set the environment variable LIBGL_DEBUG to verbose in an xterm and launch glxgears. See if it gives any errors.
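For example, something like this in an xterm (sh syntax; glxgears just serves as a handy test program):

    LIBGL_DEBUG=verbose glxgears

If direct rendering is broken, the library should print a reason to the terminal before falling back to software rendering.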
Adam
I loved this article. I’m amazed at the level of detail the guys went into to answer the questions. Please continue this trend.
very nice article, indeed. i’d be more than happy to see similar interviews with members of NetBSD and OpenBSD projects (hint! hint!). thank you.
Today I run Gentoo Linux on my primary machine (this is important because it’s what I have most experience with, and therefore, what I’d most compare FreeBSD to). This past weekend I decided to give FreeBSD a try, for various reasons. My experiences:
(1) The installer is kind of like Debian’s – text-based, but simple to use. I was able to get the system installed properly without problems, without documentation. I didn’t consult the superb FreeBSD handbook (available online) until after the primary install process. It’s rather intuitive. It took about 30 minutes.
(2) The FreeBSD handbook is great. It’s not overly detailed but covers all of the basics clearly, succinctly, in plain language. As someone who has always been impressed with Gentoo’s documentation, I have to give kudos to the FreeBSD handbook. It’s in a similar vein, perhaps with a little more “why are you doing this” information than the Gentoo docs. Print it out or leave a browser window open when you’re first figuring things out; it’s your first resource for anything FreeBSD related. Small parts of it are a little out of date for 5.0 (which is understandable), but if you’re installing 4.8, this won’t matter.
(3) FreeBSD is almost more similar to Gentoo or Debian than Gentoo or Debian are to other distributions like Mandrake. FreeBSD’s ports system will be immediately recognizable to Gentoo users, though its use is a little more granular than Gentoo’s portage (cd into the port’s directory, then do a make, make install, make clean – it then fetches the files from a CD or the net and does the requisite dependency/requirements checking, downloading and compiling those; I’ve put a short example of the workflow after this list). There’s nothing as simple as USE, but FreeBSD does support various compile options, including specifying the CPU you’re using (via config options in a configuration file or on the command line). A Gentoo user will be able to figure it out easily with help from the Handbook. Debian folks will like the fact that you also have the option of downloading compiled packages. In a sense, the best of all worlds. I would add that I make no comment as to what’s under the hood with all of this; from a user’s perspective, though, all of this is pretty familiar. If you live in the United States and are familiar with the US, think of FreeBSD as Canada, from a user’s perspective. A little different, but easily navigable (I may have just bothered some Canadians – my bad).
(4) Got KDE and Gnome installed. I’ve been compiling everything via use of ports, perhaps out of habit. They work great, and are of course indistinguishable from Linux.
(5) The directory structure is a little different, but not radically so. There’s a /stand directory, for example. There’s also a menu-based system management tool (as referenced in this article) called sysinstall, which is optional, but useful, especially in the beginning. I don’t see any great problem with it that it needs replacing; of course, you’ll probably eventually get into manually editing the various configuration options via a text editor instead.
(6) The boot sequence for starting services struck me as a little bit unusual. There’s nothing like rc-update in Gentoo. Took me about 30 minutes to figure out how all of my services were starting, but it’s just a matter of being used to something different. I’ve noticed that a little more attention is paid to “local” vs shared or global applications and daemons in FreeBSD than what I’m used to. This can lead to things being installed in places you might not expect. For example, ports put their configuration files under /usr/local/etc rather than /etc.
(7) Compiled a kernel. In FreeBSD, you edit a large textfile filled with commented and uncommented lines for each module you want compiled in. I didn’t see any menuconfig type tool like there is in Linux. I was able to successfully recompile kernels with ease; even a beginner should be able to follow the guide and get a new kernel compiled and installed without difficulty (there’s a sketch of the build steps after this list, too).
(9) For whatever reason (I am full of preconceptions from reading too many OS-related websites), FreeBSD (in my mind) had a sort of reputation for being a niche, geek OS, but I’ve found it to be rather intuitive, well documented, and about as easy as any non-commercial Linux to install (Alternately, I could just be a major geek. Which is probably the case). The ports collection is vast, and I haven’t had to mess around with the Linux compatibility layer (unless it’s enabled by default by some of the ports; I don’t know yet – I’ve just not had to specify anything about it, or go through any gymnastics as you have to with WINE under Linux or anything like that). Every single application I run in Linux is available via ports, and there are *thousands* of them. Also, it plays well with my Linux and Windows boxes in terms of NFS and samba; it’s a good “network citizen”. Most of the configuration files are the same. In short, it’s all familiar stuff. Ports may be new to you, depending on where you’re coming from, and if you’re stuck on an RPM based Linux distro, you’ll probably love Ports. No question.
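As promised above, here’s roughly what installing a port looks like in practice (the port name is just an example; substitute whatever you’re after):

    cd /usr/ports/www/apache13   # change to the port's directory
    make                         # fetch the sources and build
    make install                 # install the result (as root)
    make clean                   # remove the temporary work files

Dependencies are checked, fetched, and built automatically along the way.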
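And the kernel build, roughly as I did it (MYKERNEL is a hypothetical config name; the Handbook documents the exact procedure for each release):

    cd /usr/src/sys/i386/conf
    cp GENERIC MYKERNEL          # start from the stock config
    vi MYKERNEL                  # comment out what you don't need
    cd /usr/src
    make buildkernel KERNCONF=MYKERNEL
    make installkernel KERNCONF=MYKERNEL

Reboot into the new kernel afterwards; keeping the old one around as a fallback is a good idea.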
Worth an install, if the only reason you’ve been avoiding it is fear, or something.
If, on the other hand, you are a Windows user and hate Linux for all of the typical reasons, you’ll probably hate FreeBSD too (With the exception of package management, which is wonderful in FreeBSD).
Users of Mandrake, Red Hat, SuSE, may find it a little more complicated, but nothing insurmountable. Easier than Gentoo to install, about the same as Debian to work with in terms of difficulty (Trust me, the dependency handling goes a long way).
Debian and Gentoo users (and probably Slackware users) should find it a breeze. Gentoo users will probably smile about how much Gentoo has borrowed and improved from FreeBSD (I assume FreeBSD, and not another BSD) in terms of portage. Of course, this is pretty common knowledge anyway, I think.
The choice of using precompiled packages is, of course, a great plus for people who are too impatient to compile everything.
I like it a lot so far.
Nice review of your experience. I’m a Linux user myself (both Debian and Red Hat); I’m getting a new hard drive and am considering putting FreeBSD on the old one just to see what it’s like. Then again, nothing can beat IRIX on an Origin 3800 (I’ve an account on one).
I’ve run FreeBSD 5.0 since it was released. I’ve had no problems at all with it. In fact I’d say FreeBSD’s unstable is considerably more stable than any Linux stable distribution I’ve ever run.
Also FreeBSD 5.0 is NOT SLOW, as long as you stick with the Release Engineering branch instead of current, all of the debug code is disabled. In addition, you can disable the debug code in Current by changing some options in the kernel config file.
I’ve been using FreeBSD since FreeBSD 4.3 and I’ve been very impressed with every release.
Keep up the good work team.
I couldn’t imagine using anything else on a production grade server.
-bytes256
FreeBSD is very nice. I got it installed on my home server. Now it’s dual-booting Linux and FreeBSD until I figure out how to set up all the services I have running on the Linux side. That would probably have happened much faster if I could sort out NAT so I can work on it from my other box.
Which firewall do people use? IPFilter or the default FreeBSD one? I guess it will have to wait until the end of the semester…
I really like it, though. That’s proper Unix.
And yes, coming from an RPM distro, I loved the ports.
However my favorite command will be “make world”
—
antonis
Lack of Java is bound to have a significant impact on adoption of FreeBSD at companies.
In Linux (and Windows) we have many great options in terms of development languages and environments (C/C++, Java, Python, Perl), and that allows us to use the right tool for the job.
Sometimes Java is that right tool. Without Java, FreeBSD is missing a major piece.
Hopefully Python (or another higher level language) will mature enough to fully replace Java, at which point FreeBSD won’t be left out.
It is interesting to me that I have seen something like the following a thousand times:
“Linux and Java: The killer combo”
Mandrake doesn’t let you install it by default and Red Hat doesn’t either. For something that supposedly goes great together, you would think you could install it from the get-go.
I heard Gentoo does a great job with Java though. ; )
Why hasn’t someone made a “distro” of FreeBSD in the same vein as RedHat, Gentoo, etc?
This was excellent reading, not just like someone noted already the detailed answers but the atmosphere and attitude. Professionalism, just pure professionalism is what I smell from this core team of people.
I’ve been trying to install FreeBSD for a while, but my GF4 card makes it halt very early during the install process and I just can’t seem to get past it. I hope this will be solved with 5.1.
I’d love to see half of these questions asked of NetBSD and OpenBSD too… very curious to hear how they consider things…
I agree, no proper Java support is a significant issue. As it matures (and more to the point, speeds up), Java is really starting to turn into a useful language.
Replace Java with Python? Surely, sir, you jest. I like Python as much as the next guy, but I think Java has a much better programming paradigm.
-Erwos
Why hasn’t someone made a “distro” of FreeBSD in the same vein as RedHat, Gentoo, etc?
Did you ever bother reading the article? The interviewees themselves stated clearly that FreeBSD isn’t just a kernel like Linux; hence, making a distro would be like buying a pre-fabricated PC only to throw everything out save the motherboard and populate it with off-the-shelf parts.
IOW: FreeBSD is a “distro” in itself. That is one of its strengths.
“Their attitudes speak volumes as to why ANY flavor of BSD isn’t ruling the desktop”
And that’s the thing: we don’t care. Go use Windows XP if it meets your needs. If all you want to do is play games and run kiddie apps like Kazaa, MSN Messenger and Word, then Windows XP is just right for you; if you are a software developer, consultant, or use your computer for technical writing, a Unix-based OS might be a better choice.
Freethinker, if you want to use FreeBSD on your desktop, you’re free to do so. It will provide much the same experience as Linux on the desktop, even. However, the FreeBSD developers seem to have a certain grip on reality which is lacking amongst certain crowds swearing by another UNIX-like OS. BSD is a UNIX. Its conception dates back to the time before there were such things as graphics-capable computers or non-keyboard input devices available outside a select few laboratories. They recognise that their system as a whole isn’t the ideal foundation for a modern desktop OS, especially not while still accommodating their current users.
In other words, if you want to use FreeBSD on your desktop, please do so. A lot of others prefer to keep it on their servers, but it can serve as a workable desktop OS, too. And since you obviously know what FreeBSD entails, as well as having Linux desktop experience, you know what to expect and will probably make do with that.
But if you want a desktop OS which extends beyond X11+KDE+GNOME, you’d do better with MacOS X. And it features a lot of FreeBSD goodies, so you won’t miss out on much, either.
As for why any BSD flavour (save OSX) isn’t ruling the desktop (something Linux can’t claim either), you pointed it out yourself; they don’t consider it to be an important goal. Nothing particularly wrong with that. Not every OS has to be a desktop OS. Some OSes are aimed at embedded applications, others are server OSes, and content to be used in the field where they stand out.
FreeBSD has TONS of Java support. Due to its excellent Linux emulation, it can run all of the Linux JVMs and SDKs. I’ve used the Sun and IBM Linux SDKs and never had a single issue with stability or compatibility. I’ve also used the FreeBSD Java ports, and I’ve read somewhere that the 1.3 version is actually upwards of 90% compatible with Sun’s version.
Hey Quag7, excellent writeup. I’m a long-time Debian user who’s run all of the BSDs at one time or another. My experiences agree with yours entirely. Of course, whereas I just kept them to myself, you posted them. Maybe you should try doing some guest articles for OSNews? You seem level-headed, technically competent, and, best of all, coherent. Anyways, just a thought.
Thanks Eugenia, it’s one of the best articles I have ever read about BSD!
Hey, I have some questions to ask you, Eugenia. I saw in your screenshot that you have Straw installed and running. Does Straw run well on your machine? Does the RSS update (poll) work without problems in Straw? If not, then I guess it’s broken on FreeBSD 5.0 but runs fine on 4.x...
Quag7: (1) The installer is kind of like Debian’s – text-based, but simple to use.
I think it’s more similar to Slackware’s text-based installer than Debian’s. 🙂
Quag7: (5) The directory structure is a little different, but not radically so. There’s a /stand directory, for example.
Yes, FreeBSD follows hier(7) pretty well. I am a fan of it. 😉 You can check ‘man 7 hier’ for more details about the standard file system hierarchy, if you want to.
Quag7: (7) [….]. [….]
How about checking LINT (4.x) or NOTES (5.x)? It’s in the same place where you edited your kernel.
Quag7: (7) [….].
Most of the time, the comments will tell you which options are required to enable, like scbus, da, miibus and so on. But I agree the kernel options need better documentation.
Quag7: [….]
Me too, but GCC 3.2.x takes more time to compile, which means a longer buildworld/kernel recompile. That includes the ports tree. It really doesn’t matter to me, since I run them while I am in bed. 😉 But afterwards the apps seem to run at about the same speed; I haven’t done any benchmarks. 🙂
Quag7: [….]
Since you are still very new to FreeBSD, let me tell you about portupgrade. It’s one of the best and most recommended third-party add-ons for the ports tree. It will update all of your installed apps automatically. This tool rocks! It’s in sysutils/portupgrade, and it’s a Ruby script.
1) CVSup your ports tree.
2) pkgdb -F
3) portupgrade -ra
bytes256: I’ve run FreeBSD 5.0 since it was released. I’ve had no problems at all with it. In fact I’d say FreeBSD’s unstable is considerably more stable than any Linux stable distribution I’ve ever run.
I second, I have FreeBSD 5.0 since it was before dp1.
bytes256: Also FreeBSD 5.0 is NOT SLOW, as long as you stick with the Release Engineering branch instead of current, all of the debug code is disabled. In addition, you can disable the debug code in Current by changing some options in the kernel config file.
You should update to -CURRENT, because it’s way more stable than -RELEASE right now.
> I saw in your screenshot, you have the Straw installed and ran it. Does the Straw runs great on your machine? Does the RSS update (poll) fine without the problem by time in Straw?
No, it doesn’t work…
It says “polling” forever and it doesn’t fetch the headlines.
No, it doesn’t work…
It says “polling” forever and it doesn’t fetch the headlines.
Damn, here too. Looks like I will have to send a PR: ADNS, py-adns and py-xml have bugs on FreeBSD, and they all need to be fixed. I will see if I can get them fixed, but I doubt I can, though.
Thanks again!
Eugenia,
What theme is that? Very nice.
Can’t remember, I think it is this one:
Can’t remember, I think it is this one:
Yes, it’s correct. It’s included in Gnome 2.2’s theme package by default.
If that’s a long read, does it mean most osnews articles are just one or two paragraphs?
That was a great interview. Very refreshing. It was also good to read their comments on the whole SCO issue. Very valid points and reassuring at the same time.
Well done OSNEWS
Wow. Great stuff. Contradictions and all make for an illuminating discussion of the development behind BSD.
Makes me want to try it all the more.
Excuse me while I download an ISO or two….
Thanks for your positive comments. I’d like to help out in a few areas if I can.
As you note, the FreeBSD Handbook is a great resource. The Handbook chapter on compiling a new kernel might be of help to you as well. It will certainly point out that the place to learn what various kernel modules do is in the man pages for that kernel module. There are a few missing but generally they are an excellent resource. FreeBSD users, unlike Linux users, expect the manpages to be complete and accurate. ;^)
You may also want to take a look at The Complete FreeBSD coming soon from O’Reilly. Written by Greg Lehey, one of our core team members who also participated in this interview, it is a valuable resource. It is much more tutorial than the Handbook, and quite a professional publication. Greg has authored other books published by O’Reilly and previous editions of TCFBSD were published by Walnut Creek CD-ROM, so it’s OK to keep your expectations high.
For beginners, you may want to check out FreeBSD: An Open-Source OS for your PC by Annelise Anderson, Absolute BSD by Michael W. Lucas (who is also the FreeBSD Donations Liaison Officer), or FreeBSD Unleashed by Brian Tiemann and Michael Urban.
The ‘menu system’ or GUI for editing a FreeBSD kernel config exists, in fact you can choose from several. My favorite is ’emacs’, others prefer ‘vi’ or ‘vim’. We place a strong emphasis on storing configurations in human readable form rather than placing a layer of GUI between the user and the real configuration. The config file ‘LINT’ on 4.x or ‘NOTES’ on 5.x will show you what all the options are, and are heavily commented around the more esoteric functions.
The booting sequence that seems to puzzle you is new to FreeBSD as well. It is a port of the NetBSD boot system, designed by Luke Mewburn. It is known as ‘rcNG’ in FreeBSD, and has quite a few desirable features. The main attribute of interest is that it allows subsystem or application designers to drop in a startup script that will be automatically sequenced with the rest of the system boot. Say, for instance, you’ve written an application that relies on both PostgreSQL and Apache to be started before your application can be started. In the Linux SysV-type startup, the system administrator would have to look through the startup scripts and give the application startup a sequence number that occurs lexically after both the Apache and PostgreSQL startups. With rcNG, the script itself reports that it depends on Apache and PostgreSQL, and the system starts and stops it in the correct order. The rcNG project is also a great example of code sharing between these two development teams, who have goals that in some ways differ greatly.
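A minimal sketch of such a script might look like the following (the script and dependency names here are hypothetical; rc.subr(8) documents the real interface):

    #!/bin/sh
    #
    # PROVIDE: myapp
    # REQUIRE: apache postgresql

    . /etc/rc.subr

    name="myapp"
    rcvar="myapp_enable"
    command="/usr/local/sbin/myapp"

    load_rc_config $name
    run_rc_command "$1"

The PROVIDE and REQUIRE lines are what the boot sequencer reads to order the scripts; everything else is boilerplate that rc.subr handles for you.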
Lehey exhibits all the worst things I have heard about the BSD culture: he’s snobby and argumentative and can’t stop promoting himself (see the plugs for his books). But the other guy, Warner, seems to have his head on straight and really knows what he’s talking about. I say promote the latter and give the former the boot.
I would like to try one
Why hasn’t someone made a “distro” of FreeBSD? Mainly cause there’s not much of a point to it. FreeBSD is a complete OS. “Linux” is a kernel only; most common Linux distros are the Linux kernel with tools that actually make it do neat tricks. FreeBSD already has the tools and everything as part of the OS, so there’s not much reason to create multiple distributions. Organization is key to any good OS and FreeBSD is *organized*; several distributions would most likely work against the organized model FreeBSD uses.
There’s nothing stopping someone from packaging up FreeBSD and doing something unique with it (I built a bootable, live-file system CD I use for backing up laptop hard drives at work using dd/split and writing to an NFS share), but it wouldn’t officially be “FreeBSD” any more. When you distribute “your” version of FreeBSD it becomes “based on FreeBSD,” not FreeBSD.
CURRENT is not more stable than RELEASE right now. Just last night I had repeated kernel panics while trying to compile gtk2. The nVidia driver doesn’t seem to play nice with CURRENT at the moment either.
One thing that I find interesting throughout this is that Linux is referred to as a single entity. It is not. Linux is just the kernel (and glibc+tools according to GNU zealots). Mentioning which OS it was easiest to install application X, Y or Z on should always refer to which Linux distro was used. Red Hat is the evil empire of linuxland, yet much more FreeBSD-like distributions with up-to-date packages and managed dependencies requiring little if any interaction exist (Gentoo).
Another thing to add to the “what is the desktop really?” definition is that device driver support for random device Foo is important to non-technical users. Linux is much further along in this area than FreeBSD (witness: el-cheapo network cards not always working on FreeBSD, CardBus support only just becoming available 5 years after CardBus was common, etc.). Despite all of this, both FreeBSD and Linux fail on the non-technical user’s desktop when it comes to device support due to lack of a consistent, super-simple, easy plug-and-play experience that can never get into trouble. Leave that to Mac OS X or Windows, which have real budgets and usability/idiot-testing labs.
I just switched away from Red Hat to FreeBSD 5.0-RELEASE when the BIOS on my last computer frizzled. I’m now using an older system; the memory is the same, but the chip is about 1/3 slower. I haven’t noticed a real slowdown in system response, which rather amazed me. That, along with the easy install and overall polish of the complete system, is keeping me leaning toward FreeBSD for my desktop. Seems to me there’s a lot that ‘just works’ where you have to endlessly tweak a Red Hat system.
Great job on this article, the detail and depth of questions and answer was excellent. I hope OSNews.com can continue to bring us such detailed interviews. Thanks OSNews & FreeBSD Team!
Keep up the good work 🙂
OK, so FreeBSD rocks, and beats Linux in most fields. That’s great. Now make it popular. Why don’t you add an installer for newbies, a config tool for newbies and so on? Do to FreeBSD the same thing that Mandrake did to Linux – make it easy to use for ordinary people.
Read my lips: MOST PEOPLE DO NOT WANT TO SPEND 2 HOURS TO GET THEIR_FAVOURITE_SYSTEM_TO_WORK. And please don’t tell me that it takes less for you. It’s the newbies you should be fighting for. Make a BSD for the masses.
PS. don’t tell me about OSX, I’m not willing to buy Apple’s hardware – too expensive.
Well, is it? Also, I too wish the Linux kernel was as neat as FreeBSD’s and that the package management was as good. However, portage and autopackage are looking good. Also, SCO so far seem to be a bunch of lost liars continually making false claims.
“Why hasn’t someone made a “distro” of FreeBSD in the same vein as RedHat, Gentoo, etc?”
Hey, distros exist; they are called FreeBSD, NetBSD, OpenBSD, Darwin, etc.
On the other side, I can assure you that Debian is not just a distro. It is an integrated system, not only a kernel 😉
Perhaps you missed some of the statements in the article regarding the true focus of FreeBSD? It is not the same as Linux’s, i.e., an OS for the masses. The installer is suitable for those who want to use the operating system as they have probably determined that FreeBSD suits their needs. Windows XP can take more than 2 hours to get it to “work” and yet millions of users have chosen this path.
From the original article:
> FreeBSD 5.0 has come out, and while this was mostly a
> “preview” of sorts, many were unhappy with the
> instability and slowness the
> 5.0 release offered compared with the 4.x branch.
The responders all missed noting that after 5.0 came out, 4.8 came out — for exactly this reason. In other words, development is actively continuing on the 4.x branch, which is the recommended branch for people to run production systems on while the 5.x tree undergoes its shakedown cruise. It is likely that the “push” to get people to upgrade to 5.x won’t happen until 5.1, or more likely, even 5.2, has been out for a while.
There were simply too many large changes in 5.0 — many of which had interactions — to hold off releasing it any longer. (There is a development roadmap that goes into much more detail about the future plans).
From others’ comments:
> This can lead to things being installed in places you might not expect.
The FreeBSD philosophy, since it avoids the “distribution” paradigm in favor of the “one source base” paradigm, winds up including a very minimal subset of applications in the “base system”. This is entirely by intention. So, for instance, Apache is not part of the “base system”: it’s a “port”. Ports are expected to be generally well-behaved in terms of modularity: thus, by implication, they only install their configuration files under directories outside the traditional places such as /root and /etc. By default, it’s /usr/local, but you can change it. (There are probably ports that don’t get it right if you do change it, but those are defined as bugs).
A side-effect of this philosophy is that ports are expected to cleanly de-install themselves when asked to. Ports that don’t cleanly de-install are defined as having bugs, too.
Thus, the risk for “just trying” a port is lower, since no system files are supposed to be messed with.
> In FreeBSD, you edit a large textfile filled with
> commented and uncommented lines for each [kernel] module
> you want compiled in. I didn’t see any menuconfig type
> tool like there is in Linux.
No one commented on the fact that in 5.x there is a new “hints” mechanism for specifying things such as IRQs and other common configuration information _outside_ your kernel build. A great deal more work has gone into making this work smoothly (and thus probably saving many more users having to build kernels) than has gone into making the kernel build itself easier.
> Every single application I run in Linux is available via
> ports, and there are *thousands* of them.
8595 as of yesterday’s checkout 🙂
It’s important to emphasize that you don’t have to “hunt around” for a version that will plug into your system. Each port maintainer is responsible for making sure that each port “plays nice” both in terms of installing, deinstalling, and (for all but the binary-only ports) compiling. If it doesn’t, file a Problem Report (PR)!
To reiterate, there is none of this “hunt around for an RPM that works” effort needed, which I personally found annoying under Linux. It’s cd /usr/ports/<category>/<portname>; make install. If it doesn’t work, it’s a bug. (Given that most ports require installation as root … but people are interested in fixing that.)
> Why hasn’t someone made a “distro” of FreeBSD?
Well, in the FreeBSD philosophy — you don’t! The system exists in terms of “base functionality” (kernel and minimal toolset) and “ports” (all applications).
Now, the system installer “suggests” some combinations of ports that most people will want (XFree86 & related things being one).
But you can always design your own — just make it itself a “port” that installs nothing of its own, merely defines dependencies on other ports. See, for example, the ports contributed by teh aforementioned Greg Lehey, and… .
Of course, it turns out that some ports are “more equal than others”. Even though Perl is not installed in the base system, almost inevitably it’ll be installed as a dependency from some other port (I did mention that all ports check for, and if necessary install, all their dependencies, didn’t I? 🙂 ) Also the canonical port to manage the ports collection is portupgrade; almost everyone will want to start with that.
Next to last comment:
Having said all this nice stuff about FreeBSD, it’s fair to say that the system installer is graceless and has aged very badly. It is fine to use _but only once you understand exactly what it’s doing_. I found it very counterintuitive to learn.
But everyone who hates the installer, once they’re done with it, then goes on to work on other, more fun, stuff, and thus the old installer remains 🙂 But I think it’s also fair to say that a lot of people would welcome its retirement.
Final comment:
As for some of the questions of “why isn’t XYZ available”, the plain and simple answer is that no one’s done it yet — same as any other Open Source project 🙂
Ah hah, thanks Wes. I was looking for LINT, couldn’t find it in 5.0 until someone on a FreeBSD support channel gave me a command to generate it, but it didn’t generate what I expected. I’m looking at NOTES now, and this is just what I was looking for. Thanks. I’m going to read over this file completely. I am still digging into the documentation on a lot of things.
Actually I have been running portupgrade with the (-a) switch (Upgrade all installed packages) now for several hours, and it’s going along swimmingly (To make another Gentoo comparison, this is like emerge –update world, pretty much).
I still don’t have my sound working properly but it may be unsupported; it’s onboard sound from a 5 year old computer. I haven’t worked with it much yet.
Quag7, when you find something interesting in LINT (4.x) or NOTES (5.x) try: man module, most modules have their own manpages that explains what they do and how to use them.
Also note that on 5.x there are two NOTES files to look into, one machine independant (/sys/conf/NOTES) and one for the specific architecture that you are using (eg. /sys/i386/conf/NOTES) if you are on IA32.
FreeBSD officially slags its own installer, so if you hate it, it’s not just you. From the man page for “sysinstall”:
BUGS
This utility is a prototype which lasted several years past its expiration date and is greatly in need of death.…
It’s not really that horrible of an installer, though; there’s just a few places where the selection process is a little counter-intuitive, notably the “type of installation” selection, where it’s remarkably easy to biff the “select everything” option. (For FreeBSD, “everything” is much leaner than a typical Linux distribution!)
Compile from the Source
[quote].
[/quote]
=> I still don’t understand… if Linux is not as stable as FreeBSD, why would you run COUPLES of Linux and willing to reboot it again and again. Why don’t you just wipe it and install your FreeBSD? Linux vs FreeBSD ?
Don’t mind what Greg Lehey says… He comes off as a jerk throughout the whole interview.
Firstly, I almost became a *BSD user – 386BSD was getting popular at about the time I bought my 486 in 1991, and I wanted an OS that could use that power – since DOS couldn’t.
However, Linux had also started drawing attention, and it came on fewer disks, so it was a little cheaper. I got SLS 1.0, running Linux 0.99plsomething or other.
Then some time later I got a 1995 BSDisc cdrom with FreeBSD and NetBSD, and also got some cdroms with Slackware 2.8, with Linux 1.2.8 on them. I had an ATAPI cdrom at that stage, so since neither BSD could read the cdrom (not SCSI), and Slackware could, I went with Slackware.
It does seem that Linux has played to a lower denominator than the BSds. That’s my observation.
I would like to think that some sort of arrangement could be made to dual-license the device drivers, so that Linux could make use of the *BSD’s, and the *BSDs could make use of the Linux ones – even the playing field a little.
Then, also in the interview discussing SCO’s lamentable (and thoroughly baseless and stupid) case, it is mentioned that SCO has BSD-licensed Research Unix. As far as I can make out, that only applies up to Seventh Edition – it would be interesting if they also BSD-licensed up to Tenth Edition, but I have no idea who owns that.
Does anyone know?
=> I still don’t understand… if Linux is not as stable as FreeBSD, why would you run COUPLES of Linux and willing to reboot it again and again. Why don’t you just wipe it and install your FreeBSD? Linux vs FreeBSD ?
Since, you quoted on my sentence too. I never said that I still keep Linux, which I said that I dumped it. I don’t have any of Linux in my boxes, so they all have Win2k Pro, WinXP Pro, FreeBSD, NetBSD and OpenBSD. Soon, will be Yellowtab and BlueEyedOS, when they release them.
Greg Lehey:
Possibly Linux users are more accustomed to jumping through hoops to get software installed, but FreeBSD users expect to be able to type ‘make install’ and have things done automatically.
Kind of funny to read this coming from you, as you have pointed out the fickleness of the ports system numerous times in your diary.
Let me for instance quote from your diary entry of the 18. of April 2003:
Started upgrading the ports on battunga. Somehow it’s touch and go whether port upgrades work at all. People seem to have forgotten that one of the aims of the FreeBSD project is to keep machines running as long as possible. battunga has now been up for 219 days, not very long, and it was a relatively fresh install at the time, but now half the ports have rotted to the point that portupgrade can’t recognize them.
On a Debian system, this kind of behavior would be simply unacceptable.
I should take some time to think of a better way.
The ‘better way’ requires a lot more work. Just take a look at the Debian Policy Manual:
Greg Lehey:
Just to get the thing to run at any speed, I installed fvwm2 and discovered that, apart from flashy graphics, it wasn’t missing too much.
A little nitpicky, but there is no such thing as fvwm2 anymore, unless you installed a very old beta. Once fvwm2 was declared stable, it became the official fvwm.
Anyway, I think GL makes a very good point here. WMs like fvwm and sawfish gives end users who are willing to put in a little effort an amazing amount of power to create a desktop in your own image. I would think an effort well spent for people who spend hours a day, year after year in front their computer..
While a lot more development money may be going into Linux right now, FreeBSD is helped by the 20+ years of development and maturity that the BSD base brings.
And how exactly does that helps with regard to the development of ‘new’ technologies like SMP, threading, NUMA, and so forth?
This might have been a good argument back in the mid-nineties when the free unix-systems were still very much playing catch-up, but it’s becoming less and less relevant. Besides, doesn’t actual deployment and number of eyeballs count for something?
Wes Peters:.
Really? Any specific reasons why? I mean, nothing has changed as far as the actual development model is concerned (though a number of hackers including Linus has adopted BitKeeper). It’s still very much Linus and his lieutenants. Nobody has been able to cram anything down Linus’ throat, that I’m aware of. Perhaps you know differently?
Paid Linux developers are paid to develop what their employers want, not what is best for the Linux system at this moment in time.
Of course, quite a few Linux hackers have been employed to do exactly what they previously did as a hobby.
Here’s Linus’ take on it: (…)
Part of the thing I like about the commercial side of it is it’s doing a lot of things I personally wouldn’t be interested in – and we do need it, it’s just it’s not what I do. So when I say commercial, in a way that implies I’m not very interested, that doesn’t mean it’s not a good thing. A lot of the commercial impact, a lot of the green stuff, that you need to make it successful.
The involvement of so many different entities is pulling Linux in many directions
Which is in exact accordance with Linus’ philosophy of software development. To let Linux go where people are willing to take it..
And despite this supposed lower barrier of entry, there seems to be a lot more happening on the Linux side of things..
There were a few other silly things in at as well that I just can’t be bothered to respond to..
samb:.
The only benchmarks I’ve encountered regarding things like the performance of the I/O scheduler, VMM, and process manager were my own dbench tests, in which FreeBSD 5.0 held a commanding lead over Linux 2.4 in terms of total throughput.
Of course, even despite my own caveats, these were contested by many to the point that they were relegated to meaninglessness.
Neverthless, I’m yet to see benchmarks to the contrary.
Care to post some, samb? I’d be especially interested in system throughput benchmarks of Linux 2.6 versus FreeBSD 5.0.
Scott Long: While a lot more development money may be going into Linux right now, FreeBSD is helped by the 20+ years of development and maturity that the BSD base brings.
samb: And how exactly does that helps with regard to the development of ‘new’ technologies like SMP, threading, NUMA, and so forth?
I don’t think anyone is arguing anything along the lines of FreeBSD having better NUMA support than Linux.
As far as threading goes, Linux and FreeBSD took two very different paths initially, and both were terrible implementations. Linux suffered awful context switching penalties with its _clone() based implementation, and FreeBSD’s threads couldn’t scale across processors, and didn’t provide support for multiple concurrent system calls from within different threads due to its userspace implementation.
Only now has Linux solved its threading woes with NGPTL. FreeBSD is trailing behind as far as adding KSE support to its userland libraries and finishing KSE support in the kernel (see and for further information)
As far as SMP goes, Linux and FreeBSD took a virtually identical approach. Both added initial support for SMP via a global lock on the entire kernel, the Big Kernel Lock (BKL) in Linux and the Big Giant Lock (BGL) in FreeBSD.
The main difference is in the move to a more modern and scalable SMP implementation. Linux has been slowly and progressively increasing its locking granularity. FreeBSD did virtually nothing until the 5.x series to move from the BGL. However, FreeBSD 5.0’s locking granularity is much finer than in Linux. Furthermore, the inclusion of scheduler entities will provide a very nice tradeoff between the advantages of both kernel and userland threads implementations.
But back to the issue at hand, I think, as everyone know, Scott Long’spoint stands out the most the most with FreeBSD’s VM subsystem, which is, at this point, a very well tuned and mature implementation. Linux has a very ecclectic VM implementation, especially in 2.6, utilizing new, untested, and untuned technologies almost exclusively. This lack of testing and maturity was what lead to the VM switch in the 2.4 series.
I think that primarily because of that very incident many people are wary about the code quality of the mainline Linux kernel. Certainly 2.4 has grown to be quite mature, but will we see a similar incident with 2.6? One can’t really know…
The bottom line is that FreeBSD’s legacy code base does not result in a development path that is markedly different from that of Linux. The simple fact remains that Linux has more zealots, therefore more mindshare, and consequently more developers and corporate support.
Scott Long:.
samb: And despite this supposed lower barrier of entry, there seems to be a lot more happening on the Linux side of things..
Of course, the result of such corporate backing and its significantly larger mindshare is that Linux has eclipsed FreeBSD in most areas.
samb:.
No, no one does. The proof is in the code, and the code isn’t there yet. However, the following is known: Sun switched from an M:N threads implementation to a 1:1 implementation in Solaris. While Solaris’s M:N implementation didn’t support scheduler activations and faced many of the same I/O starvation issues that FreeBSD 4.x was experiencing, the overall opinion seems to be that the complexity required for an M:N implementation leads to deficient overall performance.
Of course, as I said earlier, the proof is in the code. For the time being Linux has FreeBSD beaten with the NGPTL.
And another thing is for certain: the KSE threads implementation will be a significant improvement over the userland implementation in FreeBSD 4.x. Whether or not it will outperform Linux and the NGPTL remains to be seen.
So which OS is “better” in terms of purely technical merit? Well, I think it’s important to keep the following in mind:
The majority of the systems running FreeBSD or Linux are going to be uniprocessor.”)
As things have stabilized later in the 2.4 series, is there anything now worth mentioning?
One of my biggest with Linux remains to be the OOM killer. The OOM killer uses somewhat arbitrarily constructed algorithms (see for a full explanation of criteria) to determine which processes to kill in a low memory situation. This approach follows a surge of low-quality code being churned out primarily on the Linux platform which doesn’t properly handle low memory situations in userland applications.
Personally I think the OOM killer is a horrible decision on the part of the Linux kernel designers. Because of this on systems which don’t properly set ulimits, or even in other conditions where a memory exhaustion attack may be carried out on some service, the kernel will arbitrarily kill another process, often times a mission critical one..
I know there is a road map, but I would have liked a
question about its date confirmation (5.1, I mean). Also if we are to expect a 4.9 or not (at this point in time).
A big Thank You to the BSD persons for a really good
interview (and long to read). Thanks to osnews (Eugenia)
too.
______________
FreeBSD will never be a BSD ‘for the masses’.
Start with Linux if you can’t get used to BSD and then
switch later(you won’t regret it). FreeBSD never attempted
to rule the world (as said by the developers) don’t try to
change its mentallity now.
The basic administration GUI would be a nice development
but seems like it’s a time consuming task.
We don’t need a new installer but some gui admin
frontend would facilitate things a lot on basic changes to
the system:
boot options and maybe a ports and packages
installed *manager* with python and tcl/Tk, that could
read a file with the port version and other information,
but not a gui ports installer, the prompt is better.
Plenty of people like FreeBSD.
I very much enjoyed reading the interview. Once again, another fine interview.
Amen. The installer as it is has been one of the most easiest installers that I’ve ever used.
Eroll Flynn wrote
“PS. don’t tell me about OSX, I’m not willing to buy Apple’s hardware – too expensive.”
Well you should!!!!!!!!!!!
I began using FreeBSD in the spring of 2001. For years I had tried, unsuccessfully, to wean myself from MS Windows. I had tried RedHat 3.0 when it came out as well as debian and then Redhat again. But alas it never quite stuck. From what I remember my biggest problem was getting the system up and running and connected to my isp. The simple task of correctly setting up my modem was difficult. Everytime I read a how-to or explaination on some webpage I discovered that my system was setup differently than the system refered to. I always eventually got things working, but the process wasn’t enjoyable. When I installed FreeBSD everything wasn’t roses, but I wasn’t nearly as frustrated. The FreeBSD Handbook more than anything else contributed to my overall satisfaction with the Operating System. With the help of the HandBook I was able to not only connect to my ISP, but share the connections with my wife’s running windows 98. I had never been able to do that so simply before using Debian or RedHat.
I’ve found that the learning curve for FreeBSD is very linear and easy to pace as compared to Linux. I discover and remember things about FreeBSD naturally as opposed to having to write things down in order to remember them with Linux. As subjective as this might seem I feel differently using FreeBSD, the expected way to do things just seems to make sense..
If anything, I’ve got to mess more with software on FreeBSD than on Linux, although admitidly that has more to do with a lack of carefull effort on the part of software developers than FreeBSD itself.
One thing that I hope could move most people from Linux to BSD is zeals like Samb. The way to discuss things in BSD land is obviously far more professional…
I’d be especially interested in system throughput benchmarks of Linux 2.6 versus FreeBSD 5.0.
So would I actually. The IO scheduler, process scheduler, VM and VFS work being done on the linux kernel in the past year are major changes. This doesn’t mean that they aren’t well tested though. Noone wants a repeat of the VM debacle (circa linux 2.4.10). Past mistakes are learned from quite often on lkml.
Only now has Linux solved its threading woes with NGPTL.
There were two approaches taken. IBM developers came out with NGPT (Next Generation Posix Threads), an M:N implementation, that performed around 2x as well as the older linuxthreads module to glibc. The glibc maintainer, (and RedHat employee) Ulrich Drepper,. and others whose names escape me wanted to keep the simplicity of a 1:1 implementation, and with the help of a simple and fast userspace mutex (futex) written by Rusty Russell, Drepper authored the NPTL (Native Posix Thread Library) pthread implementation, which is also binary compatible with the older linuxthreads implementation, differing in favor of closer posix compliance. Preliminary benchmarks show it performing 4x as well as NGPT, though it is not a finished product yet. Better info is <A HREF=”“>here .
Still remains to be seen which approach, M:N or 1:1 is simpler to code for and is more robust, and under what workloads..
It could also be the difference in licensing. It would be one thing for SGI to release XFS, a technology that they have invested much in, under a license that says ‘You may take this, and use it for your own commercial profit, no strings attached, total freedom’, and quite another for them to release it under a license that says ‘You can use this as you please, but if you make changes to it, you have to tell us what you did to make it better and share your improvements to this code you benefit from.’
In the case of the latter license, any advantage for possible SGI competitors (Sun, Microsoft, IBM), to take XFS and improve it and ship it with their respective OS’s is gone, due to the ‘viral’ aspects of the GPL. This works for SGI, and others. The BSD license was not appropriate for situations like this,. and this is likely what led to corporate interest in linux. The GPL would not work for Apple,. their situation is entirely different.
The rest of this particular sentiment is just mindless baseless trollbait.
( As an aside, I would have liked to see what fbsd-core thought of reiser4, if they had a chance to look at it. It is completely different from reiserfs. Also ext3, which journals metadata _and_ data.)”)
This is a confusing statement. AFAIK 2.4.13 isn’t a shipping default kernel anywhere. Which means your friends built their own. If they are downloading and building their own kernels, I assume they also follow linux kernel development. If so,. they’d have been aware of the change in 2.4.10 and the various versions it took so solve some of those issues. I’d put this one as comparable to the users who are complaining about FBSD 5-CURRENT as being slow with debug on.
Also I think the term ‘stable’ is thrown around far too often by all. Stability needs to be measured, quantified. It means far different things to a sysadmin’s servers than it does to a end user who may have no concept of what real LOAD is. There are even seasoned admin who will probably never see what real load is like. Also,. as was mentioned before, there is only one FreeBSD, the various linux distributors often patch and modify their systems, and every bit of software on them. Stability issues on RedHat Linux do not mean that SuSE Linux Enterprise Server is necessarily deficient. It really is “Redhat’s Linux”, and “SuSE’s Linux”. In Slackware for example, there are no patches to things unless Patrick absolutely must. The default kernel shipped in that case may fall over in a VM corner case that RedHat’s kernel (which uses Rik Riel’s rmap vm) may hum happily along in.
In this example, the unity of FreeBSD is very advantageous. Any one linux distributor can soil the reputation of a kernel that only makes up one part of the system, and that is used by many others besides. The attention and detail in finding out the causes and reasons for problems ‘in linux’is not something everyone is willing to invest in. Or even should.
Personally I think the OOM killer is a horrible decision on the part of the Linux kernel designers.
Agreed. It sucked. Live, learn, yank the bad stuff out..
One userspace application that is well written but hasn’t always performed well in all situations on linux is X.
Often distributions (like Debian), would renice X to a higher priority to enhance the interactivity of X. It’s a hack that should no longer be necessary in 2.5. One more step along a path towards a good desktop experience for even the most demanding users. In short, with the speed of linux development (helped in the past year by various commercial interests, IBM, SGI, Namesys, Bitmover, etc) the things you dislike about linux may be gone tomorrow. Literally.
samb:There were a few other silly things in at as well that I just can’t be bothered to respond to.
Likewise, I saw a lot of little things in the article, the interviewers responses, and the comments thus far that are skewed, snide, false and more than just a bit off, where the developers glossed over their weaknesses and poked at others’ failings. Then again they’re here to represent FreeBSD and that is their concern. The interviewer wasn’t exactly unbiased either. All in all though, good reading.
I really really enjoyed this interview. Wes Peters responses were really on point. I would have liked to see less said about linux, and more about FreeBSD development, as I don’t tail their mailing lists. The commentary on SCO from fbsd-core was something I had wondered idly about previously.
’.
Pretty good interview. I thought a few of the Linux bashings were a bit off though, the whole “linux is fragmented and inconsistant” – well, I don’t know for sure but I suspect you’d have problems running NetBSD binaries on FreeBSD, and OpenBSD binaries on NetBSD. What is their point? There are multiple versions of *BSD too, and Redhat is no less a “system” than Red hat is. I think they underestimate the amount of integration and testing the distros do.
Oh, BSD users always seem to come out with “Linux zealots want to rule the world”, and use that to try and make the “we don’t care what the rest of the world thinks, we can use what we like” attitude seem more reasonable. I personally don’t agree. Wanting to rule the world is a sign of confidence in the product!
Anyway, to pretend that a FreeBSD or Linux user exists in isolation is false – whenever you hit problems with friends sending you word files, not being able to play the latest games, view movie trailers etc you have the problems caused by the rest of the world using non-free software. So, it seems like a defeatist argument really to say improving the desktop doesn’t really matter. I don’t mind them taking that approach, they should work on what they want, but I detect slight bitterness over the fact that Linux does.
The comparisons with MacOS were rather funny as well. MacOS X is not FreeBSD, not even close. Yes, it may use some of its code, but some of the responses seemed to be “that’s a solved problem, just use MacOS X”, or “if you want FreeBSD on a Mac, use MacOS” seemingly ignoring the fact that MacOS is not a free nor open platform like FreeBSD/Linux is.
I have scanner and scsi adapter for it.
But i couldn’t find driver for my scsi(sym53c416) in FreeBSD 4.x and I don’t see any support of it in FreeBSD 5.0
So for scanning I forced to use Linux!
Anyway FreeBSD really is the most productive and performant operating system I ever seen. Of course it lacks some features that Linux filesystems have. I don’t mean logging, softupdates are better, but it really would be nice to have features like dynamic inode allocation.
I really liked to try FreeBSD, but that harddisk kept timing out. After a reboot, everything works fine, but the more hd access there has been, the more timeouts come. That made the system wholly unusable so I threw it away.
Does anyone know what the problem is here?
I also tried NetBSD, but when I installed too much software I did not trust the 102% disk usage with -5xxx free blocks.
probably wrong geometry of disk
Eugenia, I didn’t mean the QUESTIONS were full of errors, but the ANSWERS. Editing their words never entered my mind, but I think that spelling errors, if not corrected, would at least be indicated as such, i.e.[sic], as is the custom in print media. To me, online journalism is print journalism in a different way and feel that as print articles are proofed so to are electronic ones.
As for the rest of my post, which WAS critical of the attitudes
manifested by those interviewed, but I think no more so than some others I’ve read here today (and, in fact, quoted or otherwise referred to by other posters) why was it modded along with the flame? Critique them I did and I also said specifically what I was referring to in the article.
As for the thumbnails, I really can’t recall having seen the thumbnails accompanying each section like that. Most distro reviews/articles do have screenshots in them, but I thought the thumbnails were particulatly illustrative. If I’ve missed them before, I’ve missed out then and I’ll need to re-read those distro reviews.
Only once response toward happy bsd users(happy for them means always bash linux) and immidiately response “i wish that linux zealots would disappear..”. Every time we have article about FreeBSD all that BSD lovers (of course linux haters) appear.
And they still claim that linux users are zealots. Just look
at that 70 responses and calculate how much contain someth. like “yes on BSD in start in 1 millisecond in linux 1 hour , linux crash, linux unstable”. Why u cannot love your OS without hating another ? I love linux but donot hate FreeBSD,
may be only some “suporters”. And most of the time they really doonot have linux on their disk , but there is something called Winxxxx.
However, FreeBSD 5.0’s locking granularity is much finer than in Linux..)
Linux has a very ecclectic VM implementation, especially in 2.6, utilizing new, untested, and untuned technologies almost exclusively. This lack of testing and maturity was what lead to the VM switch in the 2.4 series.
That isn’t a mistake which’ll happen again as organisations like OSDL have been performing extensive regression testing and benchmarking all throughout the 2.5 development process.
Linux’s VM has to stretch a lot further than FreeBSD’s – with Linux being used on everything from swapless embedded devices to SSI boxes with half a terrabyte of RAM there are a lot of cases to cover. A /lot/ of work has been done on the VM in 2.5 and current indications are extremely good.
FreeBSD’s VM is very robust within the <=4GiB PC/Server segment, but it hasn’t been tuned (or even designed) to cope with situations outside of this bracket. cf. the PAE work.
It’s also not fair to claim that the technologies used in 2.5’s VM are untested or untuned – the work done by OSDL, IBM and various other vendors combined with the leasons of the early 2.4 debacle mean that it’s far from untuned and untested.
I met Greg during a small FreeBSD meeting in 1996 in Cologne, at a time when he still lived in Germany and used to work on the Siemens-Nixdorf Unix kernel.
Interesting guy, professional attitude with a slight consultant touch, good at networking with people.
What you interpret as jerky or typical bad BSD style,
is rather part of a professional’s mindset in my opinion.
I always felt that FreeBSD was more about doing a free project with the high standards of industry, than the just for fun hacking approach and/or political (world domination) that I believe is typical for Linux.
It is worth listening to Greg.
Or take OpenBSD’s Theo DeRaadt.
I can only judge that guy from what I read directly from him and that was always competent and very worth to think
about.
So I believe that professional attitude and/or high technical competence seems to attract undeserved bashing from some parts of the Linux crowd.
Regards,
Marc
Slashdot got a story up about Debian GNU/NetBSD running on a sparc, just went through the Debian page seems they also got a project involving FreeBSD, it uses the kernel, libc and a number of kernel related bits and then the rest of the Debian system.
What i’d find interesting would be if somebody took the FreeBSD system and swapped in the Linux kernel, (yeah i know we got gentoo for a ports based system)
be interesting to compare a Linux system using glibc and one using FreeBSD libc’.
Half the time I try that, something messes up, and I end up building from source anyway. Ports isn’t all it’s cracked up to be. It’s nice, but it’s far from 100% reliable. When I download a source distro, I do a make install and everything usually works, generally. Not so much with FreeBSD.
There was a distro of Tomcat that included -pthreads in it’s build, which caused Apache to freak out. The -pthreads was in there by mistake for the FreeBSD build (there was also -dlinux), which again isn’t any fault of the FreeBSD system, rather sloppy packaging.
So Linux source builds tend to go much more smoothly than FreeBSD builds, although that’s more of an effect of usage than Linux versus FreeBSD.
I recently was in a situation where we had to upgrade from Java 1.1.8 on FreeBSD to something more, oh, say modern. Why would I run a Linux compatibility mode on a FreeBSD kernel to get 1.4.1 working when I could just run Linux and eliminate the compatibility factor?
That takes an entire layer of things that could go wrong out of the loop. And that is important when you’ve got developers writing buggy code and when things go wrong they might say “well it’s the compatability layer”.
Eugenia,
Well done on the article. Thank you for providing a piece i enjoyed reading.
I have a misc question for you though:
What sys monitor app are you using (as shown on between gaim & gnumeric)? Is that also a theme on the app or standard look?
Thanks.
Great article! Kudos to Eugenia, and the FreeBSD participants.
Marc van Woerkom:
So I believe that professional attitude and/or high technical competence seems to attract undeserved bashing [of Greg Lehey and Theo DeRaadt] from some parts of the Linux crowd..”
Also, see this post by TdR about the DARPA-hotel situation:
Now, I can be a dink, and it’s fun sometimes — but I try to never be both 1) a dink and 2) stupid, in the same breath. In the GL quote above, it’s factually and demonstrably STUPID of GL to claim that he’d consider the question uninteresting, while insulting the interviewer’s journalistic acumen in the process. The interviewer found it interesting enough to ask — finito benito, answer the damn question. He even gets wisely put in his place by Losh. As for TdR, he’s often an intelligent dink, which is entertaining, but in his reponse in the URL above he is a STUPID dink — ironic, considering the fact that he insults someone else for being inattentive (the caffeine quip).
About FreeBSD (and other BSD) ‘distros’:
PicoBSD:
EmBSD: (page down — alternate?)
ClosedBSD:
TrustedBSD:
MicroBSD is no longer with us (RIP). There may be others.
Dan Langille: If that’s a long read, does it mean most osnews articles are just one or two paragraphs?
There’s some little green text under the article that says “Read More”. Click on that. Once you’re done reading the ensuing page, there’s some more little green text at the bottom. Bis, ad infinitum.
Quag7: If you live in the United States and are familiar with the US, think of FreeBSD as Canada, from a user’s perspective. A little different, but easily navigable (I may have just bothered some Canadians – my bad).
No insult here, except from residents of Toronto and Alberta — who might only be insulted because they’re so Americanised
And I guess this means that OpenBSD is like Québec! (before anyone thinks I’m being derogatory: I’m French-Canadian, and also an OpenBSD user).
novel is spelled novell. go check it out
Bascule: However, FreeBSD 5.0’s locking granularity is much finer than in Linux.
phil:.)
I probably should say something along the lines of “theoretical locking granularity”
However I’m simply going off the number of locks present in each of the various kernel subsystems, and there are significantly more in FreeBSD.
From what I’ve read of recent Linux kernel ChangeLogs though (and due to overall stalls in FreeBSD development) Linux is currently farther along in removing the global lock. A considerable number of FreeBSD device drivers still require the Giant lock (see for a complete list) whereas Linux has been working on removing it for the past three kernel revisions. So, in response to Linux being farther along than FreeBSD in removing the BKL, all I can really say is “One would hope so”
Linux could, in theory, have the BKL removed by the 2.6 release, whereas in the case of FreeBSD the driver rewrites would necessitate the removal as part of 6.0 at the minimum.
As far as kernel subsystems go, the only area where Giant continues to see considerable use is in support for non-native ABIs (see for more information) This issue isn’t even comparable between Linux and FreeBSD as Linux doesn’t really need robust support for any ABI but its own..”
It’s interesting that you use “dink”: I would have considered it a relatively unknown word, as I can’t translate it via babelfish and you probably don’t relate to “double income, no kids”.
As I don’t know its exact definition, I can’t comment on it. 🙂
Fun aside. Perhaps I worked with too many dinks (I studied physics
in the past, that I consider it appropriate style.
Regards,
Marc
‘Dink’ is Canadian for dick.
And the snarky comments here slagging Greg Lehey, in general, have been uncalled for and more telling of the posters’ characters’ themselves than of GH. I mean, really, the guy spends his time answering questions for you, and then in turn gets all this shit. A ‘thank you’ might have been more appropriate.
Our founder and chief system administrator, who is one of the biggest BSD advocates on the planet and trained most of us to do BSD system administration, told us that Lehey has tried several times to get him kicked out of the FreeBSD community out of sheer malice. I was not sure that I believed this before, it seemed far-fetched that someone who wrote a column called “Demon’s Advocate” would be so nasty. But now I do. Lehey really does come across like a snobbish, self serving jerk. -Sarah
This is from maillists (names do not really matter):
[ … some of irrelevant text skipped … ]
> > > We really need to think about efficiency. Our 5.x performance sucks.
> > > Really sucks. We’re being nickled and dimed to death by extra
> > > instructions here, there, and everywhere.
> >
> > Unfortunately 5.x attempts to run with a thread-safe kernel, and that
> > involves extra overhead to work around races that 4.x didn’t even have
> > to dream about. In theory the increased performance should come from
> > increased parallelism at the cost of increased overhead. If FreeBSD
>
> No, in theory increased performance should come from increased
> parallelism with no increased overhead. Any increased overhead is a
> bug. Linux 2.4 runs a thread safe kernel with less overhead than we
> have in 4.x. Its possible.
How are we going to achieve increased paralellism w/o increased overhead?
Discounting redesigning algo’s which would have been a win in the
non-parallel kernel as well. A mutex is far more expensive than an spl.
You have to protect against more things. Of course overhead is going to
go up.
> As we get closer to a stable branchpoing, and continue to suck, I’m
> starting to think we should start over.
Well, the Project is free to choose that if it wishes.
[ … a bit more of irrelevant text skipped … ]
Judging from theses, there seem to be some really serious problems and concerns within FreeBSD community. And frankly I don’t like it. *sigh*
Grog’s linux box running his sat connection probably went down again. He did sound a bit edgy during the interview though. If you do choose to judge him based on this interview, please do it after you run a google search on “Greg Lehey” and read the thousands of past threads on the FreeBSD mailing lists where he has personally helped people like myself with FreeBSD related questions over the years.
1. No bad mouthing or cursing.
2. No attacks to other users or news editors of this web site.
So those who agreed to the rules, still wish to belittle interviewees ?
Please show some respect for other people and their views/ideas/way of doing things ( i wouldn’t post this if there was no rules, but as you see above, there are! )
Hi !
First I want to say that this article is very great. Please more !!!
I have a look at FBSD since the late 3.x versions. I`m using Linux (Debian & SuSE).
The most nice things of FBSD is in my own opinion the speed, stability and topicality (apps) of FBSD.
But a little bit more comfort would be nice. Without spending time for additional configuration a useful shell prompt and dircolors like in Debian and SuSE woult be great. Ok, that are small things but things which make computerwork a little bit more pleasing. Also I couln`t find some graphical tools configurating my FBSD-System and handle my apps such as SuSEs yast2 or gnome-apt. (I just think at the graphical Layout and its features of yast2. Not the underlaying code !!!!).
An graphical Installer (alternatively) to the curses version would be nice too. I think the curses base Installer really good but a OS in The year 2003 shoud offer a graphical option.
Yes I no that these Points are not the core targets of developing FBSD. Working with Computers shoul make fun too. In Germany we have a Sentence which means “The eye is eating
too”. If you no what I mean. Beside all Features and technical questions we should not forget a sensible portion of user-friendliness.
Marc van Woerkom: It’s interesting that you use “dink”: I would have considered it a relatively unknown word
I’m trying not to be TOO vulgar here
The real word I was thinking of, I’d rather not commit to an OSNews post. I think I spend plenty of bile here already.
slang: the guy spends his time answering questions for you, and then in turn gets all this shit. A ‘thank you’ might have been more appropriate.
Does the “Kudos” I gave count? I do appreciate GH’s (and everyone else’s) input — that doesn’t suddenly make GH a SUPAR GUY. He didn’t seem too thankful that the interviewer gave him a public forum to express his view (unless his views are that the interviewer sucks).
I’m trying not to be TOO vulgar here
The real word I was thinking of, I’d rather not commit to an OSNews post. I think I spend plenty of bile here already.
If it is your opinion its ok to express that.
Thanks for dink-enlarging my vocabulary.
Regards,
Marc
Judging from theses, there seem to be some really serious problems and concerns within FreeBSD community. And frankly I don’t like it. *sigh*
This kind of message may seem discouraging, but it is par for the course of any large opensource project. What was the resolution in this thread,. if any? Trailing linux-kernel, one sees all manner of disparaging commentary on differnt approaches taken to solve a problem,. or to current problems.
(For example, dynamic allocation of device nodes has been something desired in linux for many years,.. noone has figured out a way to do it very well yet,. they’re considering making dev_t bigger in the interim).
Just because you see a few messages that are not terribly encouraging doesn’t mean that the Project is in any danger. To the contrary, it shows that it’s quite healthy :o)
“On a Debian system, this kind of behavior would be simply unacceptable.
I should take some time to think of a better way.
The ‘better way’ requires a lot more work. Just take a look at the Debian Policy Manual:“
I have tried Debian numerous times. Unfortunately I always seem to kill it. The apt-upgrade doesn’t always work properly and leaves the system half toasted. There have been times when I’ve somehow managed to toast the internal database that governs the packaging system so it has no idea what to update or upgrade. And NO, I have no clue what I did wrong. I was following Steve Hunger’s Debian GNU/Linux Bible. By all accounts a decent book.
Perhaps my hardware was flaky in this regard, but it’s happened to numerous systems I’ve tried Debian on. For all the much vaunted reputation of Debian, for me it just doesn’t work. A friend of mine seemingly has absolutely none of the issues I’ve had with it. Go figure.
I’ve had issues with the ports system in FreeBSD too. Normally this is simply a case of waiting a day or two for whatever is screwy to get fixed. Sometimes it’s a case of pkg_deleting the old program and reinstalling via the ports.
No system is perfect, and there will always be issues on any OS.
I’d use FBSD but want installation to be easy. Sorry. I see this issue like those who make a pretty good car and leave the Model-T type crank starter on it–“The rest of it works great” they say, “Our users don’t care!”
Certain writers produce a novel which appeals to a certain crowd, a certain market segment. Thereafter they write to that crowd. This is what the FBSD team seems to be doing. In so many words it was said that desktop usability is antithetical to the other functions this OS has been used for, and that the users don’t expect that.
I think the best idea is a system which is user-friendly like Mac, but runs on Intel/AMD chips–best of both hardware and software worlds.
I liked the interview very much.
I found that Greg Lehey didn’t come across at all well, in fact he came across as a very poor interview candidate and socially awkward. He seemed to frequently misunderstand or simply poorly answer questions (though in contrast Long and Losh came across very well).
The cheap shots shots made… e.g.:
Possibly Linux users are more accustomed to jumping through hoops to get software installed, but FreeBSD users expect to be able to type ‘make install’ and have things done automatically.
…are entirely puerile, quite stunningly stupid from a self interest perspective and display an amazing degree ignorance of modern software package management solutions.
No project needs a contributor with an immature attidue (and I belive that goes for commercial software development too). | https://www.osnews.com/story/3415/focus-on-freebsd-interview-with-the-core-team/ | CC-MAIN-2021-21 | refinedweb | 21,220 | 71.65 |
make sure the tutorial matches grokproject behavior exactly
Bug Description
The Grok tutorial has an introduction about how to install Grok. Unfortunately in the mean time grokproject has undergone some
evolution and the tutorial is evidently slightly out of date. We need to fix this.
I re-read the tutorial when grok 0.12 was released and I found some things to correct/ameliorate.
Grokproject 0.7 doesn't ask for the initial module name app.py anymore. The tutorial should be adapted.
I found two paragraphs where there is a mention to it:
You will be asked a number of questions now. First you need to supply
the name of the initial module that your package will contain. We'll
stick with the default ``app.py``:
In it is a Python package directory called ``sample`` with
the ``app.py`` file that grokproject said it would create.
In groktut/
<includeDepende
In the "Unassociated templates" sidebar, problem of rest syntax:
Since in the given ``app.py``e we have no more class using it, the
``bye.pt`` template will have become *unassociated**.
please replace by:
Since in the given ``app.py`` we have no more class using it, the
``bye.pt`` template will have become **unassociated**.
And always in this sidebar, it's mentionned that grok will crash if it found an unassociated template. It's false, it's a UserWarning in grok 0.12 now.
And I propose to ameliorate this:
In the "Automatic forms" sidebar, replace XXX by a link to http://
In "The rules of persistence", maybe you can say you can use:
Instead to add:
self.context.
You can replace
self.list = []
by:
self.list = PersistentList()
and add the following import:
from persistent.list import PersistentList
I'm re-assigning this to Brandon as he's the one working on grokproject right now. Brandon, could you go through Michael's changes to the tutorial to confirm whether they match the behavior?
i wonder if we should not simply use the 'paster create -t' formula to describe installation
and its output.
The current tutorial describes the Grok installation process
During the Grok sprint at PyCon, some of us went through the tutorial, carefully testing along the way, and made some corrections. These corrections should bring the tutorial up to date, and improve the grammar in some places.
Since we did not have SVN commit privileges, I sent the changes Brandon Rhodes an SVN diff for doc/tutorial.txt and doc/about.txt, and have also attached the SVN diff here. | https://bugs.launchpad.net/grok/+bug/172797 | CC-MAIN-2021-21 | refinedweb | 422 | 67.45 |
In one of Quarkslab's projects, we came across the issue of randomizing a large set of integers, described as a list of disjoint intervals. These intervals can be represented as a sorted list of integer couples, like this one: \([1, 4], [10, 15], [17, 19], \dots\). The idea is to randomly and uniquely select numbers across these intervals, giving a shuffled list of numbers that belong to them. For instance, \([1,10,18,4,3,11,15,17,19,12,14,13,2]\) is a possible output. Moreover, each possible permutation of the integers set should have equal probability of appearance. If you're just interested in the final library that "does the job", go directly to the implementation section to download the leeloo C++ open-source library on Github!
Trivial algorithm
The not-so-trivial (but still simple) algorithm is to generate an array containing all the original sorted integers, and then apply a shuffle algorithm (like Fisher–Yates [1]) that uses a common Pseudo Random Number Generator (PRNG). As an example, in C++, std::shuffle can be used to do that.
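A minimal Python sketch of this approach (the interval representation is illustrative, not leeloo's actual API):

```python
import random

def shuffle_intervals(intervals):
    # Materialize every integer of the disjoint [low, high] intervals...
    buf = [i for low, high in intervals for i in range(low, high + 1)]
    # ...then shuffle the whole buffer (Fisher-Yates under the hood).
    random.shuffle(buf)
    return buf

print(shuffle_intervals([(1, 4), (10, 15), (17, 19)]))
```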
The main issue is that a buffer of n integers is required. For instance, with \(2^{31}\) 32-bit integers, one needs a buffer of 8GB, which is not acceptable in our situation.
Other trivial algorithm
Another approach to reduce the memory footprint is to randomly select numbers between \([\![0, n [\![\) (using a classical PRNG), and keep a bitfield of already returned candidates so as not to return twice the same. When we start hitting already-returned numbers too often, we switch to another algorithm (a sketch of the whole process follows the list below):
- With \(R\) the remaining number of candidates to find, get a random number \(r \in [\![0, R [\![\) and find the position of the \(r\)-th bit not set in the bitfield. That scan can be optimized thanks to SSE instructions ;
- Set that bit and return the value ;
- Go on with \(R=R-1\) until \(R=0\).
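A sketch of the two phases, assuming a plain Python list as the bitfield (a real implementation would pack bits and use SSE to scan for the \(r\)-th unset bit):

```python
import random

def unique_random(n):
    seen = [False] * n                  # bitfield of already-returned values
    remaining = n
    # Phase 1: plain rejection sampling while collisions are rare.
    while remaining > n // 2:           # switch-over heuristic (illustrative)
        v = random.randrange(n)
        if not seen[v]:
            seen[v] = True
            remaining -= 1
            yield v
    # Phase 2: draw r in [0, R[ and return the r-th unset bit.
    while remaining:
        r = random.randrange(remaining)
        for v, used in enumerate(seen):
            if not used:
                if r == 0:
                    seen[v] = True
                    remaining -= 1
                    yield v
                    break
                r -= 1

print(list(unique_random(10)))
```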
There are multiple drawbacks with this algorithm:
- It still needs \(O(n)\) bytes of memory (even if it is less than with the previous algorithm) ;
- The final stage can be really slow, as the linear scan for the \(r\)-th unset bit gets expensive once the remaining bitfield does not fit in cache.
See also [5] for a description of a similar algorithm.
Problem reduction
Thus, the main issue is to generate a list of unique random numbers within a given interval \([\![0, n [\![\), with good performance (say about 50 million numbers per second on a Core i7 3rd gen) and a small memory footprint (\(O(1)\)).
So, the final problem is to be able to choose in an equiprobable manner a permutation of \([\![0, n [\![\) among the \(n!\) ones (\(n!\) being the number of permutations of \(n\) distinct numbers [4]), using only \(O(1)\) bytes of memory (keeping in mind the performance criteria).
The first question is to understand if this is even feasible, and the second issue is to figure out an efficient method to achieve this.
Formalization
Let's do some math to formalize this problem.
Some context: let \(n < 2^{32}\) and \(\{i \in [\![0,n[\![\}\) the set of numbers to shuffle ; \(n\) is always chosen as a prime number.
The choice of \(n\) as a prime induces interesting properties (as shown below), but the careful reader will notice that we won't always have a prime number of integers to generate. However, we can still live with that.
Indeed, let \(n\) be the original number of integers to generate and \(p\) chosen such that:
- \(p\) is prime ;
- \(p \geq n\) ;
- \(\forall i \in ]\!]n,p[\![\), \(i\) is not prime.
Or, in other words, \(p\) is the smallest prime number greater than or equal to \(n\).
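A naive helper to find such a \(p\) (trial division is enough for an illustration; a real implementation would use a faster primality test):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime_geq(n):
    # Smallest prime p such that p >= n.
    while not is_prime(n):
        n += 1
    return n

assert next_prime_geq(89) == 89   # n already prime: p = n
assert next_prime_geq(90) == 97   # 90..96 are all composite
```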
That way, we will produce numbers between \([\![0,p[\![\), and not \([\![0,n[\![\). It is not really an issue because:
- when a number in \([\![n,p[\![\) is generated, just discard it and compute the next one until it belongs to \([\![0,n[\![\) ;
- the density of prime numbers in \([\![0,2^{32}[\![\) allows us to do that, as the maximal gap between two consecutive prime numbers is 354 [2].
We will now work in \(F_p = \mathbb{Z}/p\mathbb{Z}\). \(p\) being a prime, \(F_p\) is a division ring [3] (that's the great property).
Then, with \(S_p\) the set of permutations of \(F_p\), our problem is equivalent to choosing with equal probability a permutation in \(S_p\).
(Partial) resolution
All of that theory is nice, but it does not change much in practice. Let's now get to the crux of the issue, and understand what can be done with \(S_p\) :)
Permutation polynomial
One can notice that every application \(F_p \rightarrow F_p\) can be written as a polynomial of \(F_p[X]\). Indeed, given an application \(F\), a polynomial equivalent to \(F\) can always be found, for instance by Lagrange interpolation over the \(p\) points of \(F_p\).
Thus, every element of \(S_p\) can be described as a polynomial of \(F_p[X]\).
Trivial algorithm
That way, one (still not-so-trivial ;)) algorithm would be (sketched after the list):

- Generate a random polynomial of \(F_p[X]\). This is equivalent to computing \(p\) random coefficients ;
- Check if this polynomial represents a permutation ;
- If not, go back to the first step.
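A sketch of this rejection loop for a tiny \(p\), using an exhaustive permutation check (which [6] replaces with probabilistic tests for large \(p\)); the \(O(p)\) storage for the coefficients is plain to see:

```python
import random

def random_permutation_poly(p):
    while True:
        coeffs = [random.randrange(p) for _ in range(p)]   # O(p) memory
        image = {sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
                 for x in range(p)}
        if len(image) == p:   # the polynomial hits every value: a bijection
            return coeffs

print(random_permutation_poly(7))
```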
But wait... There are multiple issues here.
First, these \(p\) coefficients need to be stored in memory, giving a memory footprint of \(O(p)\) bytes.
Moreover, the problem of checking whether a polynomial is a permutation one or not can be somewhat complex and slow. Probabilistic methods exist (shown in [6]), but they still leave us with some potential errors. The performance cost of all of this could be important. We didn't take the time to benchmark this algorithm as it suffers from the \(O(p)\) memory issue...
And finally, if we have to generate a buffer of \(p\) integers anyway, we could just stick to the first "trivial" algorithm described at the beginning of this paper.
The real great stuff
We need to find a better way to generate these polynomials. We will use the \(F_p\) division ring properties.
Indeed, it can be demonstrated that, in \(F_p\), every permutation is a bijection, and every bijection is a permutation.
Thus, a whole set of polynomials can be described:
- for every \((a,b) \in F_p^* \times F_p\), \(X \mapsto a*X+b\) is a bijection, and thus belongs to \(S_p\) (Equation 1) ;
- for every \(c\) such that \(gcd(c,p-1) = 1\), \(X \mapsto X^c\) is also a bijection [6], and belongs to \(S_p\) (Equation 2).
Moreover, as the composition of two bijections is a bijection, combining these two sets of polynomials will produce new ones.
What's even more interesting is that it can be demonstrated that, for every \(a \in F_p^*\), \(\{X+1, a*X, X^{p-2}\}\) is a set of generators of the group \(S_p\), using the composition law. [6]
So, the final result is that, theoretically, every permutation of \(S_p\) can be written as a composition of the polynomials mentioned above.
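Both families (Equation 1) and (Equation 2) are easy to check numerically for a small prime:

```python
from math import gcd

p = 17
full = set(range(p))

# (Equation 1): X -> a*X + b is a bijection for every a in F_p*, b in F_p.
assert all({(a * x + b) % p for x in range(p)} == full
           for a in range(1, p) for b in range(p))

# (Equation 2): X -> X^c is a bijection exactly when gcd(c, p-1) == 1.
for c in range(1, p):
    assert ({pow(x, c, p) for x in range(p)} == full) == (gcd(c, p - 1) == 1)
```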
Entropy and equiprobability: how random is random?
For the following, we define three sets:
- \(G_a = F_p^*\), the values that \(a\) can take in (Equation 1) ;
- \(G_b = F_p\), the values that \(b\) can take in (Equation 1) ;
- \(G_c = \{c \in F_p \ / \ gcd(c,p-1)=1\}\), the values that \(c\) can take in (Equation 2).
Let's define these two applications:
\begin{align*} L : G_a \times G_b &\rightarrow S_p\\ (a,b) &\mapsto X \mapsto a*X + b \end{align*}
\begin{align*} G : G_c &\rightarrow S_p\\ c &\mapsto X \mapsto X^c \end{align*}
The first idea coming to our mind is to randomly combine the polynomials generated by these applications.
Let's define \(GS_p\) (\(S_p\) stands for 'seed part') as \(G_a \times G_b \times G_c\).
For instance, let's randomly choose \(S_0=(a_0,b_0,c_0) \in GS_p\) and \(S_1=(a_1,b_1,c_1) \in GS_p\). The couple \((S_0,S_1) \in GS = GS_p \times GS_p\) can be considered as the seed of our random number generator.
We know that \(L(a_0,b_0) \circ L(a_1,b_1)\) is a permutation polynomial, \(L(a_0,b_0) \circ G(c_0)\) is another one, \(G(c_0) \circ G(c_1)\) also, etc...
(Note: \(\circ\) is the function composition, which means that, for instance, \((L(a,b) \circ G(c))(X) = a*X^c+b\))
Thus, every 'seed' value that belongs to \(GS\) can produce a set of permutations.
Unfortunately, there are several issues with this approach, which we will call "entropy reduction". Indeed, we know that we can create permutations by composing \(L(a_0,b_0), L(a_1,b_1), G(c_0)\) and \(G(c_1)\), but:
- \(L(a_0,b_0) \circ L(a_1,b_1)\) can also be expressed as \(L(a_0*a_1, b_1*a_0+b_0)\). In other words, a composition of affine functions is an affine function. Moreover, as shown in Appendix A, choosing independently \(a_0\) and \(a_1\) in \(G_a\) and computing \(a_0*a_1\) is equivalent to randomly choosing a number in \(G_a\). The same goes with \(b_1*a_0+b_0\). Thus, if we choose one seed \((S_0,S_1) \in (G_a \times G_b)^2\) and compute \(L(a_0,b_0) \circ L(a_1,b_1)\), this is equivalent to choosing a single seed \(S_0 \in (G_a \times G_b)\) ;
- The same issue comes with \(G(c_0) \circ G(c_1)\), which is equals to \(G(c_0*c_1)\) ;
- Even by combining \(L(a_0,b_0) \text{ with } G(c_0)\), then with \(L(a_1,b_1)\) and \(G(c_1)\), (giving \(L(a_1,b_1) \circ G(c_1) \circ L(a_0,b_0) \circ G(c_0)\)), the following question must be answered:
\begin{align*} \text{With } GS' = GS \times GS,\\ UPRNG: GS' &\rightarrow S_p\\ (a_0,b_0,c_0,a_1,b_1,c_1) &\mapsto L(a_1,b_1) \circ G(c_1) \circ L(a_0,b_0) \circ G(c_0) \end{align*}
is there any couple \((S_0,S_1) \in GS'xGS'\) such as \(UPRNG(S_0) = UPRNG(S_1)\) ?
Another way to formalize this problem is as follows: given a seed taking values in a space \(S \text{ of } s\) integers from \(\mathbb{Z}/p\mathbb{Z}\) (\(s\) unknown), is:
\begin{align*} UPRNG : S &\rightarrow S_p\\ seed &\mapsto \text{method to generate a permutation polynomial} \end{align*}
a bijective function?
Now, let's demonstrate a somehow intuitive result.
If \(F\) is a bijection, then \(\|S\| = \|S_p\|\), which gives \(s = p!\). This means that, in order to generate a random permutation of \(S_p\), we must choose a seed number between the \(p!\) ones. In other words, we must choose \(p\) unique random numbers. Well, this has just sent us back to the beginning of this article.
Our method for compromises
But the game is not yet finished, we haven't gone this far for nothing. So let's work a bit with our results.
We now understand that, somehow, some compromises have to be made. We know that the size of the seed must be reduced. By doing this, we know that we won't be able to uniquely generate all the possible permutations of \(S_p\). Moreover, we want to do this in such a way that these properties will be conserved the best way:
- we still reach a fairly "reasonable" amount of permutations among \(S_p\) ;
- all these permutations are unique (or a "lot of" them) ;
- all of this has still "good" performances (we haven't talk yet a lot about this one, but we don't forget it :)).
At this point, we decided to study the following UPRNG (named \(UPRNGcomp\)):
\begin{align*} \text{With } GS = G_a \times G_b \times G_c \times N^*,\\ UPRNGcomp : GS &\rightarrow S_p\\ (a,b,c,n) &\mapsto (G(c) \circ L(a,b))^n \end{align*}
This choice is made because it produces a function that can be easily computed, and can still give interesting results.
Number of generated permutations
If \(n\) is randomly chosen in \([\![1,N[\![\), then the number of generated permutations with this method, is : \(p*(p-1)*Phi(p-1)*N\) (with \(Phi\) the Euler totient function [3]).
As we've seen above, the number of unique generated permutations may be inferior to this.
Thus, if we have for instance \(n=2\), we can search for the set of \(seeds \in GS\) for which same UPRNG is the same.
Let
- \((S_0,S_1) \in G_a \times G_b \times Gc\) ;
- \(S_0 = (a_0,b_0,c_0)\) ;
- \(S_1 = (a_1,b_1,c_1)\).
We need to resolve:
The complete resolution of this equation being a bit human-time consuming, we'll do it with \(c_0=c_1=3\), and using mathematical software, we can find these solutions :
- obviously, \(\{a_0=a_1, b_0=b_1\}\) ;
- and \(\{a_0=p-a_1, b_0=b_1=0\}\).
Which means than, when \(b_0=b_1=0\), only half the numbers of possible values for \(a\) will give a unique permutation. By the way, this proves the fact that our UPRNG function isn't bijective.
We can test this easily with \(p=17\). \(gcd(3,17)\) being equals to 1, we can define:
\begin{align*} UPRNG: G_a \times G_b \times Gc &\rightarrow S_p\\ (a,b) &\mapsto G(3) \circ L(a,b) \circ G(3) \circ L(a,b) \end{align*}
And this python code:
def l(x,a,b,p): return (a*x+b)%p def g(x,c,p): return (x**c)%p def lgn(x,a,b,c,p,n): for i in xrange(0,n): x = g(l(x,a,b,p),c,p) return x list_x0 = list() list_x1 = list() p = 17 a = 5 b = 0 c = 3 n = 2 for x in range(0,p): list_x0.append(lgn(x, a, b, c, p, 2)) list_x1.append(lgn(x, p-a, b, c, p, 2)) print(list_x0) print(list_x1)
Which gives:
[0, 4, 8, 5, 16, 14, 10, 6, 15, 2, 11, 7, 3, 1, 12, 9, 13] [0, 4, 8, 5, 16, 14, 10, 6, 15, 2, 11, 7, 3, 1, 12, 9, 13]
One possible solution is to reduce the space of \(G_a\), for instance with \(G_a=[1,\frac{p-1}{2}]\). But, without the full resolution of the (equation 1), this is just a partial resolution.
Going further with another UPRNG
If we want to reduce the "entropy reduction", we need to improve the size of the seed values. For instance, this UPRNG could be defined as:
\begin{align*} UPRNG2 : GS = G_a \times G_b \times G_a \times G_b \times G_c \times N^* &\rightarrow S_p\\ (a_0,b_0,a_1,b_1,c,n) &\mapsto (L(a_1,b_1) \circ G(c_1) \circ L(a_0,b_0))^n \end{align*}
As above, we try to find \((S_0,S_1) \in GS\) such as \(UPRNG2(S_0) = UPRNG2(S_1)\).
Using mathematical resolution software, with \(n=1 \text{ and } c=9\) (for instance), this gives the following solutions:
Let
- \(S_0=(a_0,b_0,a_1,b_1)\) ;
- \(S_1=(a_2,b_2,a_3,b_3)\).
We have:
- \(S_0=S_1\) (trivial) ;
- \(\{a_1 = a_3*a_2^9*(a_0^{-1})^9, b_0 = b_2*a_0*(a_2^{-1}), b_1 = b_3\}\).
which gives constraints on the choice of our constants in order to try and have \(UPRNG2\) bijective.
The resolution of the full system is left for further work on the subject ;)
Implementation and benchmarks
The implementaion of what's described here (and more) has been done in the C++ "leeloo" open-source library that you can find on github here :. It also provides python bindings for python fans around here.
The library allows to manage integer intervals, aggregate them and randomly sort the elements as described in the introduction. It also provides an IPv4 range parser for convenience usage.
There are two main UPRNG implemented:
- one that uses the method described here [5]. This one is historical, optimised with SSE/AVX instructions and "fast" (see figures below) ;
- one that uses \(URPNGcomp\). It is 8 to 14 times slower that the original one (due to the modular exponentation), but provides a larger possible set of permutations.
Moreover, each UPRNG can be instantiated in "atomic" mode, which makes them thread-safe.
Some figures about performances : on a Core i7-3770 (3.4GHz, 4 cores with Hyperthreading), we obtain:
- with the first UPRNG, we can generate, with the SSE/AVX and parallelised version, about 290 millions of 32-bit numbers per second. This makes a memory bandwidth of about 1.2GB/s, making this generator CPU-bound (for now) ;
- with the second UPRNG, we can generate, with the parallelised version, about 30/n million numbers/s ('n' being the part of the seed that defines the number of compositon of \(G \circ L\).). This is because the performance of this generator is limited mainly by the modular exponentation computations. This generator is also clearly CPU-bound.
C++ and Python usage samples can be found on github at : and.
Conclusion
Giving some compromises, we find a solution to our original problem that is actually good enough for our project needs. We still are a bit frustrated not to have the actual time to go further in this subject.
There exists other ways that haven't been studied here to generate permutation polynomial, as for instance described on this wikipedia page :. This is also another interesting work that could be done :)
It can also be mentioned that some people already looked into the subject and published articles. For instance, [5] uses the quadratic residues with prime numbers. It can be noticed that the permutation given by this method can also be expressed as a permutation polynomial (but involves more computations).
Finally, thanks to Sebastien Kaczmarek (@deesse_k) for the original talks on the subject (and other ideas), to Ninon Eyrolles for her help on the redaction and some of the mathematics here and Kévin Szkudlapski for his advices.
Going further
For the reader that might want to go further, here are some ideas:
- Work on UPRNG2 ;
- For the described UPRNGs, find out the number of unique permutations ;
- Benchmark and analyze other ways to generate permutation polynmials ;
- Something that would be nice: given a seed space \(S\) of size \(s\), find out \(S\) and a subset of \(S_p\) such as:
\begin{align*} F : S &\rightarrow \text{subset of }S_p\\ seed &\mapsto F(seed) \end{align*}
is a bijection.
You're welcome to send us feedbacks :).
Appendix A
Let:
- \(p\) a prime number ;
- \(F_p = \mathbb{Z}/p\mathbb{Z}\) (which is a division ring) ;
- \(X \text{ and } Y\) two independant random variables of \(F_p\).
First, we have:
\begin{align*} P(X=x) = \frac{1}{p},\\ P(Y=y) = \frac{1}{p} \end{align*}
We have, for every \(n \in F, and \(y=n-x\) is unique for a fixed \(x\). The number of \((x,y) \in F_p^2\) such as \(x+y=n\) is then \(p\).
So,
which means that choosing independently two random numbers in \(F_p\) and sum the two of them is equivalent to only choose one random number in \(F_p\).
Let's demonstrate the same result with \(X. As \(F_p\) is a division ring, we have \(y=n*x^{-1}\) which is unique for a given \(x\). So, the number of \((x,y) \in F_p^2\) such as \(x*y=n \text{ is }p\), and
If we take these two results, and let \(X, Y \text{ and } Z \in F_p\),
\begin{align*} P(X*Y+Z=n) &= \sum_{x*y+z}(P(X=x)*P(Y=y)*P(Z=z))\\ &= \sum_{x*y+z=n} \frac{1}{p^3} \end{align*}
Let \(y\) and \(z \in F_p\), then \(x=(n-z)*y^{-1}\) exists and is unique for a given \(y\) and \(z\). Thus, the number of \((x,y,z) \in F_p^3\) such as \(x*y+z=n\) is \(p^2\), and: | https://blog.quarkslab.com/unique-random-number-set-computation.html | CC-MAIN-2019-09 | refinedweb | 3,215 | 58.82 |
Eclipse Community Forums - RDF feed Eclipse Community Forums [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[Hi everyone, Thank you for reading my post. Let me explain you what I am trying to do and what my problems are. I created a new "Dynamic Web Project" using Eclipse. The "Project Explorer" view shows the following architecture: (L1 = Level 1, ..., L5 = Level 5) ------------------------------------------------------------ -- 1. L1 -- WebProject 2. L2 -- -- Java Resources 3. L3 -- -- -- src 4. L3 -- -- -- servlets 5. L4 -- -- -- -- (default package) 6. L5 -- -- -- -- -- ClassUploadFile.java 7. L3 -- -- -- Libraries 8. L4 -- -- -- -- Apache Tomcat v6.0 [Apache Tomcat v6.0] 9. L4 -- -- -- -- EAR Libraries 10. L4 -- -- -- -- JRE System Library [jre6] 11. L4 -- -- -- -- Web App Libraries 12. L2 -- -- JavaScript Support 13. L2 -- -- build 14. L2 -- -- WebContent 15. L3 -- -- -- jsp 16. L4 -- -- -- -- FormUploadFile.jsp 17. L3 -- -- -- META-INF 18. L3 -- -- -- WEB-INF 19. L4 -- -- -- -- lib 20. L4 -- -- -- -- web.xml 21. L1 -- Servers 22. L2 -- -- Tomcat v6.0 Server at localhost-config ------------------------------------------------------------ -- As you can see, I have created: - a folder: "jsp" (line 15.), - a "Source Folder": "servlets" (line 4.) - two files: a JSP "FormUploadFile.jsp" (line 16.) and a servlet "ClassUploadFile.java" (line 6.). Roughly, - the JSP is a HTML form that I want to use to choose the file I want to upload, - the servlet does the uploading job. My first problem is the following: I do not really understand the architecture above and didn't find the reference documentation that explains it. I only had a glance at the file: ".settings\org.eclipse.wst.common.component" which seems to be linked to the problem... ------------------------------------------------------------ -------------------- <?xml version="1.0" encoding="UTF-8"?> <project-modules <wb-module <wb-resource <wb-resource <wb-resource <property name="context-root" value="WebProject"/> <property name="java-output-path"/> </wb-module> </project-modules> ------------------------------------------------------------ -------------------- Tiebreaker: what is that "default package" that was created? What to do with it?? Third problem: I have to customize the "web.xml file". Here is what is presently looks like: ------------------------------------------------------------ -------------------- <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns: <display-name>WebProject</display-name> <servlet> <display-name>ClassUploadFile</display-name> <servlet-name>ClassUploadFile</servlet-name> <servlet-class>ClassUploadFile</servlet-class> </servlet> <servlet-mapping> <servlet-name>ClassUploadFile</servlet-name> <url-pattern>/FormUploadFile</url-pattern> </servlet-mapping> </web-app> ------------------------------------------------------------ -------------------- Can you tell me if something is wrong whith this because when I submit the form, the servlet is not executed: the form is re-printed on the screen and re-initialized. 
Here is the "FORM" in the JSP file: ------------------------------------------------------------ -------------------- <FORM ENCTYPE="multipart/form-data" ACTION="" METHOD="POST"> <TABLE STYLE="background-color: lightgreen;" CELLPADDING="5"> <TR> <TD> Choose a file to upload: </TD> </TR> <TR> <TD><INPUT NAME="uploadedfile" TYPE="file" style="width: 227px"/></TD> </TR> <TR> <TD COLSPAN="2"><INPUT TYPE="submit" VALUE="Submit" /></TD> </TR> </TABLE> </FORM> ------------------------------------------------------------ -------------------- Thanks in advance for your help, -- Lmhelp]]> lmhelp 2009-02-12T09:33:54-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[From Konstantin Komissarchik: Questions about the use of WTP are best posted on the newsgroup. This mailing list is for discussing development of WTP itself. ebtools > > Tiebreaker: what is that "default package" that was created? > > What to do with it? When you don't create any folders to hold your classes (called packages in Java), your classes are considered to be in a "default package". Another way of thinking about it is that these classes do not have a package. Note that the package namespace is relative to the directory you specify as java source root. In your case, it looks like you designated the servlets directory as a source root. The default package node that you are seeing in the Project Explorer doesn't correspond to anything physical on disk. It just there to help you understand the above situation. If you are happy with your classes not having a package specified, then you don't need to do anything. > >? The WebContent directory is the root for the web content contained in your app. The reason that the web content root is not set as the project root is that you don't want to be picking up various project metadata files, java source files, etc when resolving URLs. You can control the directory that's designated as the web content root either at project creation or in project properties (search for web). > > Can you tell me if something is wrong whith this > > because when I submit the form, the servlet is not > > executed: the form is re-printed on the screen > > and re-initialized. You aren't specifying the servlet as the target of your action in your HTML form element. The standard HTML behavior in this case is to send your POST request to URL that originates it, which is what you are seeing. - Konstantin]]> lmhelp 2009-02-12T09:35:22-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[Hi Konstantin, Thank you for your answers. For the two first points, I am ok. For the last one: > You aren't specifying the servlet as the target of your > action in your HTML form element.... Can you please tell me what you would put in the "ACTION" attribute of the "FORM" element ; or how to solve the problem using the "web.xml" file? Thanks for your help, regards, -- Lmhelp]]> lmhelp 2009-02-12T10:13:03-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[-- Really do not manage to find the right value for the "ACTION" attribute of the "FORM" element... please tell me according to the project architecture I "drew" in my first post. -- Wondering if the "web.xml" file has to be "configured", after the "ACTION" attribute has been set properly... Regards, -- Lmhelp]]> lmhelp 2009-02-12T12:03:22-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[>... 
The empty ACTION is used in cases where you have a single JSP or a servlet that is capable of handling different app states depending on POST payload or URL segments in a GET. In your case, you are trying to hand over from the JSP to a Servlet, so you need to specify that. The value of the ACTION attribute is simply a relative URL for where to send the form content. In your web.xml file, you mapped your servlet to the FormUploadFile URL so that's what you need to put in your ACTION attribute (or it might have to be "../FormUploadFile" since your jsp file is nested in a jsp folder). Play around with this and it might be helpful for you to find a book on basic HTML and java web app development. - Konstantin]]> Konstantin Komissarchik 2009-02-12T17:21:07-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[Hi, thank you for your answers. > Play around with this I did. Not such a nice play... :) >. OK. Here I have a more accurate example to submit to you. What is it supposed to do: - first thing: the user fills in a form with his name and first name. [Cf. FormNameFirstName.jsp] For example: name = FOO, first name = BAR. - second thing: those informations are transmitted to a servlet which adds a suffix to the name and first name. [Cf. ServletFormNameFirstName.java] For example: suffix for name = popol, suffix for first name = momol. (You get: FOOpopol and BARmomol). - third thing: the servlet sends the "augmented" name and first name to another (response) page which displays: Your name is ... [Cf. ResponseNameFirstName.jsp] For example: You name is BARmomol FOOpopol. ------------------------------------------------------------ ------------- Here is the architecture of the project: ------------------------------------------------------------ ------------- 1. -- SmallExampleWebProject 2. -- -- Deployment Descriptor: SmallExampleWebProject 3. -- -- Java Resources 4. -- -- -- servlets 5. -- -- -- -- ServletFormNameFirstName.java 6. -- -- -- Libraries 7. -- -- -- -- Apache Tomcat v6.0 [...] 8. -- -- build 9. -- -- WebContent 10. -- -- -- META-INF 11. -- -- -- WEB-INF 12. -- -- -- -- lib 13. -- -- -- -- web.xml 14. -- -- -- FormNameFirstName.jsp 15. 
-- -- -- ResponseNameFirstName.jsp ------------------------------------------------------------ ------------- Here is "FormNameFirstName>Form name, first name</TITLE> </HEAD> <BODY> <DIV STYLE="background-color: teal; color: white; font-weight: bold; padding: 5pt;"> Form name, first name </DIV> <FORM ACTION="servlets.ServletFormNameFirstName" METHOD="POST"> <TABLE> <TR> <TD> Name </TD> <TD> <INPUT NAME="input_name" VALUE="FOO" TYPE="text" SIZE="20" /> </TD> </TR> <TR> <TD> First name </TD> <TD> <INPUT NAME="input_first_name" VALUE="BAR" TYPE="text" SIZE="20" /> </TD> </TR> <TR> <TD></TD> <TD> <INPUT TYPE="submit" VALUE="Submit"> </TD> </TR> </TABLE> </FORM> </BODY> </HTML> ------------------------------------------------------------ ------------- Here is "ServletFormNameFirstName.java": ------------------------------------------------------------ ------------- package ServletFormNameFirstName extends HttpServlet { private static final long serialVersionUID = 1L; public ServletFormNameFirstName() { super(); } protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String strName = request.getParameter("input_name"); strName = strName + "popol"; String strFirstName = request.getParameter("input_first_name"); strFirstName = strFirstName + "momol"; System.out.println("strName = " + strName); System.out.println("strFirstName = " + strFirstName); request.setAttribute("attrName", strName); request.setAttribute("attrFirstName", strFirstName); getServletContext().getRequestDispatcher("/ResponseNameFirstName.jsp ").forward(request, response); } protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doGet(request, response); } } ------------------------------------------------------------ ------------- Here is "ResponseNameFirstName.jsp": ------------------------------------------------------------ ------------- <%@ page <% String strName = (String) request.getAttribute("attrName"); String strFirstName = (String) request.getAttribute("attrFirstName"); String s = new String("Your name is " + strFirstName + " " + strName + "."); %> <HTML> <HEAD> <META HTTP- <TITLE>Response form name, first name</TITLE> </HEAD> <BODY> <DIV STYLE="background-color: teal; color: white; font-weight: bold; padding: 5pt;"> Response form name, first name </DIV> <DIV> <%= s %> </DIV> </BODY> </HTML> ------------------------------------------------------------ ------------- And here is the "web.xml" file: ------------------------------------------------------------ ------------- <>/FormNameFirstName.jsp</url-pattern> </servlet-mapping> </web-app> Ok. I think everything we need has been reported here. So what I want to know is: 1. In "FormNameFirstName.jsp": <FORM ACTION="" METHOD="POST"> what to put in the "ACTION" field? 2. In "web.xml": - what to put in: <servlet-class></servlet-class> - what to put in: <url-pattern></url-pattern> The values above are not ok, the form data are not transmitted to the servlet. I get: Your name is nullmomol nullpopol. Please help. If you possibly know a documentation that would help, I'd appreciate. Regards, -- Lmhelp]]> lmhelp 2009-02-13T14:30:49-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[>>. Two points of practical advice: 1. It helps to target your questions to the most appropriate forum. This newsgroup is about Eclipse Web Tools. 
It is good for getting answers regarding the uses of this tooling, but not so good for getting answers to general HTML and Java web app development questions. Sun (as the source of the Java technology) is a good place to look for information regarding Java web app development. I suspect that you will find docs, articles and forums on the subject if you look around on. 2. Regardless of the forum, the most you can hope for is that people will give you tips and help point you in the right direction. Don't expect to be able to post your source code and have others fix the problems for you. People generally expect to be paid for that level of support. ;) - Konstantin]]> Konstantin Komissarchik 2009-02-13T22:33:39-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[> Don't expect to be able to post your source code and have others fix > the problems for you. Quite a lot of forums like to have to whole source code to figure out what the problems are. Moreover I perfectly know I may not have a specific answer posting on a forum: as you say, if I wanted such a thing I would pay someone for that. I only thought that details were lacking in my first post and I wanted to complement it. It is true that you are working for free when you help me but me too. And quite a lot of other people may benefit freely of our exchanges. It is true that you do not know me but when I find the answer to my problems I always post the solution on the forums that helped me. So we both work for free. I think other people helped you too in the past. This is the principle of the "forum approach". > the most you can hope for is that people will give you tips and help > point you in the right direction. I am not expecting anything more than that. A very nice person sent me a private message to give me a website reference. I haven't yet been able to examine it but I'll do it today and be sure that if I find the answer I'll post it here. -- Lmhelp]]> lmhelp 2009-02-16T08:58:18-00:00 Re: [Dynamic Web Project] JSP - Servlet - web.xml <![CDATA[Hi to everyone. Thanks to some help I got, I made my little stuff work. Here are the two modifications that have to be made in the code I posted previously: 1. In "FormNameFirstName.jsp" replace the "FORM" element with: ------------------------------------------------------------ ------------------------------------------------------------ ----- <FORM ACTION="ServletFormNameFirstName" METHOD="POST"> ------------------------------------------------------------ ------------------------------------------------------------ ----- 2. Replace the former "web.xml" with that one: ------------------------------------------------------------ ------------------------------------------------------------ ----- <>/ServletFormNameFirstName</url-pattern> </servlet-mapping> </web-app> ------------------------------------------------------------ ------------------------------------------------------------ ----- Like this, it works. -- Lmhelp]]> lmhelp 2009-02-16T11:07:36-00:00 | http://www.eclipse.org/forums/feed.php?mode=m&th=72846&basic=1 | CC-MAIN-2016-26 | refinedweb | 2,142 | 67.04 |
This morning I came into work and went through my usual 100 or so emails. One of the emails was from MSSQLTips.com, it was on how to monitor SQL Server Database mirroring with email alerts. By Alan Cranfield. While agree with Alan that every DBA should monitor their database mirroring with email alerts I disagreed with his method. He had the DBA create a job that was scheduled to run at some interval throughout the day. His job would query the sys.database_mirroring view. As DBAs we need to know immediately when something fails or changes. 5 minutes could be the difference between a quick fix and restoring a 500 GB db mirror.
So what would be a better way to monitor and alert a DBA when there is a change in the state of Database mirroring? I prefer to use Alerts for events. Event notifications can be created directly in the SQL Server Database Engine or by using the WMI Provider for Server Events. A DBA can specify which db mirroring event they wish to moitor. Here is a table of events to monitor for:
Now that we know the Event and the State here is how to add an Alert to notify you that the state of DB mirroring has changed.
USE [msdb] GO /****** Object: Alert [DBM State Change] Script Date: 10/15/2009 08:03:20 ******/ EXEC msdb.dbo.sp_add_alert @name=N'DBM State Change', @message_id=0, @severity=0, @enabled=1, @delay_between_responses=0, @include_event_description_in=1, @category_name=N'[Uncategorized]', @wmi_namespace=N'\.rootMicrosoftSqlServerServerEventsMSSQLSERVER', @wmi_query=N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE State = 6 ', @job_id=N'00000000-0000-0000-0000-000000000000' GO
This is an alert that I created on the principal server. I also have created a similar alert on the mirror server where I look for state = 5. These two alerts will notify me if the connection between the Principal and Mirror is lost due to network or some other failure.
To receive notification when this event happens it is simple to just create an operator and have the event email the operator if and when the event conditions are met.
What other Mirror Events should every DBA monitor? I find the unsent and unrestored log to be two very import events to receive notifications for. For those events just simply create a new event for the event ID in the table below and set you monitor threshold.
You can also script this by using sp_add_alert as follows:
USE [msdb] GO /****** Object: Alert [DB Mirroring Unsent Log Warning] Script Date: 10/15/2009 08:14:29 ******/ EXEC msdb.dbo.sp_add_alert @name=N'DB Mirroring Unsent Log Warning', @message_id=32042, @severity=0, @enabled=0, @delay_between_responses=0, @include_event_description_in=1, @category_name=N'[Uncategorized]', @job_id=N'00000000-0000-0000-0000-000000000000' GO
Good Luck and Happy monitoring! | https://blogs.lessthandot.com/index.php/datamgmt/dbadmin/how-to-monitor-database-mirroring/ | CC-MAIN-2021-21 | refinedweb | 464 | 63.09 |
I want a excel like table widget in tkinter for a gui I am writing. Do you have any suggestions?
Tktable is at least arguably the best option, if you need full table support. Briefly, the following example shows how to use it assuming you have it installed. The example is for python3, but for python2 you only need to change the import statement.
import tkinter as tk import tktable root = tk.Tk() table = tktable.Table(root, rows=10, cols=4) table.pack(side="top", fill="both", expand=True) root.mainloop()
Tktable can be difficult to install since there is no pip-installable package.
If all you really need is a grid of widgets for displaying and editing data, you can easily build a grid of entry or label widgets. For an example, see this answer to the question Python. GUI(input and output matrices)? | https://codedump.io/share/5w0jNgE3Szhw/1/which-widget-do-you-use-for-a-excel-like-table-in-tkinter | CC-MAIN-2018-26 | refinedweb | 145 | 69.58 |
The OLE DB provider interface provides access to a database through an OLE DB provider installed on your computer. Most OLE DB providers are supported, but those that use OLE DB Version 2.5 interfaces aren't supported. Some unsupported OLE DB interfaces include:
OLE DB provider for ODBC (MSDASQL)
OLE DB provider for Exchange (ExOLEDB)
OLE DB for Internet Publishing (MSDAIPP)
Table A-3 lists some commonly used OLE DB drivers.
All OLE DB types are contained in the System.Data.OleDb namespace (see Table A-4). For low-level information about OLE DB providers, you can refer to the OLE DB programmer's reference on MSDN at.
The OLE DB managed provider doesn't include any structures for OLE DB types. However, the OleDbDataReader does include additional methods that allow you to specify the data type when retrieving a column value. Table A-5 shows the mapping between OLE DB types and .NET framework types (although it doesn't include types used exclusively for stored procedure parameters). | http://etutorials.org/Programming/ado+net/Part+IV+Appendixes/Appendix+A.+ADO.NET+Providers/A.2+The+OLE+DB+Provider/ | CC-MAIN-2017-22 | refinedweb | 169 | 65.32 |
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
from sympy import * init_printing()
var('x y z a')
Use the function solve to resolve equations (the right hand side is always 0).
solve(x**2 - a, x)
You can also solve inequations. You may need to specify the domain of your variables. Here, we tell SymPy that x is a real variable.
x = Symbol('x') solve_univariate_inequality(x**2 > 4, x)
This function also accepts systems of equations (here a linear system).
solve([x + 2*y + 1, x - 3*y - 2], x, y)
Non-linear systems are also supported.
solve([x**2 + y**2 - 1, x**2 - y**2 - S(1)/2], x, y)
Singular linear systems can also be solved (here, there are infinitely many equations because the two equations are colinear).
solve([x + 2*y + 1, -x - 2*y - 1], x, y)
Now, let's solve a linear system using matrices with symbolic variables.
var('a b c d u v')
We create the augmented matrix, which is the horizontal concatenation of the system's matrix with the linear coefficients, and the right-hand side vector.
M = Matrix([[a, b, u], [c, d, v]]); M
solve_linear_system(M, x, y)
This system needs to be non-singular to have a unique solution, which is equivalent to say that the determinant of the system's matrix needs to be non-zero (otherwise the denominators in the fractions above are equal to zero).
det(M[:2,:2])
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter15_symbolic/02_solvers.ipynb | CC-MAIN-2017-47 | refinedweb | 290 | 59.94 |
I have an application which parses XML documents with some attribute values being QNames (prefix:name). I would like to be able to know which URI corresponds to the prefix. I've done this by pushing/popping <prefix, URI> pairs on/off a stack in response to calls to the StartNamespaceDeclHandler/EndNamespaceDeclHandler functions. Is it safe to just push the pointers instead of duplicating the prefix and namespace before pushing them on the stack? i.e., does expat guanartee that the pointers passed to StartNamespaceDeclHandler remain accessible until the corresponding EndNamespaceDecHandler call is made? It seems logical to assume that expat maintains a stack of <prefix, URI> pairs so that it would be able to pass a URI,sep, local name to XML_StartElementHandler. If this is correct, can anyone tell me which expat source file contains the implementaion of the stack? Does expat check whether the URI string is syntactically correct? (I don't want this to be done twice) Thanks in advance. _______________________________________________ Join Excite! - The most personalized portal on the Web! | https://mail.python.org/pipermail/expat-discuss/2003-June/001078.html | CC-MAIN-2014-15 | refinedweb | 173 | 64.71 |
FxFxLHReader derives from the LesHouchesReader base class to be used for objects which read event files from matrix element generators. More...
#include <FxFxLHReader.h>
FxFxLHReader derives from the LesHouchesReader base class to be used for objects which read event files from matrix element generators.
It extends LesHouchesReader 44 of file FxFxLHReader.h.
Copy-constructor.
Note that a file which is opened in the object copied from will have to be reopened in this.
Make a simple clone of this object.
Implements ThePEG::InterfacedBase..
Make a clone of this object, possibly modifying the cloned object to make it sane.
Reimplemented from ThePEG::InterfacedBase.
Calls readEvent() or uncacheEvent() to read information into the LesHouches common block variables.
This function is called by the LesHouchesEventHandler if this reader has been selectod to produce an event.
Reimplemented from ThePEG::LesHouchesReader.
Initialize.
This function is called by the LesHouchesEventHandler to which this object is assigned.
Reimplemented from ThePEG::LesHouchesReader.
Open a file with events.
Derived classes should overwrite it and first calling it before reading in the run information into the corresponding protected variables.
Implements ThePEG::LesHouchesReader.
Function used to read in object persistently.
Function used to write out object persistently.
Calls doReadEvent() and performs pre-defined reweightings.
A sub-class overrides this function it must make sure that the corresponding reweightings are done.
Reimplemented from ThePEG::LesHouchesReader.
Scan the file or stream to obtain information about cross section weights and particles etc.
This function should fill the variables corresponding to the /HEPRUP/ common block. The function returns the number of events scanned.
Reimplemented from ThePEG::LesHouchesReader.
Skip n events.
Used by LesHouchesEventHandler to make sure that a file is scanned an even number of times in case the events are not ramdomly distributed in the file.
Reimplemented from ThePEG::LesHouchesReader.
If LHF.
Map of attributes (name-value pairs) found in the last event tag.
Definition at line 251 of file FxFxLHReader.h.
If LHF.
Additional comments found with the last read event.
Definition at line 245 of file FxFxLHReader.h.
If LHF.
All lines from the header block.
Definition at line 229 of file FxFxLHReader.h.
If LHF.
Map of attributes (name-value pairs) found in the init tag.
Definition at line 240 of file FxFxLHReader.h.
If LHF.
Additional comments found in the init block.
Definition at line 234 of file FxFxLHReader.h.
If the file is a standard Les Houches formatted file (LHF) this is its version number.
If empty, this is not a Les Houches formatted file
Definition at line 218 of file FxFxLHReader.h.
If LHF.
All lines (since the last open() or readEvent()) outside the header, init and event tags.
Definition at line 224 of file FxFxLHReader.h. | http://herwig.hepforge.org/doxygen/classHerwig_1_1FxFxLHReader.html | CC-MAIN-2018-05 | refinedweb | 449 | 60.61 |
Kevin Atkinson wrote: > I will not apply the patch as is as there are some changes I don't > approve of. That's ok, I expected as much:) > 1) > I will accept the changes to deal with the fact that the sun compiler > is to stupid to know that abort doesn't return as those are harmless. Well, yes CC is stupid, it doesn't check sprintf format strings either, but I'm told it seems to produce better code for SPARC, so here we are. > 2) > In may cases you changed: > > String val = config.retrieve("key"); > to > String val = String(config.retrieve("key")); > > what is the error you are getting without the change? There might be > a better way to solve the problem. There is a constructor for String that takes an PosibError (what things return), but there is also an operator= on String that takes a PosibError. Just the constructor ought to be enough, the = operator is redundant IMHO. Deleting String::operator= (PosibError...) would probably be the easiest and most correct solution, but I was a bit nervous about it, so I chose to use the constructor explicitly. What do you think? > Same for > -static void display_menu(O * out, const Choices * choices, int > width) { +static void display_menu(O * out, const StackPtr<Choices> > &choices, int +width) The sun compiler bitched about incompatible types, 'Choices *' is not the same as 'StackPtr<Choices>', which is what is needed in the function. I guess StackPtr<Mumble> just happens to resolve to the same as Mumble * with GNUs STL implementation, but assuming it always does is bad form or at least not something that you can assume all compilers understand. > 3) > The C++ standard requires "friend class HashTable", "friend HashTable" > is not valid C++ and will not compile with gcc Hmm, that wasn't good. With: friend class HashTable; I get: "vector_hash.hpp", line 243: Error: A typedef name cannot be used in an elaborated type specifier.. With: friend class Parms::HashTable; I get: Error: aspeller_default_readonly_ws::ReadOnlyWS::WordLookupParms::HashTable is not defined. How about: #ifdef __SUNPRO_CC // Fix for deficient sun compilers: friend HashTable #else friend class HashTable #endif > 4) > What is the reason for? > +#if (1) > + FStream CIN(stdin, false); > + FStream COUT(stdout, false); > + FStream CERR(stderr, false); > +#else > +#include "iostream.hpp" It seems the symbols from iostream.cpp were not available when linking the application, maybe because of defective name mangling, maybe because of scoping, it seemed a lot easier to simply define the vairables there rather than battle with the build system to figure out what went wrong. As far as I know libaspell.so is supposed to provide a C interface, not a C++ one, right? In that case referencing C++ symbols in it is wrong, even if g++ lets you get away with it. This is what happens if I try to link it with the default code: CC -g -o .libs/aspell aspell.o check_funs.o checker_string.o ../lib/.libs/libaspell.so -lcurses -R/home/ffr/projects/spell/test/lib ild: (undefined symbol) acommon::CERR -- referenced in the text segment of aspell.o [Hint: static member acommon::CERR must be defined in the program] ild: (undefined symbol) acommon::CIN -- referenced in the text segment of aspell.o [Hint: static member acommon::CIN must be defined in the program] ild: (undefined symbol) acommon::COUT -- referenced in the text segment of aspell.o [Hint: static member acommon::COUT must be defined in the program] > 5) > In parm_string you comment out one of my compressions due to a > conflict with the STL. 
I have a configure test to deal with this > problem. It will define the macro "REL_OPS_POLLUTION". Check that > the macro is defined in settings.h and if it is use an ifndef around > the comparasion. if the macro is not defined please let me know. Ah, right, I just checked and it isn't defined. The problem isn't that it's impossible to have your own == operator, the problem is that stl can make an std::string from a char * which can be made from a ParamString automagicly, that causes a conflict between the two == operators. ... Which IMHO is stupid of CC as the sane thing to do is to select the "nearest" operator for the job. As the current configure test doesn't define REL_OPS_POLLUTION on my system it might be wrong to expand the test to catch this case as well. I worry about doing what I do now and using the std::string == operator as it might not work the same way and it may be slower, but I havn't seen any problems with the approach while running aspell. -- Flemming Frandsen / Systems Designer | http://lists.gnu.org/archive/html/aspell-devel/2004-01/msg00009.html | CC-MAIN-2013-48 | refinedweb | 779 | 70.53 |
Opened 7 years ago
Closed 5 years ago
#10706 closed Bug (fixed)
Incorrect error from manage.py sql when app fails to load
Description
./manage.py sql stomp Error: App with label stomp could not be found. Are you sure your INSTALLED_APPS setting is correct?
This is an incorrect error when the app is found but fails to import. The actual error is never displayed, which is hard to diagnose. If I add some diagnostics in django/db/models/loading.py load_app to print the exception, the problem becomes obvious:
./manage.py sqlall stomp cannot import name TestModel Error: App with label stomp could not be found. Are you sure your INSTALLED_APPS setting is correct?
It should probably just pass the exception up and show a trace. I've attached a patch to do this, but it causes errors when running some tests:
Error while importing humanize: File "./runtests.py", line 134, in django_tests mod = load_app(model_label) File "/home/glenn/django/django/db/models/loading.py", line 74, in load_app models = import_module('.models', app_name) File "/home/glenn/django/django/utils/importlib.py", line 35, in import_module __import__(name) ImportError: No module named models
for humanize, syndication, sitemaps, databrowse, admindocs and localflavor. Spying on runtests.py django_tests, these seem to be failing silently anyway; this is just making the "Error while importing" exception handler actually get called, where before they were silent. I'm not sure if there's another bug in there that this is exposing.
(Watch out: there's both eg. regressiontests.humanize and django.contrib.humanize for a few of those tests, and usually one of them works and the other doesn't.)
Attachments (2)
Change History (17)
Changed 7 years ago by Glenn
comment:1 Changed 7 years ago by Glenn
- Has patch unset
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 7 years ago by nwelch
Django tries to import your app, and catches all ImportErrors when trying to do so, reporting them to the user as the app missing. So basically if you're trying to diagnose an unrelated ImportError in your project, it is incredibly hard to track it down without hacking django apart or some other contortion. I'm amazed so few people have reported a problem with this; it's very easy to run into when trying to use external Python libraries and whatnot.
I found the offending code in core/management/base.py in AppCommand.handle(). I think perhaps the component should be changed to core framework, although I'm a Django newbie so I'm not sure.
Changed 7 years ago by seveas
Make the error message clearer
comment:3 Changed 7 years ago by seveas
- Component changed from Database layer (models, ORM) to django-admin.py
- Has patch set
The easiest and least intrusive solution is to make the error message say something about both possible causes of app load failure. I've been bitten by this quite a few times before and closed a bunch of duplicates of this ticket yesterday as well. Attached patch changes the error message and the testcases that explicitely test for this message.
comment:4 Changed 7 years ago by Glenn
That fixes the message being misleading, at least, but errors that say "it caused an error" are the worst sort. It really needs to say what the actual error is, and show a backtrace.
comment:5 Changed 7 years ago by anibal
OMG :(
my code was:
..auth.modls import User #note the missing 'e'
.. and I lost 1 hour, lesson learned: fix my ide | pylint allways
comment:6 Changed 7 years ago by Alex
- Triage Stage changed from Unreviewed to Accepted
comment:7 Changed 7 years ago by martin maney
- Patch needs improvement set
comment:8 Changed 7 years ago by cwb
This is easy to bump into and very puzzling (though with the beneficial side-effect that I properly dug into sys.path, relative imports and the like). For me the fix (to loading.py) didn't work with admindocs enabled (not just the tests) because that doesn't have a models module. Still helpful though -- got me out of my pickle.
comment:9 Changed 7 years ago by xiongchiamiov
- Cc xiong.chiamiov@… added
comment:10 Changed 7 years ago by lawgon <lawgon@…>
comment:11 Changed 6 years ago by vincenth
comment:12 Changed 6 years ago by santa4nt
I ran across this unhelpful error message as well. At the very least the --traceback option should display the traceback when requested. Executing ./manage.py sql --traceback <app> doesn't say what it advertises.
comment:13 Changed 6 years ago by Annatar
I was a victim to this error message too. I reached desperation trying to figure out where did I do wrong in my settings.py file or in the folder where the new app resided. But it seems that there wasn't anything wrong with settings.py or the app folder in that respect. it was just a typing error within the modules.py file of the app.
it was something like:
from django.db import models
from django.contrib.auth.models import User
from Projectname.another_app.models import SomeModel #for foreign key usage
...and the new models defined here...
And instead of Projectname I should have typed ProjectName (silly error, but gave me quite a headache, because it's obscure)
Now I'm puzzled why Django only throws the correct errors only from the model classes and not from the starting declarations too. An error there tells me that the whole app is missing from settings.py, which is not true.
comment:14 Changed 5 years ago by SmileyChris
- Severity set to Normal
- Type set to Bug
comment:15 Changed 5 years ago by claudep
- Easy pickings unset
- Resolution set to fixed
- Status changed from new to closed
- UI/UX unset
I'm quite sure this should be fixed by now. Import errors in app models.py are not swallowed any more. Reopen with precise instructions about how to reproduce if you can.
This causes other problems on app load later on. I'm not sure what the right fix is here; I'll look further when I have more time. | https://code.djangoproject.com/ticket/10706 | CC-MAIN-2016-30 | refinedweb | 1,037 | 65.32 |
You can subscribe to this list here.
Showing
1
results of 1
To return to the subject: Is there anyone still asserting that when
namespace processing is turned on, a non-qualified element's local name is
_not_ equal to its Qname, with both being the name as entered? I've seen
several folks say it _isn't_ what they expect, and the Namespaces grammar
bears that out:
[6] QName ::= (Prefix ':')?
LocalPart
Are there any remaining sticking points, or have we reached consensus on
that?
The other items being discussed seem to be related to the fact that both
namespace-aware and -unaware views of a document are possible. That's
somewhat orthogonal to the above point.
Personal reactions:
I agree with the assertion that when namespace processing is turned off,
localname is not defined by the namespace standard, and applications should
be consistant: either turn the feature on, or don't look at that value.
I also agree that -- given how namespaces were patched into the XML
grammar, Namespace productions 9-12 -- the value which Namespaces called
QName *IS* the value which XML 1.0 called Name, and that QName is the right
place to return the full/raw/possibly-qualified name in either mode.
Re whether the localname should be best-approximation or null when running
in the namespace-unaware mode... Theoretically this shouldn't matter since
this combination is a programming bug; see first paragraph. Since it _is_ a
bug, the single greatest advantage of returning empty/null is that it will
probably cause code to break, thus advising the developer that they forgot
to turn on namespace processing.
The DOM is a slightly different case because it can intermix
namespace-aware and namespace-unaware nodes in a single document. This came
out of a requirement to let namespace-unaware applications operate on
namespace-aware documents, at the cost of not being able to safely use the
namespace-aware view thereafter. It was a necessary kluge, but it IS a
kluge and I would strongly recommend against SAX repeating that decision.
______________________________________
Joe Kesselman / IBM Research | http://sourceforge.net/p/sax/mailman/sax-devel/?viewmonth=200206&viewday=4 | CC-MAIN-2014-23 | refinedweb | 349 | 59.13 |
Calling inner classes?
Rob Brew
Ranch Hand
Joined: Jun 23, 2011
Posts: 99
posted
Jul 09, 2011 10:01:53
0
I'm following the oracle training guide as best i can, trying to play with inner classes though and i'm lost. I'm having problems at line 47. how do i call and use innner classes?
Thanks for all your help guys, it means a lot.
vehicle.java
abstract class vehicle { int date_month = 11; String colour = ""; int price = 0; static int id = 0 ; public vehicle() { colour = "blue"; id++; } public vehicle(int p) { price = p; id++; } public void set_colour (String c) { colour = c; } String get_colour() { return colour; } void set_price(int p) { price = p; } int get_price() { return price; } static int vehicleId() { return id; } public static void main (String args[]) { car o = new car(1445, "blue"); System.out.println(o.toString()); System.out.println(o.get_price()); System.out.println(o.get_colour()); taxi t = new taxi(); System.out.println("Number of cars = " + vehicle.vehicleId()); System.out.println("Rate "+ (new taxi().getRate(5))); } } class car extends vehicle { boolean in_showroom = false; int mileage = 0; int price; public car() { super(); } public car(int p){ super(p); taxi.Speedo speed = new taxi.Speedo(); } public car(int p, String s) { super(p); super.colour="green"; System.out.println(super.colour); } void drive() { System.out.println("beep beep"); } boolean test_drive() { if (in_showroom == true) { System.out.println("nice driving"); return true; } else { return false; } } }
taxi.java
public class taxi extends vehicle implements cab { boolean booked; int month_serviced; int rate = 5; public taxi () { booked = false; month_serviced=4; this.book(); System.out.println("Taxi booked"); }; public taxi (int r) { rate = r; System.out.println("booked at " + rate + "per hour"); } public int getRate (int hours) { return (rate * hours); } public boolean book() { if (booked == false) { booked = true; System.out.println("where to governor?"); return booked; } else { System.out.println("already booked"); return booked; } } public void service() { System.out.println("MOT passed"); } class Speedo { public int speed; public int getSpeed() { return speed; } } }
cab.java
public interface cab { public boolean book(); public void service(); }
marc weber
Sheriff
Joined: Aug 31, 2004
Posts: 11343
I like...
posted
Jul 09, 2011 17:11:19
0
Rob Brew wrote:
... I'm having problems at line 47. how do i call and use innner classes? ...
I think your problem is actually at line 64.
An instance of an inner class (Speedo) always needs to be associated with an instance of the enclosing class (taxi). This means you need an instance of taxi before you can create an instance of Speedo. So instead of just...
taxi.Speedo speed = taxi.new Speedo();
...create a new instance of taxi first, then use that instance to create a new Speedo...
taxi.Speedo speed = new taxi().new Speedo();
Edit: Note that if you already have an instance of the enclosing class, then you could simply use that. For example...
taxi myTaxi = new taxi(); taxi.Speedo mySpeedo = myTaxi.new Speedo();
"We're kind of on the level of crossword puzzle writers... And no one ever goes to them and gives them an award."
~Joe Strummer
sscce.org
I agree. Here's the link:
subject: Calling inner classes?
Similar Threads
overloading - basics
help needed with inheritance
static int not incrementing, constructor not called.
Dynamic Method Dispatch
variables in inteface
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/544761/java/java/Calling-classes | CC-MAIN-2014-35 | refinedweb | 560 | 60.21 |
empty, head and tail
if..then..else
let..be..in
declare: declaring construct
define: naming construct
if..then..else: conditional construct
let: block construct
fix: fixpoint operator
epsilon: interpreter
epsilonc: compiler
eamlas: assembler
eamld: linker
eamx2c: eAM executable to C compiler
eamx2scheme: eAM executable to Scheme compiler
epsilonlex: scanner generator
epsilonyacc: parser generator
epsilonc: compiler
eamlas: assembler
eamold: linker
eamo2c: bytecode-to-C translator
epsilonlex: scanner generator
epsilonyacc: parser generator
integer, character and boolean
float
addi $a $b $c
addi_i $a $b n
andi $a $b $c
divi $a $b $c
divi_i $a $b n
f_divi $a $b $c
f_modi $a $b $c
ldci $r n
modi $a $b $c
modi_i $a $b n
muli $a $b $c
muli_i $a $b n
nxori $a $b $c
ori $a $b $c
s_f_divi
s_addi
s_addi_i n
s_andi
s_divi
s_divi_i n
s_eqi
s_gti
s_gtei
s_lti
s_ltei
s_modi
s_modi_i n
s_muli
s_muli_i n
s_noti
s_neqi
s_nxori
s_ori
s_subi
s_subi_i n
s_xori
subi $a $b $c
subi_i $a $b n
swp $a $b
xori $a $b $c
mka $a $b
mka_i $a n
s_mka
s_mka_i
gc
hlt n
nop
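Read together, these mnemonics describe a small register-based assembly language for the eAM. As a purely illustrative sketch of how they might be combined, a fragment computing 2 + 3 - 1 could look as follows; the destination-first operand order, the meaning of the immediate operands, the behaviour of hlt and the ";" comment syntax are all assumptions of this example, not statements from the manual:

  ldci $1 2        ; assumed: load the constant 2 into register $1
  ldci $2 3        ; assumed: load the constant 3 into register $2
  addi $3 $1 $2    ; assumed: $3 := $1 + $2
  subi_i $3 $3 1   ; assumed: $3 := $3 - 1, giving 4
  hlt 0            ; assumed: halt the machine

Only the mnemonics and operand shapes are taken from the list above; the semantics ascribed in the comments are guesses meant to convey the flavour of the instruction set.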
GNU epsilon is a functional language implementation. (a) The FSF's Front-Cover Text is: A GNU Manual. (b) The FSF's Back-Cover Text is: You have freedom to copy and modify this GNU Manual, like GNU software.
A copy of the license is included in the section entitled "GNU Free Documentation License".

This book is the comprehensive documentation of the epsilon functional programming language, library and tools.
In this book "epsilon" is always written in lower case, even as a lower-case "e" in acronyms. This convention, like the name of the language itself, comes from the idea that epsilon is intended to be a simple language with a uniform syntax, easy to learn; this, however, does not prevent it from being powerful and expressive.
epsilon is intended to be a production language, useful for writing real applications. It has an elegant type system, compositional semantics and referential transparency; it will also be easy to write compilers and interpreters in epsilon. But all these features are intended to help in writing applications, and were not implemented only for the sake of creating a beautiful conceptual model. The default library also matters, and it will be worked on as much as possible.
epsilon is a young project, and many things still remain to be completed or rewritten. Help is welcome.
This book is not finished yet. For any comment, suggestion or correction you can send us a message using the public mailing list bug-epsilon@gnu.org. Documentation problems are not unlike bugs, and any input from users aiming to improve the quality of this book is precious.
Please understand that Luca Saiu is not a native English speaker, so his English is surely far from perfect; any report of a misspelling, or of any mistake in general, is welcome. You can use bug-epsilon@gnu.org for this as well.
If you need help using GNU epsilon you can write to help-epsilon@gnu.org.
In this book we do not assume any previous knowledge of functional programming, nor of programming at all; nonetheless, some previous programming experience will make reading easier.
The functional programming tutorial explains everything that is needed to start, and the following chapters introduce concepts as they are needed. There should be no forward references, so this book can mostly be read sequentially, from beginning to end. Some sections, however, are primarily meant as reference documentation, and you can safely just skim them on a first reading and review them more carefully when you actually need them. We are referring, first of all, to the chapters about Library and Internals.
The epsilon language was born at the end of 2001 as a small programming project (hence its name: the Greek letter ε is used in mathematics to indicate small constants). That first implementation was fully written in C (with flex and Bison), including the compiler. The code could only be executed via a virtual machine written in C for that purpose, the LVM. The LVM managed memory with a reference counter, later replaced by the Boehm-Demers garbage collector.
In the winter of 2002, while playing with the language and adding new features, the author Luca Saiu became more and more impressed by the power and expressivity of the functional paradigm and decided to make epsilon a "real" language: many important features were added at that time, including type inference, polymorphism, modules and abstract types.
Additions from the spring and summer of 2002 are garbage collector support, concrete types and exceptions. In Autumn 2002 the new abstract machine, the epsilon Abstract Machine or eAM, was started. The eAM works by generating fast C code from the epsilon Abstract Machine Language (eAML), an intermediate code representation.
In Autumn 2002 Matteo Golfarini joined the project. In this period the
eAM,
epsilonlex and the purely functional I/O system were developed.
On 27th December 2002 epsilon was officially approved as part of the GNU Project2. Richard Stallman asked that epsilon also be able to generate Scheme as target code, so that epsilon can be used as an extension language for applications supporting GNU Guile3. Scheme generation from eAML is still at an experimental stage, but works.
The most recent important additions are the peephole optimizer and the eAM garbage collector. The collector works, is fast and reliable, but is not yet incremental and could be made parallel with relatively little work.
The eAM was essentially completed in Autumn 2003. The
epsilonlex scanner-generator worked and was usable, and the
epsilonyacc parser-generator was planned as the next
step.
Some new features introduced in late 2003 are C-libraries (a
clean and easy way to extend the eAM with compiled,
dynamically-loaded C code), support for graphics and a library to
handle S-expressions;
epsilonlex was rewritten from scratch twice. The third
implementation is much cleaner and faster than the previous ones. It
still lacks the frontend, but the backend works very well.
epsilonyacc was initially written for SLR grammars; it
worked, but the author was not satisfied with the implementation, so
he started a new rewrite from scratch. This new version, supporting
canonical LR(1) grammars, is much better than the first one, and is
close to being finished.
The author now plans to push the language towards the direction of Lisp, allowing runtime generation and execution of epsilon code, but retaining type-safety and the functional style. This will be the feature making the epsilon language really unique.
On 20th January 2004 around 11pm
epsilonyacc
bootstrapped4 for the first time,
followed by
epsilonlex on 24th January, at 2:30am.
The language was influenced by ML, Haskell and Lisp, and in a minor way by the author's favourite imperative languages: Ada, Java, Python, C++, Smalltalk.
This chapter introduces functional programming, not assuming any previous programming experience. If you already know functional programming you can just skim it.
The functional paradigm is a very high-level programming style. "High-level" means that you program in an abstract way, "far" from the machine details and "near" your human way of thinking.
With functional programming you can safely ignore low-level details such as allocating and freeing memory; there is no need to use pointers or references, and no need to know the internal representation of data structures. And don't worry if you don't understand the above concepts: you will simply have no need of any such complication when using a functional language like epsilon.
Simplifying a bit, a program in a functional language is an
expression, i.e. a piece of code which computes some
value, and writes it back to you. In fact you can also use a
functional language as a desk calculator. You can simply write
2 + 3 - 1, and get
4 as result.
Of course you can also do much more complex things: you can write a program playing chess, or drawing graphics. You will even be able to write programs which generate other programs and execute them. Reading this book you will learn, among other things, why and when this is useful.
For understanding the principles of functional programming you need to understand some very basic mathematical concepts. No advanced algebra or analysis is needed, and this presentation will be informal.
A basic concept involved in most functional languages, including epsilon, is the idea of set. A set is a collection of homogeneous objects, such as numbers, words, or even real-world objects like people, houses, books. You can represent any object you can imagine in some way; the one thing you must remember is that a set is homogeneous: you choose some related objects to represent, and you can think of a set containing all of them.
A common example of a set is the set of natural numbers, written as N . It contains all integer numbers starting from zero: {0, 1, 2,...}
N contains an infinite number of elements.
The set of integer numbers contains all natural numbers and also negative numbers: {..., -3, -2, -1, 0, 1, 2, 3, ...}
You can find a representation for any object you can think of; for example, say you are interested in representing your collection of books (for brevity let's assume you only have three):
{Tom Sawyer, Macbeth, Ulysses}
This latest set is finite.
In a programming language, when a given object a belongs to a
set A, you say that a has type A. "Has
type" is commonly written as a colon (
:). For example, you can write
"
-27 : integer", or "
Ulysses : book". By
convention, sets have plural names but types have singular
names: you write
0 : natural, and not
0 : naturals.
A function is a relation between some elements of a set A and some elements of a set B5, with one constraint: for any element a belonging to A, the function must associate it with at most one element b of B.
If the function f is between the set A and the set B you
can say that f maps A into B or that f is a function
from A to B, and write "
f : A → B".
It is not a coincidence that we used the "
:" operator; the
function f is itself an element of a set, i.e. has a type: the set is
the set of functions mapping A into B.
Functions can be applied, i.e. they can be given an object of type A (called argument or parameter6); when functions are applied they compute some value of type B as result, and they finally return it.
For example, the successor7 succ is a function which maps the
integer set into the integer set itself. You can write
"
succ : integer → integer", and indeed succ belongs
to the set of functions from integer to integer.
Let's see an example of application: "
succ 10"
gives as result
11
(you can write "
succ : 10 |→ 11", or
"
succ 10 = 11").
For applying a function, just write its
argument after it. That's all.
If a function is undefined on one or more elements, we say it is partial, and if a is an element which f is undefined on we write "f(a) = _|_ ", or "f : a |→ _|_ "; read "_|_ " as "bottom". Partial functions are very common and useful.
Another example: let g be the function which associates a book with
its author (for example, it maps Ulysses into James Joyce). g
is from book to author, so we can write
"
g : book → author". Note that the constraint above
compels us to associate one book with at most one author; we
cannot use g if we want to describe the relationship between a book
and its authors when there is more than one.
How could we solve this problem? Quite easy: let's use another
function instead of g, say h, mapping the set of books
into the set of sets of authors. For example, h maps
The Capital into the set {Marx, Engels}: the element
is mapped into only one element (even if this single element is
a set containing two elements), so the constraint is respected.
For the type, we can write "
h : book → set of authors".
Let's get back to the succ example. How could we define succ?
A common way of defining functions is the lambda-notation8: we can define the successor as λ n . n + 1. This means "if we call n the argument of the function, then the value which the function computes is n + 1".
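For example, applying this definition to 41: (λ n . n + 1) 41 reduces to 41 + 1, i.e. 42.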
One more example: let's define the reverse_number function9; very simple: the definition of reverse_number is λ x . 1/x. Note also that reverse_number is a partial function, since it is undefined on 0.
How can we define a function taking more than one argument? In lambda-notation you can simply write the arguments sequentially, between the λ and the .; for example, (say this function is called plus) λ x y . x + y. We can say that plus is a function with two arguments, or that plus is a function of arity 2.
There is another way to see the question: we could define plus as λ x . λ y . x + y; with this definition plus is a function which takes a parameter named x and returns another function which takes one parameter named y and returns x + y. This way of defining functions is called currying10, and we can say that with this new definition plus is curried.
Note that with currying we can only use functions with one argument, without losing generality: λ x . λ y . x + y is a function taking only one argument, x; the object returned by the function is another function, also taking only one argument, y.
Another advantage of currying is the possibility of partial application: for example we can apply plus to only one argument: (λ x . λ y . x + y) 7 returns the function λ y . 7 + y (it's nothing so strange, just a function which takes an argument and returns it incremented by 7). Of course we can also pass two arguments:
(λ x . λ y . x + y) 7 3
returns 10, as expected. Just pay attention to the type:
plus : integer →
(integer →
integer)
plus is a function which takes an integer and returns a function which takes another integer and finally returns a third integer.
Curried functions are extensively used in epsilon.
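As a small sketch of what a curried function looks like in epsilon itself (the concrete syntax, with the backslash standing for λ, is introduced in the next chapter; the behaviour described here is our assumption about what the interpreter should do):

define plus = \ x . \ y . x + y;
define plus7 = plus 7;
plus7 3;

The last query should evaluate to 10, since plus7 names the partial application of plus to 7.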
Let's define a more complex function, the factorial11 function fact.
A way to see the factorial is this: if the argument (we call it n) is 0 then the result is 1, else the result is n times the factorial of (n - 1). You should convince yourself that this definition is correct: for example
5! = 5 * 4! = 5 * 4 * 3! = 5 * 4 * 3 * 2! = 5 * 4 * 3 * 2 * 1! = 5 * 4 * 3 * 2 * 1 * 0! = 5 * 4 * 3 * 2 * 1 * 1
Ok, let's write it in a more formal way: the definition of fact is
\ n . if n = 0 then 1 else n * (fact (n - 1))
Here is the main point: while defining fact we used fact itself. The technique we used, defining a function using itself, is called recursion.
Recursion is a very powerful tool; you can define many important and useful recursive functions, from simple ones such as fact to very complex ones.
Another simple example: the identity12 function id, restricted to map integers to integers.
Let's define id as a recursive function. The idea is this: call the argument n; if n is zero then the result is zero, else the result is one plus (id (n - 1)).
More formally, id is
\ n . if n = 0 then 0 else 1 + (id (n - 1))
You have surely noticed some similarity with fact. Indeed this pattern is quite common in recursive definitions, even if it is not the only one.
Before going on with recursion we are going to make a little digression, introducing a fundamental data structure (a data structure is an object made of other objects): the list.
A list is a sequence of objects of the same type. In lists order
does matter: for example
[1;2;3] is different from
[2;1;3].
Formal definition: a list can be the empty list (written
[]), or the cons13 of an object a and a list L (written
a::L).
Intuitively speaking, consing means adding one element
before a list: to obtain the list
[17;-2;32], for example, you can
cons
17 and
[-2;32], writing
17::[-2;32].
Beware of the types: you can't, for example, cons a book and a list of
integers; you can only cons a book and a list of books (and obtain
another list of books), or an integer and a list of integers (and
obtain another list of integers). The empty list
[] poses no type
problems: you can see it as a list of integers, of books, or of any
type you need in a given moment14.
An example: you can cons 1 to the empty list:
1::[]
then you may cons 2 to the list you built:
2::1::[]
The list you obtained,
2::1::[], can also be written as
[2;1], and the previous one
1::[] can also be written as
[1]; they are commodity abbreviations.
A final note: re-read the definition "a list can be the empty list, or the cons of an object a and a list L". You may have noticed that we defined lists using lists: it's a recursive definition.
empty, head and tail
Other than cons there are three operators for working on lists: they are called empty, head and tail15. We are now going to describe them in some detail.
empty : (list of τ1 ) → boolean
You can read τ1
as "any type"16. This is the first time you see the
boolean
type; it is a very simple yet important type: the set of
booleans is the set containing only the two values
true and false.
empty, given a list L as parameter, returns true
only if L is the empty list
[]; if L is not empty
then
empty returns false.
Two examples:
empty []
returns
true;
empty [1;2] returns
false.
head : (list of τ1 ) → τ1
head, given a list L as parameter, returns the
first element of L. It is an error to apply
head to the
empty list17. For example,
head [-2;450;0;3] returns
-2.
Notice that head is a partial function:
head :
[] |→ _|_ .
tail : (list of τ1 ) → list of τ1
tail, given a list L as parameter, returns the whole
list without the first element. It is an error to apply
tail to the empty list18.
For example
tail [1;2;3] returns
[2;3];
tail [17]
returns
[] (don't forget that
[17] is the same as
17::[]).
tail is also partial: tail :
[] |→ _|_ .
Always pay attention to types:
head takes a list of objects of some
type τ1
and returns an object of the same type
τ1
;
tail takes a list of objects of some type τ1
and
returns another list of objects of the same type
τ1
. This is very intuitive: for example, if you have a list of
numbers and extract its first element with
head, you expect to
find a number, and not something different.
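As a quick sketch, these operators could be exercised as interpreter queries (the interpreter is introduced in the next chapter), assuming the library exposes them under the names empty, head and tail used in this tutorial:

empty [];
head [-2; 450; 0; 3];
tail [1; 2; 3];

The expected results are true, -2 and [2; 3], respectively.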
Now we are going to illustrate some important concepts about recursion, analyzing a few noteworthy functions in all their important aspects.
In this section the concept of reduction will be used for the first time. We say that an expression E "reduces to" an expression F, and we write
E ⇒ F
when computing E leads to computing F in a single computation step19. For example
3 + (7 - 2) ⇒ 3 + 5 ⇒ 8.
Reductions are a useful means to express computations.
We can easily compute the first element of a list using head,
but say you want to know which is the last element of a list;
for example, if we pass
["a"; "b"; "c"] to the function (we
call it last), we expect to be given back as result
"c".
Always start thinking of the type: last takes as its parameter a list of objects, each having some type τ1 , and returns a single object of the same type τ1 ; the returned object belongs to the list, so it must have the same type as the elements of the list.
last : (list of τ1
) →
τ1
This is a definition of last:
\ x . if empty (tail x) then head x else last (tail x)
We call x the parameter.
There are two cases:
In this case the first element of x is also the last element of
the list: we return
head x.
The last element of x is the last element of its tail:
we return
last (tail x).
Pay attention to the second case: if the list does not have exactly one element (for example say it has three elements), we return the last of its tail, which in the example is a two-element list; the last of the two-element list is the last of its tail, which has one element. The important fact to note is that the arguments of successive recursive calls are simpler and simpler: this is very typical when recurring over data structures20, and a different behaviour is nearly always an indicator of errors.
Let's track down how this function works for, say,
[45; 43; -4; 35],
step by step:
last [45; 43; -4; 35]:
is
[45; 43; -4; 35] a one-element list? No (we are in case 2), return
last [43; -4; 35].
last [43; -4; 35]:
is
[43; -4; 35] a one-element list? No (we are in case 2), return
last [-4; 35].
last [-4; 35]:
is
[-4; 35] a one-element list? No (we are in case 2), return
last [35].
last [35]:
is
[35] a one-element list? Yes (we are finally in case 1), return
head [35], i.e.
35.
"Coming back" to the first function call we have:
last [45; 43; -4; 35] ⇒
last [43; -4; 35] ⇒
last [-4; 35] ⇒
last [35] ⇒
35, which is the value we were expecting.
Notice that in the definition of last it is assumed that the
argument is not
[]: if x is
[] then the evaluation of tail x (inside the guard empty (tail x)) fails.
last is a partial function, being undefined on
[]:
[] |→ _|_ .
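As a sketch, here is how last could be entered at the epsilon interpreter prompt (syntax introduced in the next chapter), assuming that define accepts recursive references (otherwise the fix fixpoint operator listed later in this book would be needed):

define last = \ x . if empty (tail x) then head x else last (tail x);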
Say you want to compute the list of integers from a given value to another
given value; for example, if we pass
1 and
5 to the function, we
expect to be given back as result
[1; 2; 3; 4; 5]; if the first
parameter is greater than the second one, then we expect the empty
list
[]; if we use the same value x for both parameters we
expect
[x]. We are going to call this function
interval.
interval takes two integer parameters and returns a list of integers; we write it curried, so we may think of it as a function taking an integer parameter and returning another function, which takes another integer parameter and returns a list of integers:
interval : integer →
(integer →
list of integer)
interval is
defined as
\ a . \ b . if a > b then [] else a :: (interval (a + 1) b)
As you could imagine interval is a (curried) function taking two parameters; we call them a and b, respectively.
As often happens, there are two cases:
We simply return the empty list, as the definition says.
The list that we are going to return will surely contain a as
the first element; the rest of the list will be the interval from
(a + 1) to b; so we cons a to
(interval (a + 1) b).
Let's examine an example of application, say
interval 5 7, step by step:
interval 5 7: is 5 greater than 7? No, so we are in the second case: we return
5 :: (interval (5 + 1) 7).
(interval (5 + 1) 7), i.e.
interval 6 7: is 6 greater than 7? No, so we are again in the second case: we return
6 :: (interval (6 + 1) 7).
interval (6 + 1) 7, i.e.
interval 7 7: is 7 greater than 7? No (it's equal to it, not greater than it), so we are in the second case: we return
7 :: interval (7 + 1) 7
interval (7 + 1) 7, i.e.
interval 8 7: is 8 greater than 7? Yes, so we are in the first case: return
[].
Now we have evaluated everything we needed: "coming back" to the first function call we have:
interval 5 7 ⇒
5 :: (interval 6 7) ⇒
5 :: (6 :: (interval 7 7)) ⇒
5 :: (6 :: (7 :: (interval 8 7))) ⇒
5 :: (6 :: (7 :: [])), i.e.
5 :: 6 :: 7 :: [],
which can be abbreviated into
[5; 6; 7], the value we were expecting
to be given.
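As a sketch, interval could be entered at the interpreter prompt like this (again assuming that define accepts recursive references):

define interval = \ a . \ b . if a > b then [] else a :: (interval (a + 1) b);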
Notice the difference between the evaluation of interval and the evaluation of last: each call to interval, except for the last one, is expanded to an expression containing another call to itself among other things (here the expression is the cons of an object and another call to interval). In last the situation is simpler: each call, except the last one, is simply expanded to another call, with no other expressions involved which "surround" the recursive call:
interval 1 3 ⇒
1 :: (interval 2 3) ⇒
...
last [345; -50; 4555] ⇒
last [-50; 4555] ⇒
...
Said in another way, you have no need to "keep track" of temporary
results when evaluating a call to last, but you have when
evaluating a call to interval: for computing
interval 1 3
you need to compute
interval 2 3, then cons
1 to it; to
compute
interval 2 3 you have to compute
interval 3 3
and cons
2 to it, and so on. With interval you have
first to compute temporary values, then to "attach" them with
expressions. With last you can simply forget all the temporary
values before the last one; once you have computed them, you will not
need them anymore:
last [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] ⇒
last [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] ⇒
last [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] ⇒
...
⇒
last [16].
For all these reasons it's easy to see that last is somehow simpler than interval (apart from the number of parameters: this is not important in this context). last is said to be a tail-recursive function, while interval is not21.
We are going to explain more formally what it means for a function to be tail-recursive, later in this book. For now an intuitive understanding is enough.
Evaluating a call to a tail-recursive function with a computer is noticeably more efficient than evaluating a call to a plain recursive function: when using a functional language the programmer should strive to use tail-recursion whenever possible.
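As an illustration of the technique, interval itself can be rewritten in a tail-recursive style using a helper function which carries an accumulator; this is a sketch under the same assumptions as above, and interval_helper is just a hypothetical name:

define interval_helper = \ a . \ b . \ acc . if b < a then acc else interval_helper a (b - 1) (b :: acc);
define interval = \ a . \ b . interval_helper a b [];

Each call to interval_helper expands directly into another call with a smaller b, with no surrounding expression to keep track of; the result list is built in the accumulator, consing from the greatest element down, so it comes out in ascending order.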
Let's examine a recursive function called dontstop, defined in this way:
\ x . dontstop x
epsilon would detect (infer is the most appropriate term) its type as
dontstop : τ1
→
τ2
,
although the reason for this may not appear evident yet. The argument x can have any type, hence the generic type τ1 . But what is the type of the object returned by the function? The answer is that the function never returns, so in a sense the returned object has an unknown type; and there is no reason to suppose that the returned object has the same type as x, hence the new type τ2 . This may seem counterintuitive now, but there are deep reasons22 (which will be explained later in this book) not to write, say,
dontstop : τ1
→
nothing.
The behaviour of dontstop is quite simple to understand: let's
see it when applied to
[[10; 3]; [2]] (an object with type
list of list of integer, which is ok: the parameter x can
have any type, as we have just said):
dontstop [[10; 3]; [2]] ⇒
dontstop [[10; 3]; [2]] ⇒
dontstop [[10; 3]; [2]] ⇒
...,
never stopping. A call to dontstop expands to another call to dontstop, with exactly the same argument. The fact that the complexity of the argument does not lessen in successive calls usually indicates that evaluation does not terminate, as in fact happens in this case.
Any expression whose computation is non-terminating is said to diverge. If E diverges you can write E↑ . By contrast, if the evaluation of E terminates at some point then E is said to converge, and you can write E↓ .
dontstop is a partial function, being undefined for
at least some values of its argument (in this case for all the
possible values), and the meaning of dontstop can be expressed as
λ x . _|_ ; note, however, that here the
symbol "_|_
" stands for something different from the meaning of
last [], also expressed
by "_|_
". Here "bottom" stands
for non-termination, in the other case it stood for
error. The actual meaning of an expression including
_|_
will be specified case by case, when ambiguity can occur.
As a final comment notice that dontstop is a tail-recursive function.
Of course dontstop is illustrative as an example, but such a function should never be needed in actual programs.
This section is aimed at readers with some experience in imperative programming. If you are learning to program from this book you can safely skip this part.
The most obvious difference between functional languages and
imperative languages is that in functional languages there are
no side effects. For example in an imperative language you can
write something like
a := 67, or, to increment a variable,
a := a + 1. How can we increment a variable in a functional
language? The answer is simply that there is no way; what is
achieved with side effects in imperative languages must be dealt with
in some other way, in most cases using recursion.
This has some advantages: when you create a variable you give it a value, and you are sure that the value will never change; in imperative languages, instead, it's a common mistake to think that a variable has some value while it has actually been updated, possibly by some procedure you are not thinking of. This is impossible in a functional language. It's equally impossible to have uninitialized variables: you name it, you create it; there will never be an uninitialized-variable or null-reference error.
The absence of loops in functional languages captures the attention of many programmers even more than the absence of side effects; however the absence of side effects obviously implies the absence of loops: iterating means executing some command many times, but what would such a command do? There is no state to change, since there are no side effects. You just keep calling recursive functions and computing values until you reach the one you are interested in, and finally return it.
Recursion is not inherently more difficult to use than loops, but it requires a little adaptation to "think recursively". Don't be afraid if you find yourself stuck thinking "iteratively" at first: just try to express the same idea as a recursive function. In many cases, when you have finished, you will be surprised by the clarity of recursive code compared to iterative code.
In a functional language there is no need for pointers or references. Data structures are beautifully expressed without pointers or references23 using abstract and concrete types, which are very intuitive to define and to use, and not error-prone at all.
Concrete and abstract types will be fully dealt with, later in this book.
Semantics24 also gives some more justifications25 for our claims.
In functional languages a function is a first-class object, i.e. an object like any other: you can compute a function at run time, you can make a function return another function, you can define a function with no special syntax, you can write an unnamed lambda-expression in the middle of a bigger expression: for example
2 + ((λ
x . x + x) 3)
gives back
8 as result, as expected.
For an example of a function returning a second function, think of any curried function (look at its type if you don't understand at first).
This use of functions is natural and simple, but is forbidden in nearly all imperative languages.
In nearly all functional languages functions can have other functions as parameters (we speak about second order in this case), and the parameters of those functions can be yet other functions (third order), and so on. If there is no limit to the order of functions then the language is said to be ω -order26. epsilon is an ω -order language.
You will learn later in this book how higher-order functions are useful to write simple and compact programs; in many cases you can even use higher-order functions as substitutes for recursion.
Higher-order functions don't exist, or their use is seriously restricted, in imperative languages.
In functional languages a noteworthy property holds, named referential transparency: in practical terms it says that if an expression E has value v, you can replace every occurrence of E with v in a program, without changing its meaning. This makes programming clearer and less error-prone and allows the compiler to make some optimizations which would be impossible in an imperative language.
Many functional languages, epsilon included, are strongly typed, i.e. they recognize as invalid all the programs which contain type errors, such as multiplying an integer and a boolean, at compile time27. Many type errors are subtle, and in general having the compiler detecting them is a great help, saving time and frustration.
Most imperative languages allow unsafe use of types, and this may lead to errors being detected very late, even after program release.
Some functional languages28, including epsilon, infer types; it is the compiler that tells the programmer the type of an expression, and the programmer is saved from the pain of declaring, say, the type of every function parameter. The output from the compiler is a means to verify that the meaning of the program is really what the programmer intended.
Implementations of imperative languages do not usually provide type inference.
Purely functional languages, including epsilon and Haskell, completely
avoid the dangers of side effects forbidding the user to mix
input/output with normal computations, as a way
to preserve referential transparency29. For example, the epsilon
compiler refuses to accept code like
2 + input_integer + 3.
Not all functional languages have these restrictions, however. ML, for example, has some imperative features including side effects and I/O in imperative style.
Languages like epsilon and Haskell are said to be purely functional.
This introduction to functional programming, even if brief, may seem too abstract at first glance, but as the name functional language suggests the basic mathematical aspects are of fundamental importance: while programming in epsilon you will be defining recursive functions with complex types all the time.
If you have not understood at least the basics of types, lists and recursion you should read this chapter again, paying particular attention.
Of course any suggestion for improving this documentation is welcome, but we deem this chapter particularly important. You can use the public mailing-list bug-epsilon@gnu.org to talk with us about these matters. No subscription is needed.
However, don't be afraid if you still have some doubts; most of the same concepts which were outlined here will be presented in practical terms in the next chapter.
In this chapter you are going to learn the basics of the epsilon language by using the tools yourself.
We are now assuming that the epsilon meta-interpreter is already implemented and working. This is not yet true, but you can use the temporary quick-and-dirty REPL30 in the meantime.
You can invoke the temporary REPL simply typing
epsilon.
In the same way, in this chapter we purposefully ignore the bytecode interpreter eVM, which is likely to disappear in the future, when the interpreter is ready.
An epsilon program can be run using one of two distinct tools, which are useful in different situations: the interpreter and the compiler.
When used in the way explained above, the interpreter is said to act as a REPL, i.e. "Read-Eval-Print Loop": it reads a piece of code, evaluates (executes) it, prints back the result and starts again.
This way of working is very comfortable when you are developing a new
program: it makes it easy to write some code, test it immediately, and
fix it quickly if an error is found.
There is a drawback, however: the code runs slowly and uses much memory.
Using the compiler you use an "Edit-Compile-Run Loop"32 approach. Note that it is not a program that "loops": it is you who must manually edit the files, save them, compile them, wait for the translation to finish, and execute the translated program.
The advantage of this approach is the high speed and efficiency of the translated code. Its drawbacks are the slowness of the translation, and the general clumsiness of the approach. The compiler is the right tool to use when you have finished writing and testing a program which works well, and want it to run fast.
A program run with the interpreter (interpreted) or translated by the compiler (compiled) behaves identically: only speed and memory use differ. You do not have to worry about compatibility, since the interpreter and the compiler support the exact same language.
The interpreter is also better suited to learning the language and experimenting. In the rest of this chapter we are going to assume you use the epsilon interpreter.
Try starting the interactive interpreter: at the command prompt of your system, type
# epsilon
The interpreter will show a banner similar to
-------------------------------------------------------------------------------
 (ASCII-art "epsilon" logo)                                  version 0.2.1CVS
-------------------------------------------------------------------------------
GNU epsilon 0.2.1CVS, Copyright (C) 2002, 2003 Luca Saiu
GNU epsilon comes with ABSOLUTELY NO WARRANTY; for details type `:no-warranty'.
This is free software, and you are welcome to redistribute it under certain
conditions; type `:license' for details.

Welcome to the epsilon meta-interpreter. Type :? for help.

1 >
The prompt
> followed by a blinking cursor means that the
interpreter is ready to accept your code; now try typing
2 + 2;
(remember the trailing semicolon), and pushing <Enter>. The
interpreter will answer
- : integer
4
1 >
The first line means that the expression you entered, indicated by
-, has integer type. The second line shows the computed value, which, as you were expecting, is 4. The third line is a new prompt; the interpreter is ready to accept more code.
For exiting the interpreter, type
:quit
(note the leading colon, and the absence of a trailing semicolon) and push <Enter> at the prompt. Another way to exit the interpreter is by pressing <Ctrl>-<D>.
Pay attention to the syntax:
:quit,
:help and
:license are commands directed to the interpreter itself, in
the sense that they don't deal with your epsilon program. Interpreter
commands need a leading
: and no trailing
;.
Expressions such as
2 + 2;, instead, are part of the epsilon
syntax. They need no leading
: and they do need a trailing
;.
Now start the interpreter again, typing
epsilon at the
command prompt of your system.
Try making some computations with integers; parentheses are
used to group subexpressions to be computed before, as in arithmetic.
The 'times' symbol is written as an asterisk (
*), the 'divided'
symbol is written as a slash (
/).
1 > (2 + 6) * 2 / 4;
- : integer
4
You can try other more complex expressions if you like.
The expression above was a query: you asked the interpreter to evaluate an expression for you, and you were interested in the result. Queries are a common way, among other things, to test a function you wrote, supplying a value and verifying that the result is what you expect. Let's now show how to define a function, starting from a very simple one.
Say you want to define a function adding 3 to its only (integer) argument: you learnt in the previous chapter that such a function is written as λ x . x + 3. Since the letter λ
is usually not present on keyboards, epsilon uses the backslash
(
\) character instead of it. So try entering
\ x . x + 3;
What you get as an answer is
- : integer -> integer
<function>
which maybe is not what you were expecting. Let's examine the answer of the
interpreter: the first line says that the expression you entered has
type integer → integer, which is right; the
second one says that the value of your expression is a function;
it often doesn't make much sense to write functions out: they are usually complex and
not very useful as output from the interpreter (they are useful
as input for it). Hence the interpreter just writes
<function> when the result of your computation is a
function. And indeed it is, in this case.
The problem is that you wrote a function, but it still was a
query. For a definition there is need for a different syntax.
Of course this syntax exists, and it is very simple: just write, for
this same example,
define f = \ x . x + 3;
The interpreter answers saying just
f : integer -> integer
Now you have given your function the name f (you could have
used any different name, of course). You can now use your function
f in queries and in other definitions: try
f 10;
The result is 13, as you expected.
We have shown examples of function definitions, and indeed that is the
most common case, but you can make definitions for objects of
any type, not necessarily functions. The following example
shows several non-function definitions:
define twenty = (\ x . x * 2) 10;
define forty = twenty + twenty;
define this_is_a_string = "Hello, world!";
define pi = 3.14159265358979323846264338327;
define empty_list = [];
As we already said in Functional programming tutorial, a
boolean value (also called truth value) is either
true or
false. Booleans are useful in a wide range of
contexts. One of the simplest is
in a query comparing two objects: "is 1 less than 2?"
1 < 2;
- : boolean
true
A slightly more complex query (note that
>= stands for
`greater or equal
', and
<= stands for `less or equal
'):
(f 1) >= (f 2);
- : boolean
false
You can always think of reductions if this helps you:
(f 1) >= (f 2) ⇒
((\ x . x + 3) 1) >= ((\ x . x + 3) 2) ⇒
(1 + 3) >= (2 + 3) ⇒
4 >= 5 ⇒
false
You can also directly use the constants
true and
false:
try writing the trivial query
true;
The interpreter will answer
- : boolean
true
You can use the usual logical connectives not , and , or
and xor with boolean expressions. "or " is also called "inclusive or", and "xor " is also called "exclusive or".
Let us explain the meaning of the boolean connectives, where e, e_1 and e_2 are epsilon expressions with boolean type. The result always has boolean type, too.
The meaning of the logical connectives is:
not e reduces to true if and only if e reduces to false;
e_1 and e_2 reduces to true if and only if both e_1 and e_2 reduce to true;
e_1 or e_2 reduces to true if and only if at least one of e_1 and e_2 reduces to true;
e_1 xor e_2 reduces to true if and only if e_1 and e_2 reduce to different truth values.
As we said above, boolean connectives applied to boolean objects yield other boolean objects, so they can be combined to form boolean expressions of any complexity. Try the following query with the interpreter:
(true and (not not false)) xor ((1 < 2) or false);
The result is true. Let us show why:
(true and (not not false)) xor ((1 < 2) or false) ⇒
(true and (not true)) xor (true or false) ⇒
(true and false) xor true ⇒
false xor true ⇒
true,
which is to say that reductions apply to boolean expressions as to any other type of expression.
Here are some more sample queries; try computing them in your mind or with paper
and pencil using reductions before using the interpreter:
true and (true or false);
(1 < 2) xor false;
not ((20 < 22) and true);
if..then..else
In the examples of Functional programming tutorial we used the
conditional operator if..then..else several times, without explaining the details.
The syntax is
if guard
then expression_a
else expression_b
where guard , expression_a and expression_b are epsilon expressions. There are two constraints:
guard : boolean
expression_a : τ1 and expression_b : τ1 , for the same type τ1 .
The intuitive meaning is: evaluate guard
; then, if it reduces to true
then reduce the whole
if..then..else expression to expression_a
,
else if it reduces to false then reduce the whole
if..then..else expression to expression_b
.
Said more formally:
if guard ⇒ true, then
(if guard then expression_a else expression_b ) ⇒ expression_a ;
if guard ⇒ false, then
(if guard then expression_a else expression_b ) ⇒ expression_b ;
if guard ↑ , then
(if guard then expression_a else expression_b )↑ .
Some brief comments about the type constraints:
The first constraint is obvious34: for deciding
between two options you need a
boolean: any other type
(
integer,
string,
list, etc.) would not be the
right thing.
To understand the second constraint, try entering the query
if 1 < 2 then 1.0 else "abc";
This will lead to an error, since the second constraint was violated:
1.0 and
"abc" have different types (
float and
string, respectively). This is reasonable: in an actual program
it would be very difficult35 to do something reasonable if the two
branches (the "
then branch" and the "
else branch")
have different types; and there are also other reasons: which
type would you give to the expression
\ x . if x then 1.0 else "abc"?
You would not be able to decide between
boolean → float and
boolean → string.
An example: the function monus is somewhat famous: it takes two
numbers x and y, and returns x - y if it is
not negative, else returns 0. Let's see a definition:
define monus = \ x . \ y . if x - y >= 0 then x - y else 0;
Try calling monus:
monus 10 12;
- : integer
0
monus 12 (5 + 5);
- : integer
2
A final note for imperative programmers: if you know imperative
programming, you might ask whether an
if..then operator,
without
else, exists. The answer is a strong no: in a
functional language an expression must always be reducible to
something: it is not acceptable to say "if a is less than b
then 10"; and if a is not less than b, what are we
going to return? An explicit
else branch is always needed.
let..be..in
In many cases it is useful to have an "abbreviation" for a given subexpression, which is used more than once. For example, say you want to compute
2^5 + 3^5 + 4^5 + 5^5 .
A way to compute it with a query is:
2 * 2 * 2 * 2 * 2 + 3 * 3 * 3 * 3 * 3 + 4 * 4 * 4 * 4 * 4 + 5 * 5 * 5 * 5 * 5;
The above expression is perfectly good for the interpreter, but not
very readable by humans.
The
let construct allows you to write, instead,
let f be \ x . x * x * x * x * x in (f 2) + (f 3) + (f 4) + (f 5);
The meaning is quite intuitive: the name f is temporarily used for
(bound to is the correct term) a function which takes a number
x and returns x^5
; this name occours four times in the
following code (said the body of the
let
expression), as a placeholder for the function
(λ x . x ⋅ x ⋅ x ⋅ x ⋅ x).
Outside the
let expression, this association of a value to the name
f (this binding of f) is not visible.
If you expand the body replacing every occurrence of f with
the value which is bound to it, you obtain an equivalent expression:
in fact the whole query above has exactly the same meaning of
((\ x . x * x * x * x * x) 2) + ((\ x . x * x * x * x * x) 3) + ((\ x . x * x * x * x * x) 4) + ((\ x . x * x * x * x * x) 5);
This expansion shows how much
let can make programs more readable.
As a side note, writing
a * a * ... * a (with a
occurring b times) is not the most clever way to compute a^b
.
The right way is using the power operator
**, which
allows to simply write
a ** b. We did not do so above just
because using
** was not convenient for us to
illustrate the
let construct: we needed some more "visual clutter".
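Using **, the original computation becomes the straightforward query below, which should evaluate to the same value, 4424:

(2 ** 5) + (3 ** 5) + (4 ** 5) + (5 ** 5);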
To do: syntax (single binding), intuitive semantics, more examples, multiple binding
To do: move this part to the beginning of this part, with a more precise explanation of substitutions and reductions.
A free occurrence of a variable is an occurrence of the
variable which does not refer to an inner λ
or
let. An occurrence which is not free is called bound.
For example, if we replace the
free occurrences of
x with
100 in
x + ((\ x . x + 1) (x + let x be 1 in (x + y)))
we obtain
100 + ((\ x . x + 1) (100 + let x be 1 in (x + y)))
Explanation:
The first x is free.
The x in x + 1 is bound by the inner \ x ..
The x in x + let ... is free.
The x in (x + y) is bound by the inner let x be.
Free occurrences were briefly introduced here since they will be needed once in the next subsection. This same topic will be covered at length later in this book.
A first approximation36 of the syntax of a
let expression is
let variable_a
be expression_a
in expression_b
Here is the intuitive semantics: the subexpression expression_b
typically contains one or more occurrences of variable_a
, even if
this is not required; every free occurrence of variable_a
in
expression_b
is replaced by the value which expression_a
reduces
to, and the whole
let expression reduces to the modified expression_b
.
More formally:
if expression_a ↑ , then
(let variable_a be expression_a in expression_b )↑ .
If expression_a ⇒ y, then
(let variable_a be expression_a in expression_b ) ⇒ expression_c ,
where expression_c is obtained from expression_b replacing every free occurrence of variable_a with y.
An example: let's show the evaluation of
let x be 1 + 2 in x + x - 1.
First we reduce expression_a (here 1 + 2):
1 + 2 ⇒ 3. Ok, expression_a ↓ .
Then we replace every free occurrence of variable_a (here x) in expression_b (here x + x - 1) with the value we have just computed (here 3):
3 + 3 - 1.
The whole let expression reduces to what we have just computed:
let x be 1 + 2 in x + x - 1 ⇒ 3 + 3 - 1
Finally:
3 + 3 - 1 ⇒ 6 - 1 ⇒ 5
To do: computability:
let is not needed for Turing-completeness
We are now going to define the factorial function with the epsilon
interpreter. The definition, as we already said in
Functional programming tutorial, is
λ n .
if n = 0
then 1
else n ⋅ (fact (n - 1))
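As a minimal sketch, the definition could be entered at the interpreter prompt as follows, assuming that define accepts recursive references (otherwise the fix fixpoint operator listed later in this book would be needed):

define fact = \ n . if n = 0 then 1 else n * (fact (n - 1));
fact 5;

The query fact 5; should evaluate to 120.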
To do
In this part we give a complete and formal description of the epsilon language, of its library and tools.
Some notions of Semantics and Languages would help to understand the mathematical parts, but they are not essential.
This is essentially reference material: feel free to skim it at a first reading.
declare         declaring construct
define          naming construct
if..then..else  conditional construct
let             block construct
fix             fixpoint operator
The programs in the epsilon distribution, like all the other GNU programs, accept the following two options:
--help
-?
--version
-V
Every option in the long form
--XXXX also has the negative form
--no-XXXX, which does the opposite.
epsilon    interpreter
epsilonc   compiler
To do: non-option parameters
The
epsilonc compiler translates epsilon module source
files into eAML. By default it also calls the
eamlas
assembler, the
eamld linker, the
eamx2c
C code generator, and finally the system C compiler to
generate a full native executable program.
If any of the intermediate passes fails the compiler writes an error message to the standard error and exits with failure.
The default behavior is all that is needed in simple cases; for
more complex programs it's convenient to direct
epsilonc via
command-line options to stop after any stage of compilation, allowing
the user to manually call the other translators.
The program accepts the options described below.
--verbose
-v
--show-types
-t
This option is on by default.
--unescaped-string
-s
With this option, when printing a string such as
"ab\ncd" the generated program would print
ab, a newline character and
cd, without surrounding double quotes.
--generate-eaml
-S
--generate-eamo
-c
--generate-eamx
-x
--generate-eama
-a
--generate-c
-C
--generate-scheme
--cc-options=XXXX
--main=XXXX
-m XXXX
--output=XXXX
-o XXXX
eamlas       assembler
eamld        linker
eamx2c       eAM executable to C compiler
eamx2scheme  eAM executable to Scheme compiler
epsilonlex   scanner generator
epsilonyacc  parser generator
This part contains a detailed description of the implementation of epsilon.
It is of no particular utility for simple users of epsilon, except for those who want to get a deep feeling of how things work "below". It's very important, instead, for programmers who want to modify epsilon to extend it, or to re-use a part of its code for some other purpose 37.
The chapters of this part have somewhat stronger prerequisites than the rest of the book: proficiency with compiler and run-time support design, C, flex, Bison, bash and Scheme is required to fully understand the sources. Some notions of operating systems would also help.
This chapter describes the implementation of epsilon: how it was written, and why it was written the way it is.
As any nontrivial software project, epsilon is structured in layers.
To do: talk about the REPL
At the top level there is the compiler, translating a module written in epsilon into a lower-level form. This form is called eAML, for "epsilon Abstract Machine Language". The compiler outputs a textual form of the eAML language, relatively easy for humans to read. This helps the developer of epsilon to test and debug the compiler, and also allows writing code directly in eAML. The eAML language is quite low-level and resembles an assembly language. It is an imperative language, making explicit the control flow, the ordering of the computations and the environment management.
Under the compiler there is the second component, the assembler, translating an eAML module from the textual form into a binary form called bytecode object file, so that the computer can deal with eAML in a more efficient way. Instructions are translated one by one with no important modifications. It's simply a change of format.
The linker takes several bytecode object files (normally each one derives from an epsilon source module) and links them into a single larger bytecode object file; this is necessary since the lower part of the system can only deal with one bytecode object at a time. The linker can also read or write a bytecode archive file, which is a library of bytecode object files, typically with some external references not yet resolved.
The bottom part of the system, called the epsilon Abstract Machine or eAM, supports execution via one more translation pass: the bytecode-to-C translator compiles a bytecode object file into a C source program, to be compiled by an optimizing C compiler into native machine code, and finally executed. The drawback of this approach is the delay due to the compilation of optimized C code (the delay of bytecode-to-C translation is negligible), but the runtime speed of the generated code should be very high.
epsilonc    compiler
To do: I need to rewrite the compiler in epsilon, and to document it.
eamlas    assembler
The
eamlas assembler is a simple single-pass translator with
backpatching, written in C with flex and Bison.
A few words about source file organization before starting: the
assembler source files are in the
eam/ subdirectory. For each
category of instructions with the same format (e.g. with integer
parameters, or with no parameters, or with a label parameter) there is
a subdirectory under
eam/c_instructions/, containing the code for
the instructions, one instruction per file. For example, the code for the
s_nlcl instruction, having two integer parameters, is in
the file
eam/c_instructions/integer_integer/s_nlcl.
To ease the development of epsilon and to provide better structuring,
the source files
eamlas.l and
eamlas.y are not written by
hand; instead they are automatically generated by the bash scripts
make_eamlas_l and
make_eamlas_y. The scripts scan the
instructions/ subdirectories to find the opcodes of all the
instructions and to divide them into categories after the format of
parameters. The division into categories is useful while generating the
frontend files
eamlas.l and
eamlas.y.
The advantage of this approach is evident: to add, remove or rename an
instruction it is sufficient to work only on the single file
for that instruction: the assembler (together with the other low-level
parts of the system) is updated automatically.
The output bytecode is written into the file using the module
bytecode.c (also used by other parts of the system).
Other than
bytecode.c, nothing more than the generated
eamlas.l and
eamlas.y is needed; the main logic is in
eamlas.y: just a single scan in which all the found
instructions are memoized, and all labels uses and definitions are
stored in a data structure (essentially an hash table). At the end of
the parsing all label references are resolved with a backpatch, and
the output file is finally written.
The epsilon Abstract Machine is relatively complex, and deserves a whole chapter. See The epsilon Abstract Machine.
eamold    linker
To do: write the linker and document it
eamo2c    bytecode-to-C translator
To do: document eamo2c internals
epsilonlex   scanner generator
epsilonyacc  parser generator
To do: talk about how to create new eAM instructions.
The epsilon Abstract Machine, or eAM, is a model of the operations involved in the execution of epsilon programs.
The eAM is imperative and, at the time of this writing, sequential; many functional properties of the epsilon language such as referential transparency and independence from evaluation order are lost in the translation from epsilon to eAML.
The eAM is relatively low-level, based as it is on a stack, a heap and an array of registers. The garbage collector is run automatically, even if it can be tuned with some special instructions.
It is worth repeating that the eAM is an abstract machine, and not a virtual machine. This is to say that there is not necessarily a step-by-step interpretation of bytecode instructions. The eAM model is only an abstraction of the functionality which is available at this level; in the implementation eAM instructions are translated into C and then compiled into native code with optimizations, or into Scheme code and then passed to Guile. However, for ease of implementation and for better understanding, it's also useful to think of the eAM as a proper machine with its registers, stack, heap and instructions. Just remember that this does not mirror the execution model.
Any datum used by the eAM is of exactly one of these types: integer, float, pointer, array, wide integer, wide float or wide wide float.
Pointers may be null, like
NULLs in C. Internal pointers are forbidden: pointers are not allowed to refer to memory addresses inside a word or inside an array. Pointers are guaranteed to be exactly as wide as integers.
The internal representation of floats, wide floats and wide wide floats should follow the IEEE 754 Standard on modern architectures.
Note that no booleans, characters or strings are provided. Objects which in epsilon have these types, or higher-order types (such as epsilon functions, lists or tuples) are implemented using only the above eAM types.
The integer, float and pointer types are collectively known as word types. The reason is that, in all reasonable architectures38, any word object fits exactly in a physical machine general register. By contrast array, wide integer, wide float and wide wide float objects may be larger than a physical word. For this reason word objects are typically faster to manipulate.
No objects smaller than a word are provided.
Note that no run-time type tagging exists39: it's up to the
compiler which generates eAML code to check for type errors at compile
time, or to arrange runtime checks generating appropriate instructions
when needed. When type tagging is needed on an object a (for
simplicity you can think of a C
union; all other cases can be
reconducted to this one), it can be realized implementing a as an array
whose (say) first element is an integer value discriminating between
all possible types that a can assume at run-time; the second
(and third, fourth and so on if needed) element holds the proper datum.
References are managed via pointers, directly from runtime-support structures such as the stack or the registers, or from elements of array type.
Data structures can be realized with arrays containing pointers (and other word objects, if needed). Cyclic data structures are allowed without restrictions.
Storage allocation is realized with explicit eAM instructions, but storage reclamation is automatically managed by the garbage collector.
Every epsilon object has an underlying representation in the eAM; epsilon objects of most basic types are quite easily mapped to eAM objects of word types; for higher-order objects the mapping is more complex.
The eAM deals with non-word objects using pointers, which are word objects: for example a list is represented with the usual pointer-based data structure, and the whole list is referred to using a pointer to the first cons (or a null pointer if the list is empty). To summarize, an epsilon datum which does not fit in an eAM word object is represented with non-word objects, and a pointer (a word) referring to "the first element", whatever we mean by "the first element". Note that internal pointers are forbidden in the eAM, so the first element must be a whole eAM object (which can be an array).
We are now going to describe the mapping in its details.
integer, character and boolean
The epsilon types
integer,
character, and
boolean
are easily mapped into eAM integers.
Note that a character also uses a full word; this enables the use of modern encodings such as Unicode (even if such support is not yet implemented). GNU epsilon is a new language, and we deliberately chose not to be restricted by obsolete 8-bit encodings such as ASCII or Latin1.
The epsilon boolean
false is represented as the eAM integer
0.
true is represented by any non-zero eAM integer.
When held in the stack or in a register these objects are copied rather than referred by a pointer. The rationale behind this is that it would be a waste of time and memory to hold pointers to immutable objects (remember that epsilon is a functional language), when the pointer has the same cost as the whole object.
float
The epsilon type
float is trivially mapped into the eAM type
float. A float object fits in a machine word.
Floats can be directly held in the stack or in a register: there are no pointers to float objects. The rationale is the same as the one for the case above.
epsilon tuples are mapped into eAM arrays holding the representation of each element; the order of the elements in the eAM representation always reflects the order of the elements in epsilon.
No information about the length of the tuple is held at runtime, since if the program was correctly compiled no bound-checks are ever needed at runtime for tuples.
When held in a register or in the stack the tuple is always referred by a pointer to the eAM array which represents it.
epsilon
arrays of size n are mapped into eAM
arrays of size n + 1, where the first element is an
integer holding the size of the array, and the following elements are
the representation of the actual elements.
Holding the information about the length at runtime enables the eAM to make bound checks.
Empty arrays follow the general rule: they are represented as eAM arrays of size 1, where the only element is an integer with value 0.
When held in a register or in the stack the array is always referred by a pointer to the eAM array which represents it.
epsilon
strings are represented just as if they were arrays of
characters.
Note that this representation allows computing the size of a string in O(1).
epsilon
lists are represented with the usual pointer-structure:
each cons holds a pointer to the next one.
Each cons is represented as an eAM array of size 2, where the first
element (the
head, or car) holds the representation of
an actual list element, and the second element (the
tail, or
cdr) holds a pointer to the rest of the list,
or a null pointer if the rest of the list is
[] (nil).
The empty list
[] has no representation.
When held in a register or in the stack the list is always referred by a pointer to the eAM array which represents its first cons, or by a null pointer if the list is empty.
To do
To do
The objects of an abstract type actually have another type (said the implementation type), which is usually hidden in the module which defines operations on them.
epsilon objects of abstract types are represented as objects of their implementation type; abstract types carry no representation penalty. They are essentially gratis.
We are now showing the representation of some epsilon objects as eAM objects. We describe what appears in a register holding each of the sample epsilon objects:
12
The eAM integer
12.
1.323
The eAM float
1.323.
(1, 0.4)
A pointer to an eAM array of two elements: the first element holds
the eAM integer
1, the second one holds the eAM float
0.4. Note that the size of the tuple (2) is not needed at
runtime, so it is not explicitly stored anywhere.
<| 1, 0.4 |>
A pointer to an eAM array of three elements: the first element holds
the eAM integer
2, which represents the size of the array; the
following elements are the eAM objects
1 and
0.4.
Note how in the eAM the proper elements of the array are indicized starting at 1: the zero-th element holds the size, which can be extracted very efficently (one or two assembler instructions in most processors).
<| |>
A pointer to an eAM array of one element holding the eAM integer
0.
It's worth repeating that epsilon empty arrays are not represented as eAM null pointers. This convention saves some nullity tests at runtime and makes the representation more uniform.
"Abc"
A pointer to an eAM array of four elements: the first element holds
the eAM integer
3, which represents the size of the string. The
following elements are eAM integers holding the integer representation
of the characters
'A',
'b' and
'c'.
[ 1; 2 ], which is an abbreviation of
(1 :: 2 :: [])
A pointer to an eAM array c1 of two elements: the first element
of c1 holds the eAM integer
1. The second element of
c1 holds a pointer to the eAM array of two elements c2.
The first element of c2 holds the eAM integer
2. The
second element of c2 holds a null pointer, i.e. a "reference"
to the representation of
[].
Note that the elements of the list are integers in this example. If they were something more complex, for example strings, the first elements of c1 and c2 would be pointers.
To do: examples of objects with behaviour.
Most non-word objects are stored in a garbage-collected heap. The management of the heap is entirely transparent even at the eAM level: many instructions allocate a datum on the heap and return a pointer to it, storing the pointer on the stack or in a register. The stack and the registers are provided to hold temporary data.
The stack is a simple LIFO container of word objects, divided into frames. Each frame represents an activation of a subprogram or a block, and is essential especially to implement recursion. Many eAM instructions work on the stack, taking operands from it or using it to return a computed value. Other instructions push or pop entire frames on the stack: they are needed to enter or exit a block, and to implement subprogram calls. The stack is not limited in size and can not overflow.
Some eAM instructions use the registers instead of the stack to make computations. Registers are faster to manipulate than the stack, but are provided in a limited number and are not sufficient by themselves to handle recursion.
Registers are created at initialization time; their number is defined by the program and cannot grow at runtime. Registers are divided in several groups, according to the data they can hold:
Word registers are also the only ones used by eAM instructions
operating with registers to do compuations on integers and pointers.
Computations40 with floats can not be done in word registers, even if
word registers can hold float values (since they are word-sized).
A similar distinction does not exist for the stack: there is only one stack, and it is limited to word objects (including floats): you can not directly push a non-word object onto the stack: you can only push a pointer to it.
This is an eAM program example using the stack to evaluate
the expression (2 + 5) - 1:
pshci 2 # Push 2 pshci 5 # Push 5 s_addi # Pop two values, sum them and push the result # The value of (2 + 5) is on the top of the stack pshci 1 # Push 1 s_subi # Subtract top from undertop, pop both and push # the result of the subtraction # The result is on the top of the stack
This program also evaluates the same expression, but uses registers
instead of the stack:
ldci $g1 2 # $g1 := 2 ldci $g2 2 # $g2 := 5 addi $g3 $g1 $g2 # $g3 := $g1 + $g2 # Now $g3 holds (2 + 5), i.e. 7 ldci $g4 1 # $g4 := 1 subi $g5 $g3 $g4 # $g5 := $g3 - $g4 # Now $g5 holds the result
The previous examples are meant to illustrate the two possible styles of evaluation, and nothing more. Both could be heavily optimized.
To do: talk about frame pointer, stack pointer, frame format
To do: document To do: an example
To do: design, implement and document To do: an example
The eAM includes a pseudo-generational mark and sweep collector with conservative pointer finding, not incremental at the time of this writing.
The implementation is relatively simple; it can be roughly divided into two parts: the allocator and the collector.
Objects are allocated from buffers called pages. There are two sorts of pages: homogeneous pages and large pages.
A homogeneous page contains objects (called homogeneous objects)
of the same size k (in words); all homogeneous pages have
exactly the same size S (tunable via a C macro
#define); hence
the number of objects in a single homogenous page depends on k:
pages with a smaller k contain more objects, and pages with a larger k
contain fewer objects.
In each homogeneous page non-allocated objects are linked via a simple unidirectional free-list. For each object there is an associated allocated bit, also stored in the same page, which is set to 1 if and only if the object is allocated. Each page finally contains a field holding the number of its allocated objects.
These simple data structures allow to perform the following operations with time complexity O(1):
Each large page also holds a GC bit for each object, used by the collector.
All homogeneous pages are allocated with alignment S using
memalign(); this allows to find the page of an homogeneous
object with a bit-masking of its address, a very fast operation. In the
implementation the value of S is computed from the value of
the C macro
PAGE_OFFSET_WIDTH defined in
eam/gc/heap.h.
A pointer to each homogeneous page is stored in the hash table
set_of_homogeneous_pages (defined in
eam/gc/homogeneous.c). This makes possible to check whether an
object is homogeneous with complexity O(1) in the average case
(one bit-masking plus one hash table access).
Homogeneous pages with various values of k are created during initialization, but of course not covering every possibile size. So, if an object of a given size is asked, the allocator could return a slightly larger object: in this case we speak about inexact allocation (in the other case we speak about exact allocation). The little waste of space implied by inexact allocation seems not to be a problem in practice.
There is an array indicized by any given possible k from 0 to
the maximum allowed value
MAXIMUM_HOMOGENEOUS_SIZE,
homogeneous_pages, defined in
eam/gc/gc.c; among the
rest it contains, for each size, its best approximation. So also
inexact allocation has complexity O(1) when a non-full page of
the right size exists.
The other fields of
homogeneous_pages are, for each k,
the bidirectional list of non-full homogeneous pages for objects of
size k and the bidirectional list of full homogeneous pages for
objects of size k. The list structures make easy adding or
removing homogeneous pages as needed.
A final note: even if the GNU system allows to free a block allocated with
memalign() there is no portable way to do it; so homogeneous
page are actually destroyed only on GNU systems41; on the other
systems they are created and kept allocated forever, in the hope that
they will be needed again.
Sometimes objects larger than
MAXIMUM_HOMOGENEOUS_SIZE words,
or even larger than S words, are needed. Some other structure is
needed, since those objects can not fit in homogeneous pages some
other structure is needed.
Managing a large page is simpler than managing a homogeneous page: a large page holds one and only one large object: large pages are created when allocating a large object, and destroyed when a large object is freed: there is no need to keep free-lists or allocated bits. Moreover freeing a large object immediately releases storage; this can be an advantage when really large sizes are involved.
Each large page also holds the GC bit42 for its object.
Large pages do not need a specific alignment: they are simply
allocated with
malloc() and freed with
free().
A pointer to each homogeneous object is kept in the hash table
set_of_large_objects, defined in
eam/gc/large.c. This
enables to check whether an object is large with complexity
O(1) in the average case (one hash table access).
All large pages are linked in the bidirectional list
list_of_large_pages, defined in
eam/gc/gc.c. Note that
all large pages are full, since when they become empty they are immediately
destroyed, and there is no intermediate condition: large pages can
only be full or empty.
A large page is just a thin shell enclosing its large object and the little bookkeeping information needed.
After the initialization performed by
void initialize_garbage_collector(),
declared in
eam/gc/gc.h, all data structures are set up and a
homogeneous page is created for the values of k belonging to a certain
predefined set Q43. No large pages are created at
initialization time. They are created at runtime, just when needed.
The interface of the allocator is very simple; all the needed
functions are declared in the header
eam/gc/gc.h.
Exact allocation is performed by
word_t allocate_exact(integer_t words_no).
allocate_exact() works by allocating an object from the first
page of the list of non-full homogeneous pages at
homogeneous_pages[words_no]; the list is always kept non-empty.
When the page gets full it is moved from the list of non-full pages to
the list of full pages at
homogeneous_pages[words_no]; if the
list of non-full pages becomes empty then a new page is created.
Inexact allocation is performed by
word_t allocate_inexact(integer_t desired_words_no).
If
desired_words_no is not greater than
MAXIMUM_HOMOGENEOUS_SIZE then
allocate_inexact()
computes the best approximated size by simply looking at the field
inexact_size of
homogeneous_pages[desired_words_no], then
calls
allocate_exact(); else it creates a new large page and
returns its object.
Note that allocation of large objects is always considered inexact.
Exact allocation is slightly faster than inexact allocation, but can be used only when the requested size k belongs to Q; if the requested size does not belong to Q the behaviour is undefined, which is a nice way to say that the program will most probably crash, and the collector will surely not work.
The header
eam/gc/gc.h also contains the interface to the collector.
void initialize_garbage_collector() transparently starts a new
concurrent thread which from time to time44 checks
whether a collection would be needed, and in the affermative case sets
the flag
int should_we_collect to a nonzero value.
The mutator has the responsibility to periodically45 check the flag, and request a collection if the flag is nonzero. Note that there is no need of synchronization here: one thread reads the flag but does not write it, the other one writes it but does not read it.
For each collection cycle the mutator must explicitly notify the
collector about all roots, calling
void add_gc_root(word_t p) or, when there is more then one root in a single buffer,
void add_gc_roots(word_t* buffer, size_t words_no).
The roots of the eAM are:
exception_value
exceptions_stack[i].environment, for each element i of the exception stack
String constants are not roots: there is no need to keep
them in the garbage-collected heap, so they are simply allocated with
malloc() at startup time. This saves a little time when marking.
After notifying the collector about the roots a call to
void garbage_collect() performs marking and sweeping46.
To do: more details? Implementation is quite "conventional" here...
It would be slow to perform a full mark at every garbage collection cycle, so the eAM collector implements a pseudo-generational marking algorithm. What we call "pseudo-generational" garbage collection is a particular case of the generational garbage collection, particularly simple but quite effective. The heap is conceptually partitioned into two generations:
Minor collections, performed relatively often, only scan the young generation, making the marking phase noticeably faster; note that marking time usually dominates over sweeping time.
Major collections, performed less often47, scan both the young and the old generation. Major collection are slower than minor collection but free more storage.
The main idea of pseudo-generational collection is that the GC bits are cleared only at the beginning of major collections: in minor collactions the old generation objects are seen as already marked, so they are not recursively48 scanned.
The mutator has no direct control over the generations. It can only
request a collection, and it's the function
garbage_collect()
to decide whether a minor or major collection is needed.
To do
The instructions of the eAM are described in full detail in eAM instructions.
This chapter describes in detail all the instruction of the epsilon Abstract Machine. Familiarity with the epsilon memory model and runtime structures is assumed.
This chapter is of no particular use for writing programs in epsilon. It is useful, instead, to understand how the internals of epsilon work and especially to extend the language or the runtime system.
Each eAM instruction is identified by a unique mnemonic in the textual form of eAML. We are going to introduce the rules which were used to choose the mnemonics names to make them more consistent and easier to remember.
By convention, when an instruction works on operands
with fixed type, a suffix of the mnemonic identifies that type:
i for integers,
f for floats and
s
for strings.
For example the
s_addi instruction executes an addition on
integer operands.
When more than one instruction exists doing the same work, but with a version taking a parameter from a register, another taking a parameter from the stack and still another taking one or more immediate parameters, the versions are easily recognizable from the prefix or desinence in their mnemonic:
_idesinence in its mnemonic.
s_prefix in its mnemonic.
For example,
s_addi adds two integer taken from the
stack, and
s_addi_i adds an integer taken from the stack to the
immediate integer which is the parameter of the instruction.
Some instructions are provided in two versions, one safe
(i.e. making runtime checks) but less fast, the other faster
but not safe. "Fast" versions are identified by a
f_ prefix in their
mnemonic. If the
s_ prefix is also present,
s_ preceeds
f_ in the mnemonic, such as in
s_f_divi.
In the following immediate integer parameters are indicated by n,
m, x, y or z;
strings are indicated by S, and labels by L:.
Register parameters are indicated by a dollar-sign (
$) followed
by an letter, such as $a or $x.
In this chapter we also describe instructions updating the stack or the registers: for brevity we follow these writing conventions:
"a";3.7]. The leftmost element represents the content of the
$0register, the second one of
$1, and so on. For the ...-notation we follow the same conventions as in writing the stacks.
For example:
||...|a|
s_addi_i n
||...|a+n|
For brevity's sake register updates can also be noted as
$a := EXP($b, ..., $z)
where $a is the updated register, and EXP($b, ..., $z) is any expression involving registers $b, ..., $z. Note that in the expression at the right of the ":=" symbol "$x" represents the content of the $x register, not its address.
If we stay at the level of the eAM, epsilon structures are not anything more than arrays (or tuples).
Arrays are shown in a visual way as sequences of objects separated by commas and surrounded by angular parentheses. Null pointers are written as "null". For example this
<1, <2, <3, <4, null>>>>
is the eAM representation of the epsilon list
[1; 2; 3; 4].
This, instead,
<<
"test", null>, null>
is the eAM representation of the epsilon object
(["test"], []),
and also of the epsilon object
[["test"]].
eAM instructions are divieded into several categories:
Arithmetic/logic instructions operating on integer values are essential since they are used even in the simplest programs. They do not involve memory management, and for this reason they are fast compared to other ones.
Some instructions taking two operands from the stack exist also in a
version with the second operand as an immediate parameter (for example
s_addi and
s_addi_i n). The versions with immediate
parameters are not always applicable, but faster.
In the following subsections we are going to describe every instruction in detail.
addi $a $b $c
The
addi $a $b $c instruction adds the
content of the $b register to the content of the $c
register, storing the result into the $a register:
$a := $b + $c
addi_i $a $b n
The
addi_i $a $b n instruction adds n to
the content of the $b register, storing the result into the
$a register:
$a := $b + n
andi $a $b $c
The
andi $a $b $c instruction stores a
nonzero value into $a if both $b and $c have nonzero
value, else it stores zero into $a.
divi $a $b $c
The
divi $a $b $c instruction divides the
content of the $b register by the content of the $c
register, storing the result into the $a register:
$a := $b / $c
If the content of the $c register is zero the execution terminates reporting an error.
divi_i $a $b n
The
divi_i $a $b n instruction divides
the content of the $b register by n, storing the result
into the $a register:
$a := $b / n
No division-by-zero check is made, since it would make never sense to use this instruction with n=0.
f_divi $a $b $c
The
f_divi $a $b $c instruction divides the
content of the $b register by the content of the $c
register, storing the result into the $a register:
$a := $b / .
f_modi $a $b $c
The
f_divi $a $b $c instruction divides the
content of the $b register by the content of the $c
register, storing the rest of the division into the $a register:
$a := $b mod .
ldci $r n
The
ldci $r n instruction updates the content of
the $r register to the integer constant n:
$r := n
modi $a $b $c
The
modi $a $b $c instruction divides the
content of the $b register by the content of the $c
register, storing the rest of the division into the $a register:
$a := $b mod $c
If the content of the $c register is zero the execution terminates reporting an error.
modi_i $a $b n
The
modi_i $a $b n instruction divides
the content of the $b register by n, storing the rest of
the division into the $a register:
$a := $b mod n
No division-by-zero check is made, since it would make never sense to use this instruction with n=0.
muli $a $b $c
The
muli $a $b $c instruction multiplies the
content of the $b register for the content of the $c
register, storing the result into the $a register:
$a := $b ⋅ $c
muli_i $a $b n
The
mul_i $a $b n instruction multiplies
the content of the $b register for n, storing the result
into the $a register:
$a := $b ⋅ n
nxori $a $b $c
The
nxori $a $b $c instruction stores a
nonzero value into $a if either both $b and $c
have nonzero content, or both $b and $c have zero content,
else it stores zero into $a.
ori $a $b $c
The
ori $a $b $c instruction stores a
nonzero value into $a if at least one of $b and $c
has nonzero content, else it stores zero into $a.
s_f_divi
This instruction is identical to
s_divi, except that it does not
check for the divison-by-zero error condition.
The program may crash if the top of the stack is 0; you should use this instruction only if you are definitively sure that the divisor is not zero.
s_f_divi is faster than
s_divi.
s_addi
The
s_addi instruction replaces the two integer objects on the
top of the stack with a single object with their sum as value.
||...|a|b|
s_addi
||...|a + b|
s_addi_i n
The
s_addi_i instruction replaces the integer object on the
top of the stack with the sum of it and the n parameter.
||...|a|
s_addi_i n
||...|a + n|
s_addi_i n is faster than
pshci n; s_addi.
s_andi
The
s_andi instruction replaces the two integer objects a
and b on the top of the stack with a single object with a
nonzero value if both a and b have nonzero value,
otherwise with a single object with zero value.
For example:
||...|-34|0|
s_andi
||...|0|
s_divi
The
s_divi instruction replaces the two integer objects on the
top of the stack with a single object with their quotient as value.
In case of division by zero this instruction prints an error message and terminates the execution of the program.
||...|a|b|
s_divi
||...|a / b|
s_divi_i n
The
s_divi_i instruction replaces the integer object on the
top of the stack with the quotient of it and the n parameter.
The program may crash if n is 0; no divide-by-error
check is done since it makes never sense to use
s_divi_i 0.
||...|a|
s_divi_i n
||...|a / n|
s_divi_i n is faster than
pshci n; s_divi, and
even than
pshci n; s_f_divi.
s_eqi
The
s_eqi instruction replaces the two integer objects on the
top of the stack with a single object with a nonzero value if the
integer objects are equal, otherwise with zero.
For example:
||...|a|a|
s_eqi
||...|1|
s_gti
The
s_gti instruction replaces the two integer objects a
and b on the top of the stack with a single object with a
nonzero value if a is greater than b, otherwise with
zero.
For example:
||...|172|200|
s_gti
||...|0|
s_gtei
The
s_gtei instruction replaces the two integer objects a
and b on the top of the stack with a single object with a
nonzero value if a is greater than or equal to b, otherwise
with zero.
For example:
||...|172|172|
s_gtei
||...|-1|
s_lti
The
s_lti instruction replaces the two integer objects a
and b on the top of the stack with a single object with a
nonzero value if a is less than b, otherwise with
zero.
For example:
||...|172|200|
s_lti
||...|-1|
s_ltei
The
s_ltei instruction replaces the two integer objects a
and b on the top of the stack with a single object with a
nonzero value if a is less than or equal to b, otherwise
with zero.
For example:
||...|172|172|
s_ltei
||...|-1|
s_modi
The
s_modi instruction replaces the two integer objects on the
top of the stack with a single object with the rest of their division
as value.
In case of division by zero this instruction prints an error message and terminates the execution of the program.
||...|a|b|
s_divi
||...|a mod b|
s_modi_i n
The
s_modi_i instruction replaces the integer object on the
top of the stack with the rest of the divion of it by the n parameter.
The program may crash if n is 0; no divide-by-error
check is done since it makes never sense to use
s_divi_i 0.
||...|a|
s_modi_i n
||...|a mod n|
s_modi_i n is faster than
pshci n; s_modi, and
even than
pshci n; s_f_modi.
s_muli
The
s_muli instruction replaces the two integer objects on the
top of the stack with a single object with their product as value.
||...|a|b|
s_muli
||...|a ⋅ b|
s_muli_i n
The
s_muli_i instruction replaces the integer object on the
top of the stack with the product of it and the n parameter.
||...|a|
s_muli_i n
||...|a ⋅ n|
s_muli_i n is faster than
s_pshci n; muli.
s_noti
The
s_noti instruction replaces the integer object a
on the top of the stack with a nonzero value if a has zero
value, else with zero.
For example:
||...|-34|
s_noti
||...|0|
s_neqi
The
s_neqi instruction replaces the two integer objects on the
top of the stack with a single object with value zero if the
integer objects are equal, otherwise with a single object with a
nonzero value.
For example:
||...|a|a|
s_neqi
||...|0|
s_nxori
The
s_nxori instruction replaces the two integer objects a
and b on the top of the stack with a single object with
zero value if exactly one of a and b has a nonzero
value, otherwise with a single object with a nonzero value.
||...|-34|0|
s_nxori
||...|0|
s_ori
The
s_ori instruction replaces the two integer objects a
and b on the top of the stack with a single object with a
nonzero value if at least one of a and b has
nonzero value, otherwise with a single object with zero value.
||...|0|-23|
s_ori
||...|1|
s_subi
The
s_subi instruction replaces the two integer objects on the
top of the stack with a single object with their difference as value.
||...|a|b|
s_subi
||...|a - b|
s_subi_i n
The
s_subi_i instruction replaces the integer object on the
top of the stack with the difference of it and the n parameter.
||...|a|
s_subi_i n
||...|a - n|
s_subi_i n is faster than
pshci n; s_subi.
s_xori
The
s_xori instruction replaces the two integer objects a
and b on the top of the stack with a single object with a
nonzero value if exactly one of a and b has a nonzero
value, otherwise with a single object with zero value.
||...|-34|0|
s_xori
||...|1|
subi $a $b $c
The
subi $a $b $c instruction subtracts the
content of the $c register from the content of the $b
register, storing the result into the $a register:
$a := $b - $c
subi_i $a $b n
The
subi_i $a $b n instruction subtracts
$n from the content of the $b register, storing the result
into the $a register:
$a := $b - n
swp $a $b
The
swp $a $b instructions swaps the contents of
the $a register and the $b register. The
semantics of this instruction might be summarized as:
$a := $b, $b := $a
where the two assignments take place in parallel.
xori $a $b $c
The
xori $a $b $c instruction stores a
nonzero value into $a if exactly one of $b and $c
has nonzero content, else it stores zero into $a.
mka $a $b
The
mka $a $b instruction assigns to the $a
register a new array with undefined content and with the content of
the $b register as size.
mka_i $a n
The
mka_i $a $b instruction assigns to the $a
register a new array with undefined content and with size n.
s_mka
The
s_mka instruction replaces the integer n on the top of the
stack with a new uninitialized array of size n:
For example:
||...|3|
s_mka
||...|<???, ???, ???>|
s_mka_i n
The
s_mka_i n instruction pushes a new uninitialized
array of size n on the top of the stack:
For example:
||...|
s_mka_i
The
ccod S instruction exectutes the C code contained in the
immediate string parameter S.
gc
The
gc instruction performs a full garbage collection cycle,
temporarily suspending the program.
This instruction is used when there is an urgent need of free memory. Normally the garbage collector would run concurrently with the program, without the need of executing eAM instructions for this purpose.
hlt n
The
hlt n instruction halts the program. The integer
parameter n is returned to the operating system as the process
exit code.
nop
The
nop instruction does absolutely nothing. It is used in
contexts where an instruction is expected but no effect is needed,
such as after a label which is the target of a jump.
For example:
# Compute a number and store it into $4: ... jz $4 END_OF_PROGRAM: ... END_OF_PROGRAM: nop
nop instructions have no effect on runtime performance.
This part deals with the problems of extending GNU epsilon with code written in other languages, and with the ways of interfacing epsilon with other languages.
Knowledge of the eAM (especially data representation, see Representation of epsilon data in the eAM) and experience with C programming are assumed.
Say you want to access a database, or to do some 3D graphics in an epsilon program. These are not unusual requirements.
It's relatively easy to do that in C: there is an interface someone else
has written (for example the client library of PostgreSQL
libpq, or Mesa3D); you just have to call some already defined
functions in your code.
Most other languages, such as C++, Java, Python and most Lisp implementations have some feature allowing you to call code written in C; this is a solution to the problem: instead of, for example, natively supporting PostgreSQL, most languages allow you to call your C code in some way, and your C code can make use of all the needed libraries.
This is also the solution provided by epsilon.
Let us start with a wrong solution, which was actually
implemented in an early version of epsilon. An "easy" way to extend
the library is to extend the eAM: for example, if you need access to the
C function
system(), you can just add a new eAM instruction
sstm, which takes a string, converts it from the epsilon
format into the C format, and calls the
system() function of
the C library.
They you must write some "glue" code to call the eAM instruction
sstm: you will need an epsilon function taking a string and
returning an action of integer:
execute_program : string -> i/o of integer
Note that now
execute_program will have to be written in
eAML! There is no way around this49, since you need to use the eAM instruction
sstm, which
is not generated by the epsilon compiler.
Also note that the aAM has changed: the new instruction has made the abstract machine incompatible with the old version; all epsilon libraries must be recompiled, too.
This solution is wrong for several different reasons:
The right solution is writing the extension code at a level which is somewhere in the middle between eAML and epsilon; you must be able to do that without recompiling the eAM, to retain compatibility.
The eAM provides a mechanism for linking a C library at
initialization time. The C library can define some functionalities
(either limited to pure calculus or comprising I/O) to be made
available to epsilon. Often a C library needs to refer to some other
shared library written to be called from C (such as
libpq). This can be linked at initialization time, too, and is
called a dynamic library. Dynamic libraries are not directly
visible from epsilon.
To do: I don't like this chapter. Rewrite most of it.
In this part some nontrivial examples of epsilon programs are presented; they are not necessarily useful by themselves or complete, but nonetheless they show some usage patterns quite common in epsilon programming.
In this context formal elegance and readability is considered more important than efficiency.
Of course also the example programs are free software, covered by the GNU General Public License (see Copying). The complete source code for all these examples is distributed along with GNU epsilon.
Like any other Lisp implementation μ -lisp is interactive: the implementation of its REPL constitues a good example of purely functional I/O.
It is a Lisp/1, i.e. it has a single namespace for variables and functions.
To do: scanner and parser...
To do...
The predefined symbols are...
To do: source overview
To do: example programs.
To do
This part will be a collection of various important non-technical information related to epsilon, licenses and references..
:: Functional programming tutorial
::: Functional programming tutorial
[]: Functional programming tutorial
bytecode.c: Internals overview
car: Functional programming tutorial
cdr: Functional programming tutorial
eamlas: Internals overview
eamlas.l: Internals overview
eamlas.y: Internals overview
empty: Functional programming tutorial
epsilonyacc: Introduction
exception_stack: The eAM garbage collector
exception_value: The eAM garbage collector
fact: Functional programming tutorial
head: Functional programming tutorial
id: Functional programming tutorial
make_eamlas_l: Internals overview
make_eamlas_y: Internals overview
memalign(): The eAM garbage collector
null: Functional programming tutorial
plus: Functional programming tutorial
succ: Functional programming tutorial
tail: Functional programming tutorial
A late thought: ε is also used to express errors in numerical analysis. Fun.
See for information about the GNU Project.
See for information about Guile.
When we say "bootstrap" dealing with a tool like a scanner generator or a parser generator we mean generating the scanner or parser for the tool itself.
A and B can also be the same set.
Formally there would be some difference between an argument and a parameter, but the difference is not important in this context. We will use these names interchangeably.
The successor of a number n is n + 1; for example the successor of 34 is 35.
Since the letter
λ
does not belong to most standard keysets,
epsilon uses the character
\ in its place. This convention
was inspired by the language Haskell.
reverse_number maps 2 into 1/2, 3 into 1/3, etc.
The name is in honour of Haskell Curry, the mathematician who invented this technique.
The factorial of n, written n!, is the product of all natural numbers from 1 to n included. For example 4! = 1 ⋅ 2 ⋅ 3 ⋅ 4 = 24. By definition 0! = 1.
The identity function maps an object into itself. An obvious definition is λ n . n
The name cons derives from the Lisp language, and is now a universally accepted way of denoting the "construction" operation inserting an element before a list.
This feature is called polymorphism. Polymorphism will be fully discussed later in this book.
In the Lisp language (hence by tradition) they are
called
null,
car and
cdr, repectively.
Here's one more case of polymorphism.
Applying
head to
[] raises an
exception. Exceptions are a general and powerful way to deal with
errors, and will be dealt with later in this book.
tail [] also raises an exception.
The idea of "computation step" is indeed quite subjective. In this book we say "a single computation step" to mean the minimum computation unit which is interesting in the context. The discussion about reductions will be informal.
For example lists as in this case, opposing to, say, naturals. This is not conceptually exact, but the complexity of a natural number seen as a data structure is not entirely evident.
When defined in this form. interval can also be defined in a tail-recursive form, but the definition becomes somewhat more complicated.
These reasons are bound to the type system of epsilon, i.e. the set of rules which govern the typing of expressions. epsilon supports the Hindley-Milner type system, also used by the languages ML and Haskell. The enforcing of such rules prevents many programming errors and can also make programs run faster. Details will be explained later in this book.
References (as in Java, or, to a lesser extent, in C++) do not to cause the same problems which pointers cause. However they are nonetheless error-prone: you can forget initializations by mistake, and in general references suffer from all the vulnerabilities which are bound to side-effects.
Semantics is a branch of Computer Science dealing with the formal meaning of computer programs. It's not required you know Semantics for using epsilon.
Semantics says pointers and references are related with memory, also named store. All side effects are also operations on stores. In a functional language there is no concept of store and all operations are made only on environments; imperative programs, by constrast, use both environments and stores. This is a deep reason why functional programming can be simpler than imperative programming.
In mathematics the letter ω
is used to indicate the cardinality, i.e. the number of elements, of the set of the natural numbers N .
i.e. before execution.
ML and Haskell are important exemples.
Other referentially transparent solutions exist in the functional world, such as linear types; however we decided to implement the I/O system following the lesson of Haskell, which in our opinion employs the most clean and usable way to make safe I/O.
The REPL, (Read-Eval-Print Loop), is a simple C program which takes an epsilon expression, compiles it into bytecode, and runs it on the eVM (epsilon Virtual Machine). The REPL as it is now has some serious flaws, and can not execute all the code which the compiler can run. You can always use the compiler instead of the REPL, at the cost of some additional complications. In the rest of this chapter we assume you are using the interpreter, which when it's ready will behave much like the current REPL.
The interpreter is also called meta-interpreter, or meta-circular interpreter. This means that the interpreter was written in the same language it implements: in this case the epsilon interpreter itself was written in epsilon.
Often called "Edit-Compile-Debug Loop", since it's very uncommon to write a nontrivial program which works without errors the first time.
If guard
↑
then we will never be able to
choose between the
then branch and the
else branch: we
will keep evaluating the guard for ever, without ever evaluating
either branch.
Even if it is absent in some languages such as Lisp and C; however nowadays it is widely known that such absence of type constraints can lead to many programming errors.
Some dynamically-typed languages
such as Lisp do permit having different types in the
then and
else branches; however this lack of compile-time checking
makes programming errors very frequent. It's rare to actually need
such a feature, and when it is really needed it can be easily simulated in
epsilon via concrete types. Concrete types will be explained
later in this book.
This first description does not cover
the "multiple
let", so it is not the complete syntax. It will
be explained later.
Remember that epsilon is free software ("free" in the sense of "free speech": see) covered by the GNU General Public License. See Copying for the full text.
These days, and in the foreseeable future, physical processors are 32-bit or 64-bit (the GNU Project does not support 16-bit machines, since they are long obsolete). 32-bit processors should have 32-bit pointers, and 32-bit general registers. 64-bit machines should have 64-bit pointers, and they should be able to do computations with 64-bit integers at assembly level. If they aren't, it's hoped that at least the C compiler provides support for 64-bit integer operations. We don't know of any counterexample; please write us to bug-epsilon@gnu.org if you know some, specifiyng.
opposing to Lisp and Smalltalk implementations, for example. The absence of type tags at runtime speeds up execution and reduces memory usage.
The idea of "computation" does not include copying: any word-sized value can be passed in a word register and hence blindly copied into the stack, the heap or another register. This generic operation does not depend on the type of the operand but only on its size, and it is reasonable to allow it in general registers.
The type of system is automatically determined at configure time.
The "bit" is effectively implemented with a word. In C it makes sense to use a bit vector, but a bit vector of just one element effectively takes more space than a bit.
The current algorithm allocates pages for
S = 1, 2, 3, ...
MAXIMUM_SMALL_HOMOGENEOUS_SIZE and
S= 2⋅
MAXIMUM_SMALL_HOMOGENEOUS_SIZE,
4⋅
MAXIMUM_SMALL_HOMOGENEOUS_SIZE,
8⋅
MAXIMUM_SMALL_HOMOGENEOUS_SIZE, ...
MAXIMUM_HOMOGENEOUS_SIZE.
MAXIMUM_SMALL_HOMOGENEOUS_SIZE and
MAXIMUM_HOMOGENEOUS_SIZE are
defined in
eam/gc/gc.h.
In the current
implementation the concurrent thread wakes up every
GC_TEST_TIMEOUT nanoseconds.
In the current implementation the mutator checks the flag right after each application of a recursive function.
In
the current implementation
garbage_collect() suspends the mutator.
The current
implementation uses a rough heuristic: one collection every
MINOR_GC_CYCLES_NO minor collections is
major (
MINOR_GC_CYCLES_NO is defined in the header
eam/gc/gc.h). This will be improved.
However old generation objects can be scanned from the roots. This is rarely a problem: roots should have not a very large size (at least if stack usage is not high, and high stack usage usually indicates subotimal use of tail-recursion).
Except modifying the compiler.
For more information about Lisp, and for realistic implementations, see. | http://www.gnu.org/software/epsilon/manual/epsilon.html | crawl-002 | refinedweb | 17,630 | 59.43 |
Documentation helpers¶
fabric.docs.
unwrap_tasks(module, hide_nontasks=False)¶
Replace task objects on
modulewith their wrapped functions instead.
Specifically, look for instances of
WrappedCallableTaskand replace them with their
.wrappedattribute (the original decorated function.)
This is intended for use with the Sphinx autodoc tool, to be run near the bottom of a project’s
conf.py. It ensures that the autodoc extension will have full access to the “real” function, in terms of function signature and so forth. Without use of
unwrap_tasks, autodoc is unable to access the function signature (though it is able to see e.g.
__doc__.)
For example, at the bottom of your
conf.py:
from fabric.docs import unwrap_tasks import my_package.my_fabfile unwrap_tasks(my_package.my_fabfile)
You can go above and beyond, and explicitly hide all non-task functions, by saying
hide_nontasks=True. This renames all objects failing the “is it a task?” check so they appear to be private, which will then cause autodoc to skip over them.
hide_nontasksis thus useful when you have a fabfile mixing in subroutines with real tasks and want to document just the real tasks.
If you run this within an actual Fabric-code-using session (instead of within a Sphinx
conf.py), please seek immediate medical attention.
See also
WrappedCallableTask,
task | http://docs.fabfile.org/en/1.5/api/core/docs.html | CC-MAIN-2018-05 | refinedweb | 209 | 59.9 |
#include <iostream> using namespace std; struct account { account(); int accountnum; float balance; float interestrate; }; account::account() { accountnum = 50; } struct account2 { account2(); account * data; void test(); }; account2::account2() { } void account2::test() { data->accountnum = 1000; } int main() { account test; account2 test2; cout << test.accountnum << endl; test2.test(); //Error cout << test.accountnum << endl; return 0; }
I have the following code. It contains two structs, account and account2.
I want to change the value of accountnum of account using a pointer in account2. However when I run the program it will keep on crashing. Please tell me what is wrong with this code and how I could fix it. | https://www.daniweb.com/programming/software-development/threads/394199/quick-question-about-pointers | CC-MAIN-2017-34 | refinedweb | 105 | 58.99 |
"
How snull Is Designed
Connecting to the Kernel
The net_device Structure in Detail
Opening and Closing
Packet Transmission
Packet Reception
The Interrupt Handler
Changes in Link State
The Socket Buffers
MAC Address Resolution
Custom ioctl Commands
Statistical Information
Multicasting
Backward Compatibility
Quick Reference
We are now through discussing char and block drivers and are ready to
move on to the fascinating world of networking. Network interfaces are
the third standard class of Linux devices, and this chapter describes
how they interact with the rest of the kernel.
The role of a network interface within the system is similar to that
of a mounted block device. A block device registers its features in
the blk_dev array and other kernel structures, and
it then "transmits" and "receives" blocks on request, by means of
its request function. Similarly, a network
interface must register itself in specific data structures in order to
be invoked when packets are exchanged with the outside world.
There are a few important differences between mounted disks and
packet-delivery interfaces. To begin with, a disk exists as a special
file in the /dev directory, whereas a network
interface has no such entry point. The normal file operations (read,
write, and so on) do not make sense when applied to network
interfaces, so it is not possible to apply the Unix
"everything is a file" approach to them. Thus, network interfaces
exist in their own namespace and export a different set of operations.
Although you may object that applications use the
read and write system calls
when using sockets, those calls act on a software object that is
distinct from the interface. Several hundred sockets can be
multiplexed on the same physical interface.
But the most important difference between the two is that block
drivers operate only in response to requests from the kernel, whereas
network drivers receive packets asynchronously from the outside.
Thus, while a block driver is asked to send a
buffer toward the kernel, the network device asks to push incoming packets toward the kernel. The kernel interface for
network drivers is designed for this different mode of operation.
Network drivers also have to be prepared to support a number of
administrative tasks, such as setting addresses, modifying
transmission parameters, and maintaining traffic and error statistics.
The API for network drivers reflects this need, and thus looks
somewhat different from the interfaces we have seen so far.
The network subsystem of the Linux kernel is designed to be completely
protocol independent. This applies to both networking protocols (IP
versus IPX or other protocols) and hardware protocols (Ethernet versus
token ring, etc.). Interaction between a network driver and the kernel
proper deals with one network packet at a time; this allows protocol
issues to be hidden neatly from the driver and the physical
transmission to be hidden from the protocol.
This chapter describes how the network interfaces fit in with the rest
of the Linux kernel and shows a memory-based modularized network
interface, which is called (you guessed it)
snull. To simplify the discussion, the
interface uses the Ethernet hardware protocol and transmits IP
packets. The knowledge you acquire from examining
snull can be readily applied to protocols
other than IP, and writing a non-Ethernet driver is only different in
tiny details related to the actual network protocol.
This chapter doesn't talk about IP numbering schemes, network protocols,
or other general networking concepts. Such topics are not (usually) of
concern to the driver writer, and it's impossible to offer a satisfactory
overview of networking technology in less than a few hundred pages. The
interested reader is urged to refer to other books describing networking
issues.
The networking subsystem has seen many changes over the years as the
kernel developers have striven to provide the best performance
possible. The bulk of this chapter describes network drivers as they
are implemented in the 2.4 kernel. Once again, the sample code works
on the 2.0 and 2.2 kernels as well, and we cover the differences
between those kernels and 2.4 at the end of the chapter.
One note on terminology is called for before getting into network
devices. The networking world uses the term
octet to refer to a group of eight bits, which is
generally the smallest unit understood by networking devices and
protocols. The term byte is almost never encountered in this context.
In keeping with standard usage, we will use octet when talking about
networking devices.
How snull Is Designed

This section discusses the design concepts that led to the
snull network interface. Although this
information might appear to be of marginal use, failing to understand
this driver might lead to problems while playing with the sample code.
The first, and most important, design decision was that the sample
interfaces should remain independent of real hardware, just like most
of the sample code used in this book. This constraint led to
something that resembles the loopback interface.
snull is not a loopback interface, however;
it simulates conversations with real remote hosts in order to better
demonstrate the task of writing a network driver. The Linux loopback
driver is actually quite simple; it can be found in
drivers/net/loopback.c.
Another feature of snull is that it
supports only IP traffic. This is a consequence of the internal
workings of the interface -- snull has
to look inside and interpret the packets to properly emulate a pair of
hardware interfaces. Real interfaces don't depend on the protocol
being transmitted, and this limitation of
snull doesn't affect the fragments of code
that are shown in this chapter.
The snull module creates two
interfaces. These interfaces are different from a simple loopback in
that whatever you transmit through one of the interfaces loops back to
the other one, not to itself. It looks like you have two external
links, but actually your computer is replying to itself.
Unfortunately, this effect can't be accomplished through IP-number
assignment alone, because the kernel wouldn't send out a packet
through interface A that was directed to its own interface B. Instead,
it would use the loopback channel without passing through
snull. To be able to establish a
communication through the snull interfaces,
the source and destination addresses need to be modified during data
transmission. In other words, packets sent through one of the
interfaces should be received by the other, but the receiver of the
outgoing packet shouldn't be recognized as the local host. The same
applies to the source address of received packets.
To achieve this kind of "hidden loopback," the
snull interface toggles the least
significant bit of the third octet of both the source and destination
addresses; that is, it changes both the network number and the host
number of class C IP numbers. The net effect is that packets sent to
network A (connected to sn0, the first interface)
appear on the sn1 interface as packets belonging to
network B.
To avoid dealing with too many numbers, let's assign symbolic names to
the IP numbers involved:
snullnet0 is the class C network that is connected
to the sn0 interface. Similarly,
snullnet1 is the network connected to
sn1. The addresses of these networks should differ
only in the least significant bit of the third octet.
local0 is the IP address assigned to the
sn0 interface; it belongs to
snullnet0. The address associated with
sn1 is
local1. local0 and
local1 must differ in the least significant bit of
their third octet and in the fourth octet.
remote0 is a host in snullnet0,
and its fourth octet is the same as that of local1.
Any packet sent to remote0 will reach
local1 after its class C address has been modified
by the interface code. The host remote1 belongs to
snullnet1, and its fourth octet is the same as that
of local0.
The operation of the snull interfaces is
depicted in Figure 14-1, in which the hostname
associated with each interface is printed near the interface name.
Here are possible values for the network numbers. Once you put these
lines in /etc/networks, you can call your
networks by name. The values shown were chosen from the range of
numbers reserved for private use.
snullnet0 192.168.0.0
snullnet1 192.168.1.0
The following are possible host numbers to put into
/etc/hosts:
192.168.0.1 local0
192.168.0.2 remote0
192.168.1.2 local1
192.168.1.1 remote1
The important feature of these numbers is that the host portion of
local0 is the same as that of
remote1, and the host portion of
local1 is the same as that of
remote0. You can use completely different numbers
as long as this relationship applies.
Be careful, however, if your computer is already connected to a
network. The numbers you choose might be real Internet or intranet
numbers, and assigning them to your interfaces will prevent
communication with the real hosts. For example, although the numbers
just shown are not routable Internet numbers, they could already be
used by your private network if it lives behind a firewall.
Whatever numbers you choose, you can correctly set up the interfaces
for operation by issuing the following commands:
ifconfig sn0 local0
ifconfig sn1 local1
case "`uname -r`" in
 2.0.*)
    route add -net snullnet0 dev sn0
    route add -net snullnet1 dev sn1
esac
There is no need to invoke route with 2.2
and later kernels because the route is automatically added. Also, you
may need to add the netmask 255.255.255.0 parameter
if the address range chosen is not a class C range.
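For example, with (hypothetical) addresses taken from the class A
private range, the commands would become:

ifconfig sn0 10.0.0.1 netmask 255.255.255.0
ifconfig sn1 10.0.1.2 netmask 255.255.255.0

The addresses must still obey the relationship described earlier: the
third octets differ only in their least significant bit, and the
fourth octets are swapped between the two interfaces.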
At this point, the "remote" end of the interface can be reached.
The following screendump shows how a host reaches
remote0 and remote1 through the
snull interface.
morgana% ping -c 2 remote0
64 bytes from 192.168.0.99: icmp_seq=0 ttl=64 time=1.6 ms
64 bytes from 192.168.0.99: icmp_seq=1 ttl=64 time=0.9 ms
2 packets transmitted, 2 packets received, 0% packet loss
morgana% ping -c 2 remote1
64 bytes from 192.168.1.88: icmp_seq=0 ttl=64 time=1.8 ms
64 bytes from 192.168.1.88: icmp_seq=1 ttl=64 time=0.9 ms
2 packets transmitted, 2 packets received, 0% packet loss
Note that you won't be able to reach any other "host" belonging to
the two networks because the packets are discarded by your computer
after the address has been modified and the packet has been
received. For example, a packet aimed at 192.168.0.32 will leave
through sn0 and reappear at sn1 with a destination address of
192.168.1.32, which is not a local address for the host computer.
As far as data transport is concerned, the
snull interfaces belong to the Ethernet
class.
snull emulates Ethernet because the vast
majority of existing networks -- at least the segments that a
workstation connects to -- are based on Ethernet technology, be it
10baseT, 100baseT, or gigabit. Additionally, the kernel offers some
generalized support for Ethernet devices, and there's no reason not to
use it. The advantage of being an Ethernet device is so strong that
even the plip interface (the interface that
uses the printer ports) declares itself as an Ethernet device.
The last advantage of using the Ethernet setup for
snull is that you can run
tcpdump on the interface to see the packets
go by. Watching the interfaces with
tcpdump can be a useful way to see how the
two interfaces work. (Note that on 2.0 kernels,
tcpdump will not work properly unless
snull's interfaces show up as
ethx. Load the driver with the
eth=1 option to use the regular Ethernet names,
rather than the default snx names.)
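For example, while a ping to remote0 runs in another shell, a command
like the following shows the packets going by:

tcpdump -i sn0

Running tcpdump -i sn1 at the same time should show the same packets
appearing on the other interface, with their addresses already
modified.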
As was mentioned previously, snull only
works with IP packets. This limitation is a result of the fact that
snull snoops in the packets and even
modifies them, in order for the code to work. The code modifies the
source, destination, and checksum in the IP header of each packet
without checking whether it actually conveys IP information. This
quick-and-dirty data modification destroys non-IP packets. If you
want to deliver other protocols through
snull, you must modify the module's source
code.
Connecting to the Kernel

We'll start looking at the structure of network drivers by dissecting
the snull source. Keeping the source code
for several drivers handy might help you follow the discussion and to
see how real-world Linux network drivers operate. As a place to
start, we suggest loopback.c,
plip.c, and 3c509.c, in
order of increasing complexity. Keeping
skeleton.c handy might help as well, although
this sample driver doesn't actually run. All these files live in
drivers/net, within the kernel source tree.
When a driver module is loaded into a running kernel, it requests
resources and offers facilities; there's nothing new in that. And
there's also nothing new in the way resources are requested. The
driver should probe for its device and its hardware location (I/O
ports and IRQ line) -- but without registering them -- as
described in "Installing an Interrupt Handler" in Chapter 9, "Interrupt Handling". The way a network driver is registered by its module
initialization function is different from char and block drivers.
Since there is no equivalent of major and minor numbers for network
interfaces, a network driver does not request such a number. Instead,
the driver inserts a data structure for each newly detected interface
into a global list of network devices.
Each interface is described by a struct net_device
item. The structures for sn0 and
sn1, the two snull interfaces, are declared like this:
struct net_device snull_devs[2] = {
    { init: snull_init, }, /* init, nothing more */
    { init: snull_init, }
};
The initialization shown seems quite simple -- it sets only one
field. In fact, the net_device structure is huge,
and we will be filling in other pieces of it later on. But it is not
helpful to cover the entire structure at this point; instead, we will
explain each field as it is used. For the interested reader, the
definition of the structure may be found in
<linux/netdevice.h>.
The first struct net_device field we will look at
is name, which holds the interface name (the string
identifying the interface). The driver can hardwire a name for the
interface or it can allow dynamic assignment, which works like this:
if the name contains a %d format string, the first
available name found by replacing that string with a small integer is
used. Thus, eth%d is turned into the first
available ethn name; the first
Ethernet interface is called eth0, and the others
follow in numeric order. The snull interfaces are called sn0 and
sn1 by default. However, if
eth=1 is specified at load time (causing the
integer variable snull_eth to be set to 1),
snull_init uses dynamic assignment, as follows:
if (!snull_eth) { /* call them "sn0" and "sn1" */
    strcpy(snull_devs[0].name, "sn0");
    strcpy(snull_devs[1].name, "sn1");
} else { /* use automatic assignment */
    strcpy(snull_devs[0].name, "eth%d");
    strcpy(snull_devs[1].name, "eth%d");
}
The other field we initialized is init, a function
pointer. Whenever you register a device, the kernel asks the driver
to initialize itself. Initialization means probing for the physical
interface and filling the net_device structure with
the proper values, as described in the following section. If
initialization fails, the structure is not linked to the global list
of network devices. This peculiar way of setting things up is most
useful during system boot; every driver tries to register its own
devices, but only devices that exist are linked to the list.
Because the real initialization is performed elsewhere, the
initialization function has little to do, and a single statement does
it:
for (i=0; i<2; i++)
if ( (result = register_netdev(snull_devs + i)) )
printk("snull: error %i registering device \"%s\"\n",
result, snull_devs[i].name);
else device_present++;
Probing for the device should be performed in the
init function for the interface (which is often
called the "probe" function). The single argument received by
init is a pointer to the device being
initialized; its return value is either 0 or a negative error code,
usually -ENODEV.
No real probing is performed for the snull interface, because it is not bound to any hardware. When you write a
real driver for a real interface, the usual rules for probing devices
apply, depending on the peripheral bus you are using. Also, you
should avoid registering I/O ports and interrupt lines at this point.
Hardware registration should be delayed until device open time; this
is particularly important if interrupt lines are shared with other
devices. You don't want your interface to be called every time
another device triggers an IRQ line just to reply "no, it's not
mine."
The main role of the initialization routine is to fill in the
dev structure for this device. Note that for
network devices, this structure is always put together at runtime.
Because of the way the network interface probing works, the
dev structure cannot be set up at compile time in
the same manner as a file_operations or
block_device_operations structure. So, on exit
from dev->init, the dev
structure should be filled with correct values. Fortunately, the
kernel takes care of some Ethernet-wide defaults through the function
ether_setup, which fills several fields in
struct net_device.
The core of snull_init is as follows:
ether_setup(dev); /* assign some of the fields */
dev->open = snull_open;
dev->stop = snull_release;
dev->set_config = snull_config;
dev->hard_start_xmit = snull_tx;
dev->do_ioctl = snull_ioctl;
dev->get_stats = snull_stats;
dev->rebuild_header = snull_rebuild_header;
dev->hard_header = snull_header;
#ifdef HAVE_TX_TIMEOUT
dev->tx_timeout = snull_tx_timeout;
dev->watchdog_timeo = timeout;
#endif
/* keep the default flags, just add NOARP */
dev->flags |= IFF_NOARP;
dev->hard_header_cache = NULL; /* Disable caching */
SET_MODULE_OWNER(dev);
The single unusual feature of the code is setting
IFF_NOARP in the flags. This specifies that the
interface cannot use ARP, the Address Resolution Protocol. ARP is a
low-level Ethernet protocol; its job is to turn IP addresses into
Ethernet Medium Access Control (MAC) addresses. Since the "remote"
systems simulated by snull do not really
exist, there is nobody available to answer ARP requests for them.
Rather than complicate snull with the
addition of an ARP implementation, we chose to mark the interface as
being unable to handle that protocol. The assignment to
hard_header_cache is there for a similar reason: it
disables the caching of the (nonexistent) ARP replies on this
interface. This topic is discussed in detail later in this chapter in
"MAC Address Resolution".
The initialization code also sets a couple of fields
(tx_timeout and watchdog_timeo)
that relate to the handling of transmission timeouts. We will cover
this topic thoroughly later in this chapter in "Transmission Timeouts".
Finally, this code calls SET_MODULE_OWNER, which
initializes the owner field of the
net_device structure with a pointer to the module
itself. The kernel uses this information in exactly the same way it
uses the owner field of the
file_operations structure -- to maintain the
module's usage count.
We'll look now at one more struct net_device field,
priv. Its role is similar to that of the
private_data pointer that we used for char drivers.
Unlike fops->private_data, this
priv pointer is allocated at initialization time
instead of open time, because the data item pointed to by
priv usually includes the statistical information
about interface activity. It's important that statistical information
always be available, even when the interface is down, because users
may want to display the statistics at any time by calling
ifconfig. The memory wasted by allocating
priv during initialization instead of on open is
irrelevant because most probed interfaces are constantly up and
running in the system. The snull module
declares a snull_priv data structure to be used for
priv:
struct snull_priv {
struct net_device_stats stats;
int status;
int rx_packetlen;
u8 *rx_packetdata;
int tx_packetlen;
u8 *tx_packetdata;
struct sk_buff *skb;
spinlock_t lock;
};
The structure includes an instance of struct
net_device_stats, which is the standard place to hold
interface statistics. The following lines in
snull_init allocate and initialize
dev->priv:
dev->priv = kmalloc(sizeof(struct snull_priv), GFP_KERNEL);
if (dev->priv == NULL)
return -ENOMEM;
memset(dev->priv, 0, sizeof(struct snull_priv));
spin_lock_init(& ((struct snull_priv *) dev->priv)->lock);
Nothing special happens when the module is unloaded. The module
cleanup function simply unregisters the interfaces from the list after
releasing memory associated with the private structure:
void snull_cleanup(void)
{
int i;
for (i=0; i<2; i++) {
kfree(snull_devs[i].priv);
unregister_netdev(snull_devs + i);
}
return;
}
Although char and block drivers are the same regardless of whether
they're modular or linked into the kernel, that's not the case for
network drivers.
When a driver is linked directly into the Linux kernel, it doesn't
declare its own net_device structures; the
structures declared in drivers/net/Space.c are
used instead. Space.c declares a linked list of
all the network devices, both driver-specific structures like
plip1 and general-purpose eth
devices. Ethernet drivers don't care about their
net_device structures at all, because they use the
general-purpose structures. Such general eth
device structures declare ethif_probe as their
init function. A programmer inserting a new
Ethernet interface in the mainstream kernel needs only to add a call
to the driver's initialization function to
ethif_probe. Authors of
non-eth drivers, on the other hand, insert their
net_device structures in
Space.c. In both cases only the source file
Space.c has to be modified if the driver must be
linked to the kernel proper.
At system boot, the network initialization code loops through all the
net_device structures and calls their probing
(dev->init) functions by passing them a pointer
to the device itself. If the probe function succeeds, the kernel
initializes
the next available net_device structure to use that
interface. This way of setting up drivers permits incremental
assignment of devices to the names eth0,
eth1, and so on, without changing the
name field of each device.
When a modularized driver is loaded, on the other hand, it declares
its own net_device structures (as we have seen in
this chapter), even if the interface it controls is an Ethernet
interface.
The curious reader can learn more about interface initialization by
looking at Space.c and
net_init.c.
The net_device structure is at the very core of the network driver layer; the following list describes the fields that a driver is most likely to use.

name
The name of the device. If the name contains a %d format string, the first available device name with the given base is used; assigned numbers start at zero.
irq
The assigned interrupt number. The value of dev->irq is printed by ifconfig when interfaces are listed. This value can usually be set at boot or load time and modified later using ifconfig.
dma
The DMA channel allocated by the device. The field makes sense only with some peripheral buses, like ISA. It is not used outside of the device driver itself, except for informational purposes (in ifconfig).
state
Device state. The field includes several flags. Drivers do not normally manipulate these flags directly; instead, a set of utility functions has been provided. These functions will be discussed shortly when we get into driver operations.
next
Pointer to the next device in the global linked list. This field shouldn't be touched by the driver.

Most of the remaining fields are assigned by interface-type setup routines such as ether_setup. The kernel exports a number of such functions, including the following:
ltalk_setup
Sets up the fields for a LocalTalk device.
fc_setup
Initializes for fiber channel devices.
fddi_setup
Configures an interface for a Fiber Distributed Data Interface (FDDI) network.
hippi_setup
Prepares fields for a High-Performance Parallel Interface (HIPPI) high-speed interconnect driver.
hard_header_len
The hardware header length, that is, the number of octets that lead the transmitted packet before the IP header, or other protocol information. The value of hard_header_len is 14 (ETH_HLEN) for Ethernet interfaces.
mtu
The maximum transfer unit (MTU). This field is used by the network layer to drive packet transmission. Ethernet has an MTU of 1500 octets (ETH_DATA_LEN).

dev_addr, addr_len
The device hardware (MAC) address and its length. The snull device doesn't use a physical interface, and it invents its own hardware address.

flags
Interface flags, a bit mask that includes the following values:
IFF_UP
This flag is read-only for the driver. The kernel turns it on when the interface is active and ready to transfer packets.
IFF_BROADCAST
This flag states that the interface allows broadcasting. Ethernet boards do.
IFF_LOOPBACK
This flag should be set only in the loopback interface. The kernel checks for IFF_LOOPBACK instead of hardwiring the lo name as a special interface.
IFF_POINTOPOINT
This flag signals that the interface is connected to a point-to-point link. It is set by ifconfig. For example, plip and the PPP driver have it set.
IFF_NOARP
This means that the interface can't perform ARP. For example, point-to-point interfaces don't need to run ARP, which would only impose additional traffic without retrieving useful information. snull runs without ARP capabilities, so it sets the flag.
IFF_MULTICAST
This flag is set by interfaces that are capable of multicast transmission. ether_setup sets IFF_MULTICAST by default, so if your driver does not support multicast, it must clear the flag at initialization time.
IFF_ALLMULTI
This flag tells the interface to receive all multicast packets. The kernel sets it when the host performs multicast routing, but only if IFF_MULTICAST is set. IFF_ALLMULTI is read-only for the interface.
We'll see the multicast flags used in "Multicasting" later
in this chapter.
IFF_MASTER, IFF_SLAVE
These flags are used by the load equalization code. The interface driver doesn't need to know about them.
IFF_PORTSEL, IFF_AUTOMEDIA
These flags signal that the device is capable of switching between multiple media types, for example, unshielded twisted pair (UTP) versus coaxial Ethernet cables. If IFF_AUTOMEDIA is set, the device selects the proper medium automatically.
IFF_DYNAMIC
This flag indicates that the address of this interface can change; it is used with dialup devices.
IFF_RUNNING
This flag indicates that the interface is up and running. It is mostly present for BSD compatibility; the kernel makes little use of it. Most network drivers need not worry about IFF_RUNNING.

As happens with char and block drivers, each network device declares the functions (methods) that act on it. The most important device methods are the following:
open
Opens the interface. The interface is opened whenever ifconfig activates it. The open method should register any system resource it needs (I/O ports, IRQ, DMA, etc.), turn on the hardware, and increment the module usage count.
stop
Stops the interface. The interface is stopped when it is brought down; operations performed at open time should be reversed.
hard_start_xmit
This method initiates the transmission of a packet. The full packet (protocol headers and all) is contained in a socket buffer (sk_buff) structure. Socket buffers are introduced later in this chapter.

hard_header
This function builds the hardware header from the source and destination hardware addresses that were previously resolved; its job is to organize the information passed to it into a device-specific header placed in front of the packet data.

rebuild_header
This function is used to rebuild the hardware header before a packet is transmitted. The default function used by Ethernet devices uses ARP to fill the packet with missing information. The rebuild_header method is used rarely in the 2.4 kernel; hard_header is used instead.
tx_timeout
This method is called when a packet transmission fails to complete within a reasonable period, on the assumption that an interrupt has been missed or the interface has locked up. It should handle the problem and resume packet transmission.
get_stats
Whenever an application needs to get statistics for the interface, this method is called. This happens, for example, when ifconfig or netstat -i is run. A sample implementation for snull is introduced in "Statistical Information" later in this chapter.
do_ioctl
Performs interface-specific ioctl commands. Implementation of those commands is described later in "Custom ioctl Commands". The corresponding field in struct net_device can be left as NULL if the interface doesn't need any interface-specific commands.
set_multicast_list
This method is called when the multicast list for the device changes and when the flags change. See "Multicasting" for further details and a sample implementation.
header_cache
This method is called to fill in the hh_cache structure with the results of an ARP query. Almost all drivers can use the default eth_header_cache implementation.
header_cache_update
This method updates the destination address in the hh_cache structure in response to a change. Ethernet devices use eth_header_cache_update.
A few utility fields round out the structure:

watchdog_timeo
The minimum time (in jiffies) that should pass before the networking layer decides that a transmission timeout has occurred and calls the driver's tx_timeout function.
priv
The equivalent of filp->private_data. The driver owns this pointer and can use it at will. Usually the private data structure includes a struct net_device_stats item. The field is used in "Initializing Each Device", earlier in this chapter.
mc_list, mc_count
These two fields are used in handling multicast transmission. mc_count is the count of items in mc_list. See "Multicasting" for further details.
xmit_lock, xmit_lock_owner
The xmit_lock is used to avoid multiple simultaneous calls to the driver's hard_start_xmit function. xmit_lock_owner is the number of the CPU that has obtained xmit_lock. The driver should make no changes to these fields.
owner
The module that "owns" this device structure; it is used to maintain the use count for the module.
There are other fields in struct net_device, but
they are not used by network drivers.
Our driver can probe for the interface at module load time or at
kernel boot. Before the interface can carry packets, however, the
kernel must open it and assign an address to it. The kernel will open
or close an interface in response to the
ifconfig command.
When ifconfig is used to assign an address
to the interface, it performs two tasks. First, it assigns the
address by means of ioctl(SIOCSIFADDR) (Socket I/O
Control Set Interface Address). Then it sets the
IFF_UP bit in dev->flag by
means of ioctl(SIOCSIFFLAGS) (Socket I/O Control
Set Interface Flags) to turn the interface on.
As far as the device is concerned,
ioctl(SIOCSIFADDR) does nothing. No driver function
is invoked -- the task is device independent, and the kernel
performs it. The latter command
(ioctl(SIOCSIFFLAGS)), though, calls the
open method for the device.
Similarly, when the interface is shut down,
ifconfig uses
ioctl(SIOCSIFFLAGS) to clear
IFF_UP, and the stop method is
called.
Both device methods return 0 in case of success and the usual negative
value in case of error.
As far as the actual code is concerned, the driver has to perform many
of the same tasks as the char and block drivers
do. open requests any system resources it needs
and tells the interface to come up; stop shuts
down the interface and releases system resources. There are a couple
of additional steps to be performed, however.
First, the hardware address needs to be copied from the hardware
device to dev->dev_addr before the interface can
communicate with the outside world. The hardware address can be
assigned at probe time or at open time, at the driver's will. The
snull software interface assigns it from
within open; it just fakes a hardware number
using an ASCII string of length ETH_ALEN, the
length of Ethernet hardware addresses.
The open method should also start the interface's
transmit queue (allow it to accept packets for transmission) once it
is ready to start sending data. The kernel provides a function to
start the queue:
void netif_start_queue(struct net_device *dev);
The open code for
snull looks like the following:
int snull_open(struct net_device *dev)
{
MOD_INC_USE_COUNT;
/* request_region(), request_irq(), .... (like fops->open) */
/*
* Assign the hardware address of the board: use "\0SNULx", where
* x is 0 or 1. The first byte is '\0' to avoid being a multicast
* address (the first byte of multicast addrs is odd).
*/
memcpy(dev->dev_addr, "\0SNUL0", ETH_ALEN);
dev->dev_addr[ETH_ALEN-1] += (dev - snull_devs); /* the number */
netif_start_queue(dev);
return 0;
}
As you can see, in the absence of real hardware, there is little to do
in the open method. The same is true of the
stop method; it just reverses the operations of
open. For this reason the function implementing
stop is often called close or release.
int snull_release(struct net_device *dev)
{
/* release ports, irq and such -- like fops->close */
netif_stop_queue(dev); /* can't transmit any more */
MOD_DEC_USE_COUNT;
return 0;
}
The function:
void netif_stop_queue(struct net_device *dev);
is the opposite of netif_start_queue; it marks
the device as being unable to transmit any more packets. The function
must be called when the interface is closed (in the
stop method) but can also be used to temporarily
stop transmission, as explained in the next section.
The most important tasks performed by network interfaces are data
transmission and reception. We'll start with transmission because it
is slightly easier to understand.
Whenever the kernel needs to transmit a data packet, it calls the
hard_start_transmit method to put the data on an
outgoing queue. Each packet handled by the kernel is contained in a
socket buffer structure (struct sk_buff), whose
definition is found in <linux/skbuff.h>. The
structure gets its name from the Unix abstraction used to represent a
network connection, the socket. Even if the
interface has nothing to do with sockets, each network packet belongs
to a socket in the higher network layers, and the input/output buffers
of any socket are lists of struct sk_buff
structures. The same sk_buff structure is used to
host network data throughout all the Linux network subsystems, but a
socket buffer is just a packet as far as the interface is concerned.
A pointer to sk_buff is usually called
skb, and we follow this practice both in the sample
code and in the text.
The socket buffer is a complex structure, and the kernel offers a
number of functions to act on it. The functions are described later in
"The Socket Buffers"; for now a few basic facts about
sk_buff are enough for us to write a working
driver.
The socket buffer passed to hard_start_xmit contains the physical
packet as it should appear on the media, complete with the
transmission-level headers. The interface doesn't need to modify the
data being transmitted. skb->data points to the
packet being transmitted, and skb->len is its
length, in octets.
The snull packet transmission code is as
follows; the physical transmission machinery has been isolated in
another function because every interface driver must implement it
according to the specific hardware being driven.
int snull_tx(struct sk_buff *skb, struct net_device *dev)
{
int len;
char *data;
struct snull_priv *priv = (struct snull_priv *) dev->priv;
len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
data = skb->data;
dev->trans_start = jiffies; /* save the timestamp */
/* Remember the skb, so we can free it at interrupt time */
priv->skb = skb;
/* actual delivery of data is device specific, and not shown here */
snull_hw_tx(data, len, dev);
return 0; /* Our simple device cannot fail */
}
The transmission function thus performs only some sanity checks on the
packet and transmits the data through the hardware-related
function. That function (snull_hw_tx) is omitted
here since it is entirely occupied with implementing the trickery of
the snull device (including manipulating
the source and destination addresses) and has little of interest to
authors of real network drivers. It is present, of course, in the
sample source for those who want to go in and see how it works.
The hard_start_xmit function is protected from
concurrent calls by a spinlock (xmit_lock) in the
net_device structure. As soon as the function
returns, however, it may be called again. The function returns when
the software is done instructing the hardware about packet
transmission, but hardware transmission will likely not have been
completed. This is not an issue with
snull, which does all of its work using the
CPU, so packet transmission is complete before the transmission
function returns.
Real hardware interfaces, on the other hand, transmit packets
asynchronously and have a limited amount of memory available to store
outgoing packets. When that memory is exhausted (which, for some
hardware, will happen with a single outstanding packet to transmit),
the driver will need to tell the networking system not to start any
more transmissions until the hardware is ready to accept new data.
This notification is accomplished by calling
netif_stop_queue, the function introduced earlier
to stop the queue. Once your driver has stopped its queue, it
must arrange to restart the queue at some point
in the future, when it is again able to accept packets for
transmission. To do so, it should call:
void netif_wake_queue(struct net_device *dev);
This function is just like netif_start_queue,
except that it also pokes the networking system to make it start
transmitting packets again.
Most modern network interfaces maintain an internal queue with
multiple packets to transmit; in this way they can get the best
performance from the network. Network drivers for these devices
support having multiple transmissions outstanding at any given time,
but device memory can fill up whether or not the hardware supports
multiple outstanding transmissions. Whenever device memory fills to
the point that there is no room for the largest possible packet, the
driver should stop the queue until space becomes available again.
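The resulting flow-control pattern, sketched here for a hypothetical device (my_hw_start_tx and my_hw_room are placeholders for hardware-specific code), is to stop the queue in hard_start_xmit when the device fills up and to wake it from the transmit-done interrupt:

int my_tx(struct sk_buff *skb, struct net_device *dev)
{
    my_hw_start_tx(skb->data, skb->len, dev);  /* hand the packet over */
    dev->trans_start = jiffies;
    if (!my_hw_room(dev, ETH_FRAME_LEN))   /* no room for a full packet? */
        netif_stop_queue(dev);             /* hold off the network layer */
    return 0;
}

/* later, in the interrupt handler, on a transmit-done event: */
if (netif_queue_stopped(dev) && my_hw_room(dev, ETH_FRAME_LEN))
    netif_wake_queue(dev);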
Most drivers that deal with real hardware have to be prepared for that
hardware to fail to respond occasionally. Interfaces can forget what
they are doing, or the system can lose an interrupt. This sort of
problem is common with some devices designed to run on personal
computers.
Many drivers handle this problem by setting timers; if the operation
has not completed by the time the timer expires, something is wrong.
The network system, as it happens, is essentially a complicated
assembly of state machines controlled by a mass of timers. As such,
the networking code is in a good position to detect transmission
timeouts automatically.
Thus, network drivers need not worry about detecting such problems
themselves. Instead, they need only set a timeout period, which goes
in the watchdog_timeo field of the
net_device structure. This period, which is in
jiffies, should be long enough to account for normal transmission
delays (such as collisions caused by congestion on the network media).
If the current system time exceeds the device's
trans_start time by at least the timeout period,
the networking layer will eventually call the driver's
tx_timeout method. That method's job is to do
whatever is needed to clear up the problem and to ensure the proper
completion of any transmissions that were already in progress. It is
important, in particular, that the driver not lose track of any socket
buffers that have been entrusted to it by the networking code.
snull has the ability to simulate
transmitter lockups, which is controlled by two load-time parameters:
static int lockup = 0;
MODULE_PARM(lockup, "i");
#ifdef HAVE_TX_TIMEOUT
static int timeout = SNULL_TIMEOUT;
MODULE_PARM(timeout, "i");
#endif
If the driver is loaded with the parameter
lockup=n, a lockup will be simulated once every
n packets transmitted, and the
watchdog_timeo field will be set to the given
timeout value. When simulating lockups,
snull also calls
netif_stop_queue to prevent other transmission
attempts from occurring.
The snull transmission timeout handler
looks like this:
void snull_tx_timeout (struct net_device *dev)
{
struct snull_priv *priv = (struct snull_priv *) dev->priv;
PDEBUG("Transmit timeout at %ld, latency %ld\n", jiffies,
jiffies - dev->trans_start);
priv->status = SNULL_TX_INTR;
snull_interrupt(0, dev, NULL);
priv->stats.tx_errors++;
netif_wake_queue(dev);
return;
}
When a transmission timeout happens, the driver must mark the error in the
interface statistics and arrange for the device to be reset to a sane
state so that new packets can be transmitted. When a timeout happens
in snull, the driver calls
snull_interrupt to fill in the "missing"
interrupt and restarts the transmit queue with
netif_wake_queue.
Receiving data from the network is trickier than transmitting it
because an sk_buff must be allocated and handed off
to the upper layers from within an interrupt handler. The usual way
to receive a packet is through an interrupt, unless the interface is a
purely software one like snull or the
loopback interface. Although it is possible to write polling drivers,
and a few exist in the official kernel, interrupt-driven operation is
much better, both in terms of data throughput and computational
demands. Because most network interfaces are interrupt driven, we
won't talk about the polling implementation, which just exploits
kernel timers.
The implementation of snull separates the
"hardware" details from the device-independent housekeeping. The
function snull_rx is thus called after the
hardware has received the packet and it is already in the computer's
memory. snull_rx receives a pointer to the data
and the length of the packet; its sole responsibility is to send the
packet and some additional information to the upper layers of
networking code. This code is independent of the way the data pointer
and length are obtained.
void snull_rx(struct net_device *dev, int len, unsigned char *buf)
{
struct sk_buff *skb;
struct snull_priv *priv = (struct snull_priv *) dev->priv;
/*
* The packet has been retrieved from the transmission
* medium. Build an skb around it, so upper layers can handle it
*/
skb = dev_alloc_skb(len+2);
if (!skb) {
printk("snull rx: low on mem - packet dropped\n");
priv->stats.rx_dropped++;
return;
}
memcpy(skb_put(skb, len), buf, len);
/* Write metadata, and then pass to the receive level */
skb->dev = dev;
skb->protocol = eth_type_trans(skb, dev);
skb->ip_summed = CHECKSUM_UNNECESSARY; /* don't check it */
priv->stats.rx_packets++;
priv->stats.rx_bytes += len;
netif_rx(skb);
return;
}
The function is sufficiently general to act as a template for any
network driver, but some explanation is necessary before you can reuse
this code fragment with confidence.
The first step is to allocate a buffer to hold the packet. Note that
the buffer allocation function (dev_alloc_skb)
needs to know the data length. The information is used by the function
to allocate space for the buffer. dev_alloc_skb calls kmalloc with atomic priority; it can thus
be used safely at interrupt time. The kernel offers other interfaces
to socket-buffer allocation, but they are not worth introducing here;
socket buffers are explained in detail in "The Socket Buffers",
later in this chapter.
Once there is a valid skb pointer, the packet data
is copied into the buffer by calling memcpy; the
skb_put function updates the end-of-data pointer
in the buffer and returns a pointer to the newly created space.
If you are writing a high-performance driver for an interface that can
do full bus-mastering I/O, there is a possible optimization that is
worth considering here. Some drivers allocate socket buffers for
incoming packets prior to their reception, then instruct the interface
to place the packet data directly into the socket buffer's space. The
networking layer cooperates with this strategy by allocating all
socket buffers in DMA-capable space. Doing things this way avoids the
need for a separate copy operation to fill the socket buffer, but
requires being careful with buffer sizes because you won't know in
advance how big the incoming packet is. The implementation of a
change_mtu method is also important in this
situation, since it allows the driver to respond to a change in the
maximum packet size.
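The change_mtu method itself is usually short; a minimal sketch for an Ethernet-like interface might be:

int my_change_mtu(struct net_device *dev, int new_mtu)
{
    /* enforce Ethernet-like limits; adjust the range for your hardware */
    if ((new_mtu < 68) || (new_mtu > 1500))
        return -EINVAL;
    dev->mtu = new_mtu;
    return 0;
}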
The network layer needs to have some information spelled out before it
will be able to make sense of the packet. To this end, the
dev and protocol fields must be
assigned before the buffer is passed upstairs. Then we need to
specify how checksumming is to be performed or has been performed on
the packet (snull does not need to perform
any checksums). The possible policies for
skb->ip_summed are as follows:

CHECKSUM_HW
The device has already performed checksums in hardware. An example of a hardware checksum is the SPARC HME interface.

CHECKSUM_NONE
Checksums are still to be verified, and the task must be accomplished by system software. This is the default in newly allocated buffers.

CHECKSUM_UNNECESSARY
Don't do any checksums. This is the policy in snull and in the loopback interface.
Finally, the driver updates its statistics counter to record that a
packet has been received. The statistics structure is made up of
several fields; the most important are rx_packets,
rx_bytes, tx_packets, and
tx_bytes, which contain the number of packets
received and transmitted and the total number of octets
transferred. All the fields are thoroughly described in "Statistical Information" later in this chapter.
The last step in packet reception is performed by
netif_rx, which hands off the socket buffer to
the upper layers.
Most hardware interfaces are controlled by means of an interrupt
handler. The interface interrupts the processor to signal one of two
possible events: a new packet has arrived or transmission of an
outgoing packet is complete. This generalization doesn't always apply,
but it does account for all the problems related to asynchronous
packet transmission. Parallel Line Internet Protocol (PLIP) and
Point-to-Point Protocol (PPP) are examples of interfaces that don't
fit this generalization. They deal with the same events, but the
low-level interrupt handling is slightly different.
The usual interrupt routine can tell the difference between a
new-packet-arrived interrupt and a done-transmitting notification by
checking a status register found on the physical device. The
snull interface works similarly, but its
status word is implemented in software and lives in
dev->priv. The interrupt handler for a network
interface looks like this:
void snull_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
int statusword;
struct snull_priv *priv;
/*
* As usual, check the "device" pointer for shared handlers.
* Then assign "struct device *dev"
*/
struct net_device *dev = (struct net_device *)dev_id;
/* ... and check with hw if it's really ours */
if (!dev /*paranoid*/ ) return;
/* Lock the device */
priv = (struct snull_priv *) dev->priv;
spin_lock(&priv->lock);
/* retrieve statusword: real netdevices use I/O instructions */
statusword = priv->status;
if (statusword & SNULL_RX_INTR) {
/* send it to snull_rx for handling */
snull_rx(dev, priv->rx_packetlen, priv->rx_packetdata);
}
if (statusword & SNULL_TX_INTR) {
/* a transmission is over: free the skb */
priv->stats.tx_packets++;
priv->stats.tx_bytes += priv->tx_packetlen;
dev_kfree_skb(priv->skb);
}
/* Unlock the device and we are done */
spin_unlock(&priv->lock);
return;
}
The handler's first task is to retrieve a pointer to the correct
struct net_device. This pointer usually comes from
the dev_id pointer received as an argument.
The interesting part of this handler deals with the "transmission
done" situation. In this case, the statistics are updated, and
dev_kfree_skb is called to return the (no longer
needed) socket buffer to the system. If your driver has temporarily
stopped the transmission queue, this is the place to restart it with
netif_wake_queue.
Packet reception, on the other hand, doesn't need any special
interrupt handling. Calling snull_rx (which we
have already seen) is all that's required.
Network connections, by definition, deal with the world outside the
local system. They are thus often affected by outside events, and
they can be transient things. The networking subsystem needs to know
when network links go up or down, and it provides a few functions that
the driver may use to convey that information.
Most networking technologies involving an actual, physical connection
provide a carrier state; the presence of the
carrier means that the hardware is present and ready to function.
Ethernet adapters, for example, sense the carrier signal on the wire;
when a user trips over the cable, that carrier vanishes, and the link
goes down. By default, network devices are assumed to have a carrier
signal present. The driver can change that state explicitly, however,
with these functions:
void netif_carrier_off(struct net_device *dev);
void netif_carrier_on(struct net_device *dev);
If your driver detects a lack of carrier on one of its devices, it
should call netif_carrier_off to inform the
kernel of this change. When the carrier returns,
netif_carrier_on should be called. Some drivers
also call netif_carrier_off when making major
configuration changes (such as media type); once the adapter has
finished resetting itself, the new carrier will be detected and
traffic can resume.
An integer function also exists:
int netif_carrier_ok(struct net_device *dev);
This can be used to test the current carrier state (as reflected in the
device structure).
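A driver that polls its hardware for link status might tie these calls together with a sketch like the following, where my_read_link_status is a placeholder for device-specific code:

void my_check_link(struct net_device *dev)
{
    int up = my_read_link_status(dev);   /* placeholder: query the adapter */

    if (up && !netif_carrier_ok(dev))
        netif_carrier_on(dev);           /* the link came back */
    else if (!up && netif_carrier_ok(dev))
        netif_carrier_off(dev);          /* the link went away */
}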
We've now discussed most of the issues related to network interfaces.
What's still missing is some more detailed discussion of the
sk_buff structure. The structure is at the core of
the network subsystem of the Linux kernel, and we now introduce both
the main fields of the structure and the functions used to act on it.
Although there is no strict need to understand the internals of
sk_buff, the ability to look at its contents can be
helpful when you are tracking down problems and when you are trying to
optimize the code. For example, if you look in
loopback.c, you'll find an optimization based on
knowledge of the sk_buff internals. The usual
warning applies here: if you write code that takes advantage of
knowledge of the sk_buff structure, you should be
prepared to see it break with future kernel releases. Still,
sometimes the performance advantages justify the additional
maintenance cost.
We are not going to describe the whole structure here, just the fields
that might be used from within a driver. If you want to see more, you
can look at <linux/skbuff.h>, where the
structure is defined and the functions are prototyped. Additional
details about how the fields and functions are used can be easily
retrieved by grepping in the kernel sources.
The fields introduced here are the ones a driver might
need to access. They are listed in no particular order.
rx_dev, dev
The devices receiving and sending this buffer, respectively.
Pointers to the various levels of headers contained within the
packet. Each field of the unions is a pointer to a different type of
data structure. h hosts pointers to transport
layer headers (for example, struct tcphdr *th);
nh includes network layer headers (such as
struct iphdr *iph); and mac
collects pointers to link layer headers (such as struct
ethhdr *ethernet).
If your driver needs to look at the source and destination addresses
of a TCP packet, it can find them in skb->h.th.
See the header file for the full set of header types that can be
accessed in this way.
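For instance, a fragment that digs the addresses and ports out of a received TCP packet could read as follows (assuming the header pointers have already been set up by the networking code):

struct iphdr *iph = skb->nh.iph;            /* network-layer header */
struct tcphdr *th = skb->h.th;              /* transport-layer header */
u32 src_ip = iph->saddr, dst_ip = iph->daddr;
u16 src_port = ntohs(th->source), dst_port = ntohs(th->dest);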
Note that network drivers are responsible for setting the
mac pointer for incoming packets. This task is
normally handled by eth_type_trans, but
non-Ethernet drivers will have to set
skb->mac.raw directly, as shown later in "Non-Ethernet Headers".
head, data, tail, end
Pointers used to address the data in the
packet. head points to the beginning of the
allocated space, data is the beginning of the valid
octets (and is usually slightly greater than head),
tail is the end of the valid octets, and
end points to the maximum address
tail can reach. Another way to look at it is that
the available buffer space is
skb->end - skb->head, and the
currently used data space is
skb->tail - skb->data.
len
The length of the data itself (skb->tail -
skb->data).
ip_summed
The checksum policy for this packet. The field is set by the driver
on incoming packets, as was described in "Packet Reception".
pkt_type
Packet classification used in delivering it. The driver is
responsible for setting it to PACKET_HOST (this
packet is for me), PACKET_BROADCAST,
PACKET_MULTICAST, or
PACKET_OTHERHOST (no, this packet is not for me).
Ethernet drivers don't modify pkt_type explicitly
because eth_type_trans does it for them.
The remaining fields in the structure are not particularly
interesting. They are used to maintain lists of buffers, to account
for memory belonging to the socket that owns the buffer, and so on.
Network drivers act on the sk_buff
structure by means of the official interface functions. Many functions
operate on socket buffers; here are the most interesting ones:
Allocate a buffer. The alloc_skb function
allocates a buffer and initializes both
skb->data and skb->tail to
skb->head. The
dev_alloc_skb function is a shortcut that calls
alloc_skb with GFP_ATOMIC
priority and reserves some space between
skb->head and skb->data.
This data space is used for optimizations within the network layer and
should not be touched by the driver.
Free a buffer. The kfree_skb call is used
internally by the kernel. A driver should use
dev_kfree_skb instead, which is intended to be
safe to call from driver context.
These inline functions update the tail and
len fields of the sk_buff
structure; they are used to add data to the end of the buffer. Each
function's return value is the previous value of
skb->tail (in other words, it points to the data
space just created). Drivers can use the return value to copy data by
invoking ins(ioaddr, skb_put(...)) or
memcpy(skb_put(...), data, len). The difference
between the two functions is that skb_put checks to
be sure that the data will fit in the buffer, whereas
__skb_put omits the check.
These functions decrement skb->data and
increment skb->len. They are similar to
skb_put, except that data is added to the
beginning of the packet instead of the end. The return value points
to the data space just created. The functions are used to add a
hardware header before transmitting a packet. Once again,
__skb_push differs in that it does not
check for adequate available space.
skb_tailroom
This function returns the amount of space available for putting data
in the buffer. If a driver puts more data into the buffer than it can
hold, the system panics. Although you might object that a
printk would be sufficient to tag the error,
memory corruption is so harmful to the system that the developers
decided to take definitive action. In practice, you shouldn't need to
check the available space if the buffer has been correctly allocated.
Since drivers usually get the packet size before allocating a buffer,
only a severely broken driver will put too much data in the buffer,
and a panic might be seen as due punishment.
skb_headroom
Returns the amount of space available in front of
data, that is, how many octets one can "push" to
the buffer.
skb_reserve
This function increments both data and
tail. The function can be used to reserve headroom
before filling the buffer. Most Ethernet interfaces reserve 2 bytes
in front of the packet; thus, the IP header is aligned on a
16-byte boundary, after a 14-byte Ethernet
header. snull does this as well, although the
instruction was not shown in "Packet Reception" to avoid
introducing extra concepts at that point.
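Had that code been shown, the allocation lines in snull_rx would have read, in essence:

skb = dev_alloc_skb(len + 2);
skb_reserve(skb, 2);   /* align the IP header on a 16-byte boundary */
memcpy(skb_put(skb, len), buf, len);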
skb_pull
Removes data from the head of the packet. The driver won't need to use
this function, but it is included here for completeness. It
decrements skb->len and increments
skb->data; this is how the hardware header
(Ethernet or equivalent) is stripped from the beginning of incoming
packets.
The kernel defines several other functions that act on socket buffers,
but they are meant to be used in higher layers of networking code, and
the driver won't need them.
An interesting issue with Ethernet communication is how to associate
the MAC addresses (the interface's unique hardware ID) with the IP
number. Most protocols have a similar problem, but we concentrate on
the Ethernet-like case here. We'll try to offer a complete description
of the issue, so we will show three situations: ARP, Ethernet headers
without ARP (like plip), and non-Ethernet
headers.
The usual way to deal with address resolution is by using ARP, the
Address Resolution Protocol. Fortunately, ARP is managed by the
kernel, and an Ethernet interface doesn't need to do anything special
to support ARP. As long as dev->addr and
dev->addr_len are correctly assigned at open
time, the driver doesn't need to worry about resolving IP numbers to
physical addresses; ether_setup assigns the
correct device methods to dev->hard_header and
dev->rebuild_header.
Although the kernel normally handles the details of address resolution
(and caching of the results), it calls upon the interface driver to
help in the building of the packet. After all, the driver knows about
the details of the physical layer header, while the authors of the
networking code have tried to insulate the rest of the kernel from
that knowledge. To this end, the kernel calls the driver's
hard_header method to lay out the packet with the
results of the ARP query. Normally, Ethernet driver writers need not
know about this process -- the common Ethernet code takes care of
everything.
Simple point-to-point network interfaces such
as plip might benefit from using Ethernet
headers, while avoiding the overhead of sending ARP packets back and
forth. The sample code in snull also falls
into this class of network devices. snull cannot use ARP because the driver changes IP addresses in packets
being transmitted, and ARP packets exchange IP addresses as well.
Although we could have implemented a simple ARP reply generator with
little trouble, it is more illustrative to show how to handle
physical-layer headers directly.
If your device wants to use the usual hardware header without running
ARP, you need to override the default
dev->hard_header method. This is how
snull implements it, as a very short
function.
int snull_header(struct sk_buff *skb, struct net_device *dev,
unsigned short type, void *daddr, void *saddr,
unsigned int len)
{
struct ethhdr *eth = (struct ethhdr *)skb_push(skb,ETH_HLEN);
eth->h_proto = htons(type);
memcpy(eth->h_source, saddr ? saddr : dev->dev_addr, dev->addr_len);
memcpy(eth->h_dest, daddr ? daddr : dev->dev_addr, dev->addr_len);
eth->h_dest[ETH_ALEN-1] ^= 0x01; /* dest is us xor 1 */
return (dev->hard_header_len);
}
The function simply takes the information provided by the kernel and
formats it into a standard Ethernet header. It also toggles a bit in
the destination Ethernet address, for reasons described later.
When a packet is received by the interface, the hardware header is
used in a couple of ways by eth_type_trans. We
have already seen this call in snull_rx:
skb->protocol = eth_type_trans(skb, dev);
The function extracts the protocol identifier
(ETH_P_IP in this case) from the Ethernet header;
it also assigns skb->mac.raw, removes the
hardware header from packet data (with skb_pull),
and sets skb->pkt_type. This last item defaults
to PACKET_HOST at skb allocation
(which indicates that the packet is directed to this host), and
eth_type_trans changes it according to the
Ethernet destination address. If that address does not match the
address of the interface that received it, the
pkt_type field will be set to
PACKET_OTHERHOST. Subsequently, unless the
interface is in promiscuous mode, netif_rx will
drop any packet of type PACKET_OTHERHOST. For this
reason, snull_header is careful to make the
destination hardware address match that of the "receiving"
interface.
If your interface is a point-to-point link, you won't want to receive
unexpected multicast packets. To avoid this problem, remember that a
destination address whose first octet has 0 as the least significant
bit (LSB) is directed to a single host (i.e., it is either
PACKET_HOST or
PACKET_OTHERHOST). The
plip driver uses 0xfc as the first octet of
its hardware address, while snull uses
0x00. Both addresses result in a working Ethernet-like point-to-point
link.
We have just seen that the hardware header contains some information
in addition to the destination address, the most important being the
communication protocol. We now describe how hardware headers can be
used to encapsulate relevant information. If you need to know the
details, you can extract them from the kernel sources or the technical
documentation for the particular transmission medium. Most driver
writers will be able to ignore this discussion and just use the
Ethernet implementation.
It's worth noting that not all information has to be provided by every
protocol. A point-to-point link such as plip or snull could avoid transferring the whole
Ethernet header without losing generality. The
hard_header device method, shown earlier as
implemented by snull_header, receives the
delivery information -- both protocol-level and hardware
addresses -- from the kernel. It also receives the 16-bit protocol
number in the type argument; IP, for example, is
identified by ETH_P_IP. The driver is expected to
correctly deliver both the packet data and the protocol number to the
receiving host. A point-to-point link could omit addresses from its
hardware header, transferring only the protocol number, because
delivery is guaranteed independent of the source and destination
addresses. An IP-only link could even avoid transmitting any hardware
header whatsoever.
When the packet is picked up at the other end of the link, the
receiving function in the driver should correctly set the fields
skb->protocol,
skb->pkt_type, and
skb->mac.raw.
skb->mac.raw is a char pointer used by the
address-resolution mechanism implemented in higher layers of the
networking code (for instance,
net/ipv4/arp.c). It must point to a machine
address that matches dev->type. The possible
values for the device type are defined in
<linux/if_arp.h>; Ethernet interfaces use
ARPHRD_ETHER. For example, here is how
eth_type_trans deals with the Ethernet header for
received packets:
skb->mac.raw = skb->data;
skb_pull(skb, dev->hard_header_len);
In the simplest case (a point-to-point link with no headers),
skb->mac.raw can point to a static buffer
containing the hardware address of this interface,
protocol can be set to ETH_P_IP,
and packet_type can be left with its default value
of PACKET_HOST.
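In code, the receive path of such a minimal link might end with a fragment like this (my_hw_addr being a static buffer that holds the interface's own address):

skb->dev = dev;
skb->mac.raw = my_hw_addr;          /* static buffer with our address */
skb->protocol = htons(ETH_P_IP);    /* an IP-only link */
/* skb->pkt_type keeps its default, PACKET_HOST */
netif_rx(skb);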
Because every hardware type is unique, it is hard to give more
specific advice than already discussed. The kernel is full of
examples, however. See, for example, the AppleTalk driver
(drivers/net/appletalk/cops.c), the infrared
drivers (such as drivers/net/irda/smc_ircc.c), or
the PPP driver (drivers/net/ppp_generic.c).
We have seen that the ioctl system call is
implemented for sockets; SIOCSIFADDR and
SIOCSIFMAP are examples of "socket
ioctls." Now let's see how the third argument of
the system call is used by networking code.
When the ioctl system call is invoked on a
socket, the command number is one of the symbols defined in
<linux/sockios.h>, and the function
sock_ioctl directly invokes a protocol-specific
function (where "protocol" refers to the main network protocol being
used, for example, IP or AppleTalk).
Any ioctl command that is not recognized by the
protocol layer is passed to the device layer. These device-related
ioctl commands accept a third argument from user
space, a struct ifreq *. This structure is defined
in <linux/if.h>. The
SIOCSIFADDR and SIOCSIFMAP
commands actually work on the ifreq structure. The
extra argument to SIOCSIFMAP, although defined as
ifmap, is just a field of ifreq.
In addition to using the standardized calls, each interface can define
its own ioctl commands. The
plip interface, for example, allows the
interface to modify its internal timeout values via
ioctl. The ioctl implementation for sockets recognizes 16 commands as private to the
interface: SIOCDEVPRIVATE through
SIOCDEVPRIVATE+15.
When one of these commands is recognized,
dev->do_ioctl is called in the relevant
interface driver. The function receives the same struct
ifreq * pointer that the general-purpose
ioctl function uses:
int (*do_ioctl)(struct net_device *dev, struct ifreq *ifr, int cmd);
The ifr pointer points to a kernel-space address
that holds a copy of the structure passed by the user. After
do_ioctl returns, the structure is copied back to
user space; the driver can thus use the private commands to both
receive and return data.
The device-specific commands can choose to use the fields in
struct ifreq, but they already convey a
standardized meaning, and it's unlikely that the driver can adapt the
structure to its needs. The field ifr_data is a
caddr_t item (a pointer) that is meant to be used
for device-specific needs. The driver and the program used to invoke
its ioctl commands should agree about the use of
ifr_data. For example,
pppstats uses device-specific commands to
retrieve information from the ppp interface
driver.
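As an illustration, a do_ioctl method for a hypothetical driver that exports a single private command returning a status word through ifr_data might be sketched as:

int my_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
    struct my_priv *priv = (struct my_priv *) dev->priv;  /* hypothetical */
    u32 status;

    switch (cmd) {
    case SIOCDEVPRIVATE:                 /* first of the 16 private commands */
        status = priv->hw_status;        /* hypothetical status field */
        /* ifr_data points into user space; copy the result out */
        if (copy_to_user(ifr->ifr_data, &status, sizeof(status)))
            return -EFAULT;
        return 0;
    default:
        return -EOPNOTSUPP;
    }
}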
It's not worth showing a more complete implementation of do_ioctl here, but with the information in this chapter and the kernel examples, you should be able to write one when you need it. Note, however, that the plip implementation uses ifr_data incorrectly and should not be used as an example for an ioctl implementation.
The last method a driver needs is get_stats. This
method returns a pointer to the statistics for the device. Its
implementation is pretty easy; the one shown works even when several
interfaces are managed by the same driver, because the statistics are
hosted within the device data structure.
struct net_device_stats *snull_stats(struct net_device *dev)
{
struct snull_priv *priv = (struct snull_priv *) dev->priv;
return &priv->stats;
}
The real work needed to return meaningful statistics is distributed
throughout the driver, where the various fields are updated. The
following list shows the most interesting fields in struct
net_device_stats.
rx_packets, tx_packets
These fields hold the total number of incoming and outgoing packets successfully transferred by the interface.
rx_bytes, tx_bytes
The number of bytes received and transmitted by the interface. These fields were added in the 2.2 kernel.
rx_errors, tx_errors
The number of erroneous receptions and transmissions. There's no end
of things that can go wrong with packet transmission, and the
net_device_stats structure includes six counters
for specific receive errors and five for transmit errors. See
<linux/netdevice.h> for the full list. If
possible, your driver should maintain detailed error statistics,
because they can be most helpful to system administrators trying to
track down a problem.
rx_dropped, tx_dropped
The number of packets dropped during reception and transmission. Packets are dropped when there's no memory available for packet data. tx_dropped is rarely used.
collisions
The number of collisions due to congestion on the medium.
multicast
The number of multicast packets received.
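As an example of how the error counters are maintained, a receive error path in a driver built like snull might look like this sketch (my_hw_rx_error is a placeholder for a hardware status check):

if (my_hw_rx_error(dev)) {
    priv->stats.rx_errors++;         /* the overall error count */
    priv->stats.rx_crc_errors++;     /* plus one of the detailed counters */
    return;
}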
It is worth repeating that the get_stats method
can be called at any time -- even when the interface is
down -- so the driver should not release statistic information when
running the stop method.
A multicast packet is a network packet meant to be
received by more than one host, but not by all hosts. This
functionality is obtained by assigning special hardware addresses to
groups of hosts. Packets directed to one of the special addresses
should be received by all the hosts in that group. In the case of
Ethernet, a multicast address has the least significant bit of the
first address octet set in the destination address, while every device
board has that bit clear in its own hardware address.
The tricky part of dealing with host groups and hardware addresses is
performed by applications and the kernel, and the interface driver
doesn't need to deal with these problems.
Transmission of multicast packets is a simple problem because they
look exactly like any other packets. The interface transmits them over
the communication medium without looking at the destination
address. It's the kernel that has to assign a correct hardware
destination address; the hard_header device
method, if defined, doesn't need to look in the data it arranges.
The kernel handles the job of tracking which multicast addresses are
of interest at any given time. The list can change frequently, since
it is a function of the applications that are running at any given
time and the user's interest. It is the driver's job to accept the
list of interesting multicast addresses and deliver to the kernel any
packets sent to those addresses. How the driver implements the
multicast list is somewhat dependent on how the underlying hardware
works. Typically, hardware belongs to one of three classes, as far as
multicast is concerned:
Interfaces that cannot deal with multicast. These interfaces either
receive packets directed specifically to their hardware address (plus
broadcast packets), or they receive every packet. They can receive
multicast packets only by receiving every packet, thus potentially
overwhelming the operating system with a huge number of
"uninteresting" packets. You don't usually count these interfaces
as multicast capable, and the driver won't set
IFF_MULTICAST in
dev->flags.
Point-to-point interfaces are a special case, because they always
receive every packet without performing any hardware filtering.
Interfaces that can tell multicast packets from other packets
(host-to-host or broadcast). These interfaces can be instructed to
receive every multicast packet and let the software determine if this
host is a valid recipient. The overhead introduced in this case is
acceptable, because the number of multicast packets on a typical
network is low.
Interfaces that can perform hardware detection of multicast addresses.
These interfaces can be passed a list of multicast addresses for which
packets are to be received, and they will ignore other multicast
packets. This is the optimum case for the kernel, because it doesn't
waste processor time dropping "uninteresting" packets received by
the interface.
The kernel tries to exploit the capabilities of high-level interfaces
by supporting at its best the third device class, which is the most
versatile. Therefore, the kernel notifies the driver whenever the
list of valid multicast addresses is changed, and it passes the new
list to the driver so it can update the hardware filter according to
the new information.
Support for multicast packets is made up of several items: a device
method, a data structure, and device flags.
set_multicast_list
This device method is called whenever the list of machine addresses
associated with the device changes. It is also called when
dev->flags is modified, because some flags
(e.g., IFF_PROMISC) may also require you to
reprogram the hardware filter. The method receives a pointer to
struct net_device as an argument and returns
void. A driver not interested in implementing this
method can leave the field set to NULL.
dev->mc_list
This is a linked list of all the multicast addresses associated with
the device. The actual definition of the structure is introduced at
the end of this section.
dev->mc_count
The number of items in the linked list. This information is somewhat
redundant, but checking mc_count against 0 is a
useful shortcut for checking the list.
IFF_MULTICAST
Unless the driver sets this flag in dev->flags,
the interface won't be asked to handle multicast packets. The
set_multicast_list method will nonetheless be
called when dev->flags changes, because the
multicast list may have changed while the interface was not active.
IFF_ALLMULTI
This flag is set in dev->flags by the networking
software to tell the driver to retrieve all multicast packets from the
network. This happens when multicast routing is enabled. If the flag
is set, dev->mc_list shouldn't be used to filter
multicast packets.
IFF_PROMISC
This flag is set in dev->flags when the
interface is put into promiscuous mode. Every packet should be
received by the interface, independent of
dev->mc_list.
The last bit of information needed by the driver developer is the
definition of struct dev_mc_list, which lives in
<linux/netdevice.h>.
struct dev_mc_list {
struct dev_mc_list *next; /* Next address in list */
__u8 dmi_addr[MAX_ADDR_LEN]; /* Hardware address */
unsigned char dmi_addrlen; /* Address length */
int dmi_users; /* Number of users */
int dmi_gusers; /* Number of groups */
};
Because multicasting and hardware addresses are independent of the
actual transmission of packets, this structure is portable across
network implementations, and each address is identified by a string of
octets and a length, just like dev->dev_addr.
The best way to describe the design of
set_multicast_list is to show you some
pseudocode.
The following function is a typical implementation of the function in
a full-featured (ff) driver. The driver is
full featured in that the interface it controls has a complex hardware
packet filter, which can hold a table of multicast addresses to be
received by this host. The maximum size of the table is
FF_TABLE_SIZE.
All the functions prefixed with ff_ are
placeholders for hardware-specific operations.
void ff_set_multicast_list(struct net_device *dev)
{
struct dev_mc_list *mcptr;
if (dev->flags & IFF_PROMISC) {
ff_get_all_packets();
return;
}
/* If there's more addresses than we handle, get all multicast
packets and sort them out in software. */
if (dev->flags & IFF_ALLMULTI || dev->mc_count > FF_TABLE_SIZE) {
ff_get_all_multicast_packets();
return;
}
/* No multicast? Just get our own stuff */
if (dev->mc_count == 0) {
ff_get_only_own_packets();
return;
}
/* Store all of the multicast addresses in the hardware filter */
ff_clear_mc_list();
for (mc_ptr = dev->mc_list; mc_ptr; mc_ptr = mc_ptr->next)
ff_store_mc_address(mc_ptr->dmi_addr);
ff_get_packets_in_multicast_list();
}
This implementation can be simplified if the interface cannot store a
multicast table in the hardware filter for incoming packets. In that
case, FF_TABLE_SIZE reduces to 0 and the last four
lines of code are not needed.
As was mentioned earlier, even interfaces that can't deal with
multicast packets need to implement the
set_multicast_list method to be notified about
changes in dev->flags. This approach could be
called a "nonfeatured" (nf) implementation. The
implementation is very simple, as shown by the following code:
void nf_set_multicast_list(struct net_device *dev)
{
if (dev->flags & IFF_PROMISC)
nf_get_all_packets();
else
nf_get_only_own_packets();
}
Implementing IFF_PROMISC is important, because
otherwise the
user won't be able to run tcpdump or any
other network analyzers. If the interface runs a point-to-point link,
on the other hand, there's no need to implement
set_multicast_list at all, because users
receive every packet anyway.
Version 2.3.43 of the kernel saw a major rework of the networking
subsystem. The new "softnet" implementation was a great improvement
in terms of performance and clean design. It also, of course, brought
changes to the network driver interface -- though fewer than one
might have expected.
First of all, Linux 2.3.14 renamed the network device structure, which
had always been struct device, to struct
net_device. The new name is certainly more appropriate,
since the structure was never meant to describe devices in general.
Prior to version 2.3.43, the functions
netif_start_queue,
netif_stop_queue, and
netif_wake_queue did not exist. Packet
transmission was, instead, controlled by three fields in the
device structure, and sysdep.himplements the three functions using the three fields when compiling
for 2.2 or 2.0.
This variable indicated that the interface was ready for operations;
it was normally set to 1 in the driver's openmethod. The current implementation is to call
netif_start_queue instead.
interrupt was used to indicate that the device was
servicing an interrupt -- accordingly, it was set to 1 at the
beginning of the interrupt handler and to 0 before returning. It was
never a substitute for proper locking, and its use has been replaced
with internal spinlocks.
When nonzero, this variable indicated that the device could handle no
more outgoing packets. Where a 2.4 driver will call
netif_stop_queue, older drivers would set
tbusy to 1. Restarting the queue required setting
tbusy back to 0 and calling
mark_bh(NET_BH).
Normally, setting tbusy was sufficient to ensure
that the driver's hard_start_xmit method would
not be called. However, if the networking system decided that a
transmitter lockup must have occurred, it would call that method
anyway. There was no tx_timeout method before
softnet was integrated. Thus, pre-softnet drivers had to explicitly
check for a call to hard_start_xmit when
tbusy was set and react accordingly.
The type of the name field in struct
device was different. The 2.2 version was simply
char *name;
Thus, the storage for the interface name had to be allocated
separately, and name assigned to point to that
storage. Usually the device name was stored in a static variable
within the driver. The %d notation for dynamically
assigned interface names was not present in 2.2; instead, if the name
began with a null byte or a space character, the kernel would allocate
the next eth name. The 2.4 kernel still implements
this behavior, but its use is deprecated. Starting with 2.5, only the
%d format is likely to be recognized.
The owner field (and the
SET_MODULE_OWNER macro) were added in kernel
2.4.0-test11, just before the official stable release. Previously,
network driver modules had to maintain their own use counts.
sysdep.h defines an empty
SET_MODULE_OWNER for kernels that do not have it;
portable code should also continue to manage its use count manually
(in addition to letting the networking system do it).
The link state functions (netif_carrier_on and
netif_carrier_off) did not exist in the 2.2
kernel. The kernel simply did without that information in those days.
The 2.1 development series also saw its share of changes to the
network driver interface. Most took the form of small changes to
function prototypes, rather than sweeping changes to the network code
as a whole.
Interface statistics were kept in a structure called struct
1enet_statistics, defined in
<linux/if_ether.h>. Even non-Ethernet
drivers used this structure. The field names were all the same as the
current struct net_device_stats, but the
rx_bytes and tx_bytes fields
were not present.
The 2.0 kernel handled transmitter lockups in the same way as 2.2
did. There was, however, an additional function:
void dev_tint(struct device *dev);
This function would be called by the driver after a lockup had been
cleared to restart the transmission of packets.
A couple of functions had different prototypes.
dev_kfree_skb had a second, integer argument that
was either FREE_READ for incoming packets (i.e.,
skbs allocated by the driver) or
FREE_WRITE for outgoing packets
(skbs allocated by the networking code). Almost all
calls to dev_kfree_skb in network driver code
used FREE_WRITE. The nonchecking versions of the
skb functions (such as
__skb_push) did not exist;
sysdep.h in the sample code provides emulation
for these functions under 2.0.
The rebuild_header method had a different set of
arguments:
int (*rebuild_header) (void *eth, struct device *dev,
unsigned long raddr, struct sk_buff *skb);
The Linux kernel also made heavier use of
rebuild_header; it did most of the work that
hard_header does now. When
snull is compiled under Linux 2.0, it
builds hardware headers as follows:
int snull_rebuild_header(void *buff, struct net_device *dev, unsigned long dst,
struct sk_buff *skb)
{
struct ethhdr *eth = (struct ethhdr *)buff;
memcpy(eth->h_source, dev->dev_addr, dev->addr_len);
memcpy(eth->h_dest, dev->dev_addr, dev->addr_len);
eth->h_dest[ETH_ALEN-1] ^= 0x01; /* dest is us xor 1 */
return 0;
}
The device methods for header caching were also significantly
different in this kernel. If your driver needs to implement these
functions directly (very few do), and it also needs to work with the
2.0 kernel, see the definitions in
<linux/netdevice.h> to see how things were
done in those days.
If you look at the source for almost any network driver in the kernel,
you will find some boilerplate that looks like this:
#ifdef HAVE_DEVLIST
/*
* Support for an alternate probe manager,
* which will eliminate the boilerplate below.
*/
struct netdev_entry netcard_drv =
{cardname, netcard_probe1, NETCARD_IO_EXTENT, netcard_portlist};
#else
/* Regular probe routine defined here */
Interestingly, this code has been around since the 1.1 development
series, but we are still waiting for the promised alternate probe
manager. It is probably safe to not worry about being prepared for
this great change, especially since ideas for how to implement it will
likely have changed in the intervening years.
This section provides a reference for the concepts introduced in this
chapter. It also explains the role of each header file that a driver
needs to include. The lists of fields in the
net_device and sk_buff
structures, however, are not repeated here.
This header hosts the definitions of struct
net_device and struct net_device_stats,
and includes a few other headers that are needed by network drivers.
Register and unregister a network device.
This macro will store a pointer to the current module in the device
structure (or in any structure with an owner field,
actually); it is used to enable the networking subsystem to manage the
module's use count..
This function can be called (including at interrupt time) to notify
the kernel that a packet has been received and encapsulated into a
socket buffer.
Included by netdevice.h, this file declares the
interface flags (IFF_ macros) and struct
ifmap, which has a major role in the
ioctl implementation for network drivers.
The first two functions may be used to tell the kernel whether a
carrier signal is currently present on the given interface.
netif_carrier_ok will test the carrier state as
reflected in the device structure.
Included by netdevice.h,
if_ether.h defines all the
ETH_ macros used to represent octet lengths (such
as the address length) and network protocols (such as IP). It also
defines the ethhdr structure.
The definition of struct sk_buff and related
structures, as well as several inline functions to act on the
buffers. This header is included by netdevice.h.
These functions handle the allocation and freeing of socket buffers.
Drivers should normally use the dev_ variants,
which are intended for that purpose.
These functions add data to an skb;
skb_put puts the data at the end of the
skb, while skb_push puts it at
the beginning. The regular versions perform checking to ensure that
adequate space is available; double-underscore versions leave those
tests out..
skb_pull will "remove" data from an
skb by adjusting the internal pointers.
This function sets most device methods to the general-purpose
implementation for Ethernet drivers. It also sets
dev->flags and assigns the next available
ethx name to dev->name if the
first character in the name is a blank space or the null character.
When an Ethernet interface receives a packet, this function can be
called to set skb->pkt_type. The return value is
a protocol number that is usually stored in
skb->protocol.
This is the first of 16 ioctl commands that can
be implemented by each driver for its own private use. All the network
ioctl commands are defined in
sockios.h.
Back to: Table of Contents
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Kernel/LDD2/ch14.lwn | crawl-002 | refinedweb | 13,404 | 53.41 |
Insu — Instagram Bot.
Making of INSU Part 2
In the first part, we had conversed about the whole principle sort of stuff. Now we have to add a loop and other things like user input, etc. (Want to read first then click here.)
At the bottom of this page, you will get the full code and a GitHub link to this project also.
Let’s Begin with user input.
import time
import pyautogui as pg
from pyfiglet import Figlet
f = Figlet(font='doom')
print(f.renderText('InSu'))
print("------------------------------Namaste------------------------------------")
print("----------------(Insu(v0.5) Instagram Bot by likenull)-------------------")
The above code is just for embellishment.
count = 0
numlikes = int(input("How many post to like: "))
print(str(numlikes) + " posts to like")
print()
hastag = str(input("Hashtag to use:"))
print("Using hashtag: " + str(hastag))
print()
print("Launching(slow mode)...")
Here, we have taken two inputs 1st is “numlikes” in which users have to provide how many posts does user wants to like.
And the second one is “hastag” in which users have to write which hashtag the user want to go with.
Then there is some printing stuff that will let the user know their inputs.
And we have also introduced here “count” which we will use in the loop function.
Now the below code is the code we had created in the previous post which insu will use to browse Instagram to like posts.
time.sleep(2)
pg.click(236, 98)
pg.write(hastag)
print()
pg.PAUSE = 3.25
pg.press('down')
pg.PAUSE = 2.0
pg.press('enter')
pg.press('enter')
pg.PAUSE = 3.6
pg.press('pagedown')
#Step 2 post click
pg.PAUSE = 2.0
pg.click(98, 518)
pg.PAUSE = 3.0
#alinging pointer to the middle Point(x=339, y=400)
print("Liking recent Post by tag.....")
print()
pg.moveTo(332, 400)
pg.Pause = 2.0
pg.doubleClick(332, 400)
print("First post liked")
print()
As you can see above code contains sleep which we have employed to let the page load first if you got a more lagging internet speed then you can increase them or if you have a fast internet speed then you can reduce them. The overhead code will take us to the first “recent post” to like using the hashtag user have fed, and will also like the first post and then it will print “First post liked”.
Now let’s utilise the while loop to like the number of posts described by the user.
print("************************************************************************")
pg.Pause = 3.5
pg.press('right')
count = 1
while (count < numlikes):
count = count + 1
pg.doubleClick(332, 400)
pg.Pause = 4.0
pg.press('right')
print("\rPosts liked: "+str(count), end='')
pg.Pause = 0.1else:
print("********************************************************************")
print()
print(str(numlikes) +" Post liked. By tag" + str(hastag))#print(screensize)
#print(pointer)
Here we have used “pg.press(‘right’)” to press right on the computer(keyboard output) so that the next post will appear to like. and we are using pause also to provide some time to Instagram to load the next post. And “count” is updated before liking every single post so the this will loop will run until “count” is smaller than numlikes.
print("\rPosts liked: "+str(count), end='')
The above code doesn’t work everywhere, this code changes the previous data of the number of liked posts. but this doesn’t work in idle and in IDE’s this only work when you use “python.exe” I mean you have to use python in cmd to make it work. But this doesn’t influence the function it just makes it look wired but if you run this in Cmd then you can see this printing method works fine.
If you don’t want to use this print method then you can just use “print(“Post liked =” + str(count))”
this will give no error but will print the number of posts liked in a new line whenever a post is liked.
Now that’s all for the coding part if you are facing errors like insu scrolling page at the wrong moment, and if your code is double-clicking the wrong time. Then you can modify the sleep value to resolve them.
And I prefer you to first go to Instagram and clear searches and then search for the hashtag you want to like then let it load. And after this go back to the Instagram home page and then start Insu. This will help you because your browser would have saved the cache of that page and will load faster.
Conclusion: Insu has both pros and cons. For instance:-
Cons:
1.A little bit complex to code.
2.Differs screen to screen.
3.Less variety of functions.
Pros:
1.Easy to use.
2.Instagram/Browser wouldn’t that bot was used. (because we have provided mouse and keyboard outputs to browser the website like a human.)
3.No uncertainty with Instagram updates.
4.No use of complex APIs and extensions which may fall apart in the future.
5.And was fun to code.
Thank You, guys.
And if you are facing any problem or want my help then you can mail me at contactlikenull@gmail.com or you can use Github and Instagram to contact me.
Here is the full code and GitHub project link.
Originally published at by codelikenull. | https://likenull.medium.com/insu-instagram-bot-1c14c27555dd?source=read_next_recirc---------0---------------------0058ccf9_d49a_4b6d_9599_02523ff88b4d---------- | CC-MAIN-2022-21 | refinedweb | 886 | 83.15 |
How to use the OCC source codes & libraries with VC++ 6.0
Hi all,
I'm new to both VC++ and OpenCascade. I would like to know how the OCC source files & libraries can be used in VC++ 6.0.
For starters, I want to convert an IGES file to a BRep shape - the OCC files for this are "IGESToBrep_Reader.cxx" & "IGESToBrep_Reader.hxx". My question is how to run these files in VC++? What kind of a project do I need to create in VC++ (wind32, console, mfc or dll)? Also each of these files seems to have many more #include headers in their code, and each of these other headers have more headers and so on. So will I need to add all these .hxx files to the source files folder in my project?
I really hope someone helps me out as I'm running out of time for this project !
Thanks a bunch !!
OpenCASCADE builds as a set of dynamic link libraries so the first step is to build the DLL's. The VC6 workspace files are located in OpenCASCADE/ros/adm/win32. Load each one and build both the Release and Debug configurations using the "All" project. I suggest building in the following order:
+ FoundationClasses
+ ModelingData
+ ModelingAlgorithms
+ Visualization
+ ApplicationFramework
+ DataExchange
+ Draw
+ WOK
Once the DLL's are built you can hook them into your project by including the associated .lib files for whichever parts of OCC you need.
You should probably build some of the sample applications (particularly the ones that most closely match what you need) first so you can get an idea how to use OCC.
Hope this helps. Good luck,
Chris
Hey Chris,
Thanks a lot for your quick reply. I'll try out what you described and get back to you again if I need more help. I appreciate your help !
-Karthik
Hi Chris,
I build the OCC workspace files in the same order that you suggested. Like I said earlier, I want to convert an IGES file to an OCC model/Brep file. So, I created a new project in VC++. I created a new c++ source file and copied the example code given in the IGES manual (page 26) for translating from IGES to an OCC model into it. The code reads:
.
}
I got 19 linking errors (error LNK2001).
I also tried another approach - created a new VC++ project and added all the source files and header files in the "OpenCascade\win32\ros\src\IGESToBRep" folder. But I still keep getting LNK2001 errors.
Could you please tell me what could be wrong.
Thanks !
-Karthik
You need to add the OCC library info to your VC++ project. In the "Project Settings" dialog select the "Link" tab. Choose "General" category and add the required .lib files to the Object/library modules list. Next choose "Input" category and add $(CASROOT)\win32\lib in the Additional library path field.
The trick is figuring out what .lib files are needed. I used a freeware utility called DLL Export Viewer () to get a list of all exported functions in the OCC DLL's. Using that list and the linker error information you should be able to determine what libraries are required for your project.
Cheers,
Chris
Thanks for your help so far Chris.
Here's what I have done over the past few days:
I created a new "Win32 Console Application" project in VC++6.0. I added all the .cxx files from the folder 'C:\Opencascade\win32\ros\src\IGESToBRep' to my project. I added all the OCC libraries to my project. I also added a new source file containing just the 'int main() function (since it's a console application).
When I execute the project after building, all I get is a black console window which says "Press any key to continue". Now, like I mentioned earlier, I want to convert an IGES file to a BREP file and then get dimensions from the BREP file. How do I know what IGES file is being loaded into memory and being converted to a BREP file? When I execute the project, it doesn't ask me which IGES file I want to convert to BREP.
Thanks a bunch once again !
-Karthik
Just after I posted my message, I tried something different. I created a new console project and pasted the following code in my source file:
#include "IGESControl_Reader.hxx"
#include "TColStd_HSequenceOfTransient.hxx"
#include "TopoDS_Shape.hxx"
int main()
{
IGESControl_Reader myIgesReader;
Standard_Integer nIgesFaces,nTransFaces;
myIgesReader.ReadFile ("C:\Documents and Settings\Karthik\Desktop\iges_sample.
return (0);
}
The project builds and executes fine. But, it says - "File not found: C:\Documents and Settings\Karthik\Desktop\iges_sample.igs"
Then I tried giving just the file name (iges_sample.igs) instead of the whole path in the ReadFile function, but it still doesn't find the file.
Do you know where I could change the default directory that the program searches?
Thanks again !
-Karthik
First of all, get your C/C++ basics right! The method of specifying the file paths in C++ is different than in Windows. Just change the '\' to '//' and the file should open fine. Next, to display the read and translated file, you need to set an AIS context for it and use the associated Display method. I personally use the OCAF wizard to start my project rather than a Console Win32 Application. If you will use the wizard, it gives an option of including a Trihedron. Including it will help you trace what needs to be done to display the object (if you are going to work with the wizard now)! | https://www.opencascade.com/comment/11971 | CC-MAIN-2020-16 | refinedweb | 932 | 75.2 |
Answered by:
SPCalendarView
Good afternoon everybody,
I have to customize a SP Calendar. I need to create a new item, edit and delete as well. Also, Later, I will work with other customization. To fix thsi problem,I would like to use the SPCalendarView control. But I do not kwnow if that control will help me to get what I need.
I am working with SharePoint 2010 and Visual Studio 2010. I never worked with that control. Since I am working in a local PC which does not have installed SharePoint server, I can not see SPCalendarControl in the toolbox. I wonder if I need to include any dll to my project. I already include Microsoft.SharePoint.dll.
Please give me a clue how to start working with my problem.
Thanks for any advise or help.
Jhonny M.
Jhonny Marcelo
- Moved by Mike Walsh FIN Saturday, August 06, 2011 6:50 AM "I am working with SharePoint 2010 and Visual Studio 2010." so it's not a Pre-SP 2010; SPD 2007 question is it? (From:SharePoint - Design and Customization (pre-SharePoint 2010))
Question
Answers
Hi,
Sharepoint control are included in the namespace of "using Microsoft.SharePoint.WebControls" which in the Microsoft.SharePoint.dll. If you added the Microsoft.SharePoint.dll successfully. One method is that you can create the SPCalendarView in code behind in you sharepoint project
like:
Using namespace:
using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls;<br/>
protected override void CreateChildControls() { base.CreateChildControls(); AddCalendar(); } private void AddCalendar() { SPCalendarView calView = new SPCalendarView(); calView.DataSource = GetCalendarItems; calView.DataBind(); Controls.Add(calView); } private SPCalendarItemCollection GetCalendarItems() { SPCalendarItemCollection items = new SPCalendarItemCollection(); SPCalendarItem item = new SPCalendarItem(); item.StartDate = DateTime.Now; item.EndDate = DateTime.Now.AddHours(1); item.hasEndDate = true; item.Title = "First calendar item"; item.DisplayFormUrl = "/myurl"; item.Description = "This is a testing item"; item.IsAllDayEvent = false; item.IsRecurrence = false; items.Add(item); return items; }
Another approach is to add the SPCalendarView Control in toolbox so that we can drag and drop it like other asp.net controls. But first you need to confirm that the Microsoft.SharePoint.dll has been installed in GAC. The path is "C:\WINDOWS\assembly".
If the dll doesn't exists. You can install it by using gacutil tool in VS command prompt:
gacutil.exe -if "<yourfolder>/Microsoft.SharePoint.dll".
After that, you can right click the toolbox in VS 2010->Choose Items, you can find the SPCalendarView control in .NET Framwork Components tag. Then click "OK", the control will be added in the toolbox.
Hope this can help.
- Proposed as answer by SharepointDummy Tuesday, August 09, 2011 1:40 PM
- Marked as answer by Shimin Huang Monday, August 15, 2011 9:07 AM | https://social.msdn.microsoft.com/Forums/office/en-US/ea37b8f2-cae9-4850-a6c2-47f6c1768148/spcalendarview?forum=sharepointdevelopmentprevious | CC-MAIN-2016-40 | refinedweb | 445 | 53.27 |
:14 PM
I find myself fighting with the device-tree. What is the fastest way to generate the device tree?
I've tried:
petalinux-build -b device-tree
This takes about a minute on my computer. I've also tried:
petalinux-build -c device-tree -x build
But that seems to take about the same amount of time.
I found this post on Stackoverflow:
The dtc aproach doesn't work, it appears I need to pre-process the device tree files into a single file (using GCC???).
01-09-2019 05:41 PM
Not sure if this will help you, but... if the .dts and .dtsi files located within the kernel source tree, then you can use the kernel's main Makefile to rebuild the .dtb. For example for arch/arm/boot/dts/zynq-zc702.dts you would run:
make ARCH=arm zynq-zc702.dtb
The above is done in the source code of the kernel (or wherever it was built). It will run gcc to combine multiple files, and then runs dtc to generate the .dtb. Takes less than 1 second on my machine.
01-10-2019 07:10 AM
Thanks for the reply. That doesn't work with Petalinux unfortunately.
01-10-2019 07:32 AM
you can create it manually. See the WIKI below:
01-10-2019 07:50 AM
I'm not sure I follow. That page indicates you can use the DTC to compile your DTS to a DTB file:
dtc -I dts -O dtb -o my_dts/system-top.dtb my_dts/system-top.dts
The problem is the DTS/DTSI files that Petalinux generates has non-DTC compliant text in it. For example,t he ZCU104 system-top-dts file looks like this:
/* * CAUTION: This file is automatically generated by Xilinx. * Version: * Today is: Thu Jan 10 14:31:11 2019 */ /dts-v1/; #include "zynqmp.dtsi" #include "zynqmp-clk-ccf.dtsi" #include "zcu104-revc.dtsi" #include "pl.dtsi" #include "pcw.dtsi" / { chosen { bootargs = "earlycon clk_ignore_unused"; stdout-path = "serial0:115200n8"; }; aliases { ethernet0 = &gem3; i2c0 = &i2c1; i2c1 = &sensor_iic; serial0 = &uart0; serial1 = &uart1; spi0 = &qspi; }; memory { device_type = "memory"; reg = <0x0 0x0 0x0 0x7ff00000>; }; }; #include "system-user.dtsi"
Those #includes are not valid syntax. Some program needs to pre-process those includes to pull them in (like a C compiler pre-processor output). Further, my system-user.dtsi file contains includes from C header files:
#include <dt-bindings/media/xilinx-vip.h> ... xlnx,video-format = <XVIP_VF_MONO_SENSOR>;
This needs to be converted to a number and not a symbolic value. When I run the DTC on the top level DTS file I get the following errors:
nlbutts@ubuntu16:~/projects/maza_linux$ ./build/tmp/work/zcu104_zynqmp-xilinx-linux/device-tree/xilinx+gitAUTOINC+b7466bbeee-r0/recipe-sysroot-native/usr/bin/dtc -I dts -O dtb -o system.dtb -i./components/plnx_workspace/device-tree/device-tree ./components/plnx_workspace/device-tree/device-tree/system-top.dts Error: ./components/plnx_workspace/device-tree/device-tree/system-top.dts:9.1-9 syntax error FATAL ERROR: Unable to parse input tree n
I tried to use GCC -E, but the DTC syntax uses #, which causes problems.
01-10-2019 08:55 AM - edited 01-10-2019 09:01 AM
You're on the right track... and yes, you can call the "dtc" command directly. But as you also noted, the #includes need to be pre-processed. This is done using the C pre-processor (gcc -E is a way of running just the pre-processor).
Figuring out the exact right options to pass (include paths, etc) can be a bit fiddly. That's why I mentioned using the kernel's existing Makefile, which allows you to just do "make NAME.dtb" and it will do the gcc -E followed by dtc commands for you.
Doing it manually is certainly possible. If the system-top.dts is in the same place as the #includes it relies on, then you can just do the following:
cd my_dts gcc -E -nostdinc -undef -D__DTS__ -x assembler-with-cpp -o system-top.dts.tmp system-top.dts dtc -I dts -O dtb -o system-top.dtb system-top.dts.tmp
If the #includes are located in other places, you will need to add one or more -I flags to the gcc command. Each one specifies an additional directory to search for includes.
01-10-2019 09:16 AM
@rfs613 nice. Ill update the wiki to include this. However, in the OSL flow, there is no sytem-user.dtsi, but still useful info.
So, for reference:
cd <plnx_proj_dir>\components\plnx_workspace\device-tree\device-tree
gcc -I ../../../../project-spec/meta-user/recipes-bsp/device-tree/files -E -nostdinc -undef -D__DTS__ -x assembler-with-cpp -o system-top.dts.tmp system-top.dts
dtc -I dts -O dtb system-top.dtb system-top.dts.tmp
You can then verify, by converting back:
dtc -I dtb -O dts dump.dts system-top.dts.tmp
In my case, the only thing in the system-user.dtsi was:
&gem3 {
local-mac-address = [00 0a 35 00 22 02];
};
So, check if this was applied to the gem node:
ethernet@ff0e0000 {
compatible = "cdns,zynqmp-gem\0cdns,gem";
status = "okay";
interrupt-parent = <0x04>;
interrupts = <0x00 0x3f 0x04 0x00 0x3f 0x04>;
reg = <0x00 0xff0e0000 0x00 0x1000>;
clock-names = "pclk\0hclk\0tx_clk\0rx_clk\0tsu_clk";
#address-cells = <0x01>;
#size-cells = <0x00>;
#stream-id-cells = <0x01>;
iommus = <0x09 0x877>;
power-domains = <0x11>;
clocks = <0x03 0x1f 0x03 0x34 0x03 0x30 0x03 0x34 0x03 0x2c>;
phy-handle = <0x12>;
pinctrl-names = "default";
pinctrl-0 = <0x13>;
phy-mode = "rgmii-id";
xlnx,ptp-enet-clock = <0x00>;
local-mac-address = [00 0a 35 00 22 02];
phy@c {
reg = <0x0c>;
ti,rx-internal-delay = <0x08>;
ti,tx-internal-delay = <0x0a>;
ti,fifo-depth = <0x01>;
ti,rxctrl-strap-worka;
phandle = <0x12>;
};
};
01-18-2019 09:24 AM
Hi
From Linux kernel directory
"make arch=ARM CROSS_COMPILE=arm-xilinx-linux-gnueabi- dtbc"
this provides .dtb files for corrspending platform boards , which is intiated initailly with defconfig
Ex. For xilinx boards, zc702.dtb, zedboard.dtb etc.. will be created.
you can take whatever the board you want,
because here .dtsi represents SoC , .dts represents board file
when we say make ARCH=arm dtbc, it will create for all boards .dts files
Provide kudos if post is helpful
Thanks & Regards
Satish G | https://forums.xilinx.com/t5/Embedded-Linux/Fastest-way-to-generate-device-tree/m-p/928209 | CC-MAIN-2019-43 | refinedweb | 1,047 | 58.28 |
PMAP(9) BSD Kernel Manual PMAP(9)
pmap - machine dependent interface to the MMU
#include <machine/pmap.h>
The architecture-dependent pmap module describes how the physical mapping is done between the user-processes and kernel virtual addresses and the physical addresses of the main memory, providing machine-dependent trans- lation and access tables that are used directly or indirectly by the memory-management hardware. The pmap layer can be viewed as a big array of mapping entries that are indexed by virtual address to produce a phy- sical address and flags. These flags describe the page's protection, whether the page has been referenced or modified and other characteris- tics. The pmap interface is consistent across all platforms and hides the way page mappings are stored.
void pmap_init(void); The pmap_init() function is called from the machine-independent uvm(9) initialization code, when the MMU is enabled.
Modified/referenced information is only tracked for pages managed by uvm(9) (pages for which a vm_page structure exists). Only managed map- pings of those pages have modified/referenced tracking. The use of un- managed mappings should be limited to code which may execute in interrupt context (such as malloc(9)) or to enter mappings for physical addresses which are not managed by uvm(9). This allows pmap modules to avoid block- ing interrupts when manipulating data structures or holding locks. Un- managed mappings may only be entered into the kernel's virtual address space. The modified/referenced bits must be tracked on a per-page basis, as they are not attributes of a mapping, but attributes of a page. There- fore, even after all mappings for a given page have been removed, the modified/referenced bits for that page must be preserved. The only time the modified/referenced bits may be cleared is when uvm(9) explicitly calls the pmap_clear_modify() and pmap_clear_reference() functions. These functions must also change any internal state necessary to detect the page being modified or referenced again after the modified/referenced state is cleared. Mappings entered by pmap_enter() are managed, mappings entered by pmap_kenter_pa() are not.
int pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, int flags); void pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot); void pmap_remove(pmap_t pmap, vaddr_t sva, paddr_t eva); void pmap_kremove(vaddr_t va, vsize_t size); The pmap_enter() function creates a managed mapping for physical page pa at the specified virtual address va in the target physical map pmap with protection specified by prot: VM_PROT_READ The mapping must allow reading. VM_PROT_WRITE The mapping must allow writing. VM_PROT_EXECUTE The page mapped contains instructions that will be exe- cuted by the processor. The flags argument contains protection bits (the same bits used in the prot argument) indicating the type of access that caused the mapping to be created. This information may be used to seed modified/referenced in- formation for the page being mapped, possibly avoiding redundant faults on platforms that track modified/referenced information in software. Oth- er module must panic. The access type provided in the flags argument will never exceed the pro- tection specified by prot. The pmap_enter() function is called by the fault routine to establish a mapping for the page being faulted in. If pmap_enter() is called to enter a mapping at a virtual address for which a mapping already exists, the previous mapping must be invalidated. pmap_enter() is sometimes called to change the protection for a pre-existing mapping, or to change the "wired" attribute for a pre-existing mapping. The pmap_kenter_pa() function creates an unmanaged mapping of physical address pa at the specified virtual address va with the protection speci- fied by prot. The pmap_remove() function removes the range of virtual addresses sva to eva from pmap, assuming proper alignment. pmap_remove() is called during an unmap operation to remove low-level machine dependent mappings. The pmap_kremove() function removes an unmanaged mapping at virtual ad- dress size. A call to pmap_update() must be made after pmap_kenter_pa() or pmap_kremove() to notify the pmap layer that the mappings need to be made correct.
void pmap_unwire(pmap_t pmap, vaddr_t va); void pmap_protect(pmap_t pmap, vaddr_t sva, vaddr_t eva, vm_prot_t prot); void pmap_page_protect(struct vm_page *pg, vm_prot_t prot); The pmap_unwire() function clears the wired attribute for a map/virtual- address pair. The mapping must already exist in pmap. The pmap_protect() function sets the physical protection on range sva to eva, in pmap. The pmap_protect() function is called during a copy-on-write operation to write protect copy-on-write memory, and when paging out a page to remove all mappings of a page. The pmap_page_protect() function sets the permis- sion for all mapping to page pg. The pmap_page_protect() function is called before a pageout operation to ensure that all pmap references to a page are removed. PHYSICAL PAGE-USAGE INFORMATION boolean_t pmap_is_modified(struct vm_page *pg); boolean_t pmap_clear_modify(struct vm_page *pg); boolean_t pmap_is_referenced(struct vm_page *pg); boolean_t pmap_clear_reference(struct vm_page *pg); The pmap_is_modified() and pmap_clear_modify() functions read/set the modify bits on the specified physical page pg. The pmap_is_referenced() and pmap_clear_reference() functions read/set the reference bits on the specified physical page pg. The pmap_is_referenced() and pmap_is_modified() functions are called by the pagedaemon when looking for pages to free. The pmap_clear_referenced() and pmap_clear_modify() functions are called by the pagedaemon to help identification of pages that are no longer in demand.
void pmap_copy_page(struct vm_page *src, struct vm_page *dst); void pmap_zero_page(struct vm_page *page); The pmap_copy_page() function copies the content of the physical page src to physical page dst. The pmap_zero_page() function fills page with zeroes.
pmap_t pmap_create(void); void pmap_reference(pmap_t pmap); void pmap_destroy(pmap_t pmap); The pmap_create() function creates an instance of the pmap structure. The pmap_reference() function increments the reference count on pmap. The pmap_destroy() function decrements the reference count on physical map pmap and retires it from service if the count drops to zero, assuming it contains no valid mappings.
void pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp); vaddr_t pmap_growkernel(vaddr_t maxkvaddr); void pmap_update(pmap_t pmap); void pmap_collect(pmap_t pmap); void pmap_virtual_space(vaddr_t *vstartp, vaddr_t *vendp); void pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vaddr_t dst_addr, vsize_t len, vaddr_t src_addr); Wired memory allocation before the virtual memory system is bootstrapped is accomplished by the pmap_steal_memory() function. After that point, the kernel memory allocation routines should be used. The pmap_growkernel() function can preallocate kernel page tables to a specified virtual address. The pmap_update() function notifies the pmap module to force processing of all delayed actions for all pmaps. The pmap_collect() function informs the pmap module that the given pmap is not expected to be used for some time, giving the pmap module a chance to prioritize. The initial bounds of the kernel virtual address space are returned by pmap_virtual_space(). The pmap_copy() function copies the range specified by src_addr and src_len from src_pmap to the range described by dst_addr and dst_len in dst_map. pmap_copy() is called during a fork(2) operation to give the child process an initial set of low-level mappings.
fork(2), uvm(9)
The 4.4BSD pmap module is based on Mach 3.0. The introduction of uvm(9) left the pmap interface unchanged for the most part.
Ifdefs must be documented. pmap_update() should be mandatory. MirOS BSD #10-current September 21, 2001. | http://www.mirbsd.org/htman/i386/man9/pmap_steal_memory.htm | CC-MAIN-2014-10 | refinedweb | 1,202 | 52.29 |
How to: Create a Web Client Solution
The Web Client Solution Templates unfold a Visual Studio solution you can use as a starting point for your Web client application. The solution includes recommended practices and techniques; it is the basis for the procedures and automated guidance included in the Web Client Software Factory.
Visual Studio 2010 supports two types of Web project models: the Web Site Project model and the Web Application project model. The software factory includes solution templates for both Web project modules.
This topic describes how to unfold a Web Client Solution template.
Prerequisites
The Web Client Solution template requires the following (included with the Web Client Software Factory):
- The Composite Web Application Block
- Enterprise Library 5.0
Steps
The following procedure describes how to unfold the Web Client Solution template.
To unfold a Web Client Solution template
- In Visual Studio, point to New on the File menu, and then click Project.
- In the left pane of the New Project dialog box, expand Guidance Packages, and then click Web Client Software Factory 2010, as shown in Figure 1.
Figure 1The Web Client Solution project template
- In the Templates pane, select the template for the type of Web project model and development language that you want.
- (Optional) Change the name of the solution in the Name box and the location of the solution in the Location box.
- Click OK.
The Web Client Solution template references the Create Solution recipe. The Guidance Automation Extensions framework calls the recipe when it unfolds the template. The Create Solution recipe starts a wizard to gather information that it uses to customize the generated source code. Figure 2 illustrates the first page of the wizard.
Figure 2The Create Web Solution recipe wizard
- (Optional) Modify the location of the Composite Web Application Block and Enterprise Library assemblies. The names of the required assemblies appear under Required application block assemblies. The default location is the Microsoft Practices Library folder of the Web Client Software Factory, in case it is installed. (The wizard validates that all required assemblies exist at the specified location; any required assemblies that are not at that location are displayed in red.)
- Enter the root namespace for your application. This value appears as the first part of every namespace in the generated solution.
- Click Finish. The recipe unfolds the Web Client Solution template.
Outcome
You will have a Web client solution you can use as starting point for building and testing your Web client application. The Web client solution includes a Web site with a business module named Shell. This module is associated with the root folder of the Web site. This means that all pages in the root Web site folder belong to the Shell module.
The Shell module registers the following global services:
- SiteMapBuilderService. Business modules use this service to register site map nodes. The Shell module uses this service to register the Home site map node.
- EnterpriseLibraryAuthorizationService. This service uses the Enterprise Library Security Application Block to provide authorization.
Figure 3 illustrates the solution structure of the Web client solution.
Figure 3
The Web Client Solution template generates a functional Web site with a default home page. If you run the application, you will see Figure 4.
Figure 4
Next Steps
The following are typical tasks that you perform after you create a Web client solution:
- Define visual styles. The Web Client Solution template includes an ASP.NET theme named Default. You can update this theme or create your own application-specific theme.
- Create global Web pages. Frequently, a Web site contains global pages that are accessible to all users. For example, a page that displays Help information, a page that users use to log on to the site, or a page that displays error information. You can use the Add Page (with presenter) recipe to create these global views. For information about how to run the recipe, see How to: Add a Page with a Presenter.
- Add business modules. Business modules are units of development and deployment that typically include a combination of related Web pages, page flows, business logic, and services. With modules, you can encapsulate a set of concerns of your application and deploy them together. The following are examples of business modules:
- A module that contains a specific application feature area, such as reports
- A module that contains use cases around a specific back-end system, such as loan processing
To create a business module, run the Add Business Module recipe on a solution folder. For information about how to run the recipe, see How to: Create a Business Module.
- Add foundational modules. Foundational modules encapsulate infrastructure services and do not contain Web pages. An example of a foundational module is a module that contains services for logging and authorization.
To create a foundational module, run the Add Foundational Module recipe on a solution folder. For information about how to execute the recipe, see How to: Create a Foundational Module. | https://msdn.microsoft.com/en-us/library/ff709850.aspx | CC-MAIN-2017-34 | refinedweb | 825 | 55.44 |
You probably don't need Babel
Dan Dascalescu
・1 min read
Starting with version 8.5.0 (released in Sep 2017), Node.js supports ES modules natively, if you pass the
--experimental-modules flag and use the .mjs extension for all the files involved. This means we no longer need a transpiler like Babel!
lib.mjs
export const hello = 'Hello world!';
index.mjs
import { hello } from './lib'; console.log(hello);
Run as:
node --experimental-modules index.mjs
That's it! You've written an ECMAScript module and used it, without Babel or any transpilers.
How to publish native ES modules
To publish an ES module to NPM so that it can be imported directly, without Babel, simply point the main field in your
package.json to the
.mjs file, but omit the extension:
{ "name": "mjs-example", "main": "index" }
That’s the only change. By omitting the extension, Node will look first for an mjs file if run with
--experimental-modules. Otherwise it will fall back to the .js file, so your existing transpilation process to support older Node versions will work as before — just make sure to point Babel to the .mjs file(s).
Here’s the source for a native ES module with backwards compatibility for Node < 8.5.0 that I published to NPM. You can use it right now, without Babel or anything else.
Install the module:
yarn add local-iso-dt # or, npm install local-iso-dt
Create a test file
test.mjs:
import { localISOdt } from 'local-iso-dt'; console.log(localISOdt(), 'Starting job...');
Run node (v8.5.0+) with the --experimental-modules flag:
node --experimental-modules test.mjs
Conclusion
It’s very easy to add native ES module support to your Node.js packages. Just rename your ES6+ files to .mjs and update the main entry in
package.json, omitting the extension. This way your modules can be used directly in Node v8.5.0+ with the
--experimental-modules flag.
While support is experimental right now (Feb 2018), it’s unlikely to change significantly and Node plans to drop the flag requirement with v10.
Keep your transpilation script for backwards compatibility and feel free to fork my example native ES module repo.
Further reading
- Using ES modules natively in Node.js
- Setting up multi-platform npm packages
- StackOverflow question (credits to Alexander O’Mara)
Are you a multi-passionate developer?
When I started on the path towards being a developer, I did not realize how many ...
As of node v12 the information in this article is no longer correct; please see the announcement from node.
That's like the one thing I never used babel for.
That's (?:Webpack|Rollup|Parcel)'s job. | https://dev.to/dandv/why-you-dont-really-need-babel-4k2h | CC-MAIN-2020-10 | refinedweb | 448 | 60.61 |
Opened 3 years ago
Closed 3 years ago
#18415 closed Uncategorized (duplicate)
FormWizard's hash check occasionally fails due to pickle.dumps returning varying values for same inputs
Description
Background: in django.contrib.formtools.utils.security_hash, the data being hashed is normalized and pickled, and an MD5 hash is taken of that data. When the next page of the wizard is submitted, the hash of the re-submitted data is checked to ensure the user did not tamper with the data.
The problem is that the security_hash function will occasionally return a different value for identical inputs. This is due to pickle.dumps (specifically the cpickle version) returning dissimilar serialized versions for the same input. This can be observed with a simple test:
from cPickle import dumps print "equal: {}".format(str(12345) == "12345") print "equal: {}".format(dumps(str(12345)) == dumps("12345"))
This test outputs:
equal: True equal: False
This is not a bug in cpickle, as the pickle documentation explicitly [mentions] that the pickle function will not necessarily return the same output for a given input.
Impact: Users who have not tampered with forms will get shunted back to a previous form page, potentially with no explanation. As a developer, this can be quite tricky to debug, and the solution in my case was to write our own hashing function that doesn't rely on pickle.
Change History (1)
comment:1 Changed 3 years ago by claudep
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to duplicate
- Status changed from new to closed
This is a duplicate of #18340 | https://code.djangoproject.com/ticket/18415 | CC-MAIN-2015-48 | refinedweb | 263 | 52.19 |
Managed are .NET assemblies you create and compile outside of Unity, into a dynamically linked library (DLL) with tools such as Visual Studio.
This is a different process from standard C# scriptsA piece of code that allows you to create your own Components, trigger game events, modify Component properties over time and respond to user input in any way you like. More info
See in Glossary, which Unity stores as source files in the Assets folder in your Unity project. Unity compiles standard C# scripts whenever they change, whereas DLLs are pre-compiled and don’t change. You can add a compiled .dll file to your project and attach the classes it contains to GameObjectsThe fundamental object in Unity scenes, which can represent characters, props, scenery, cameras, waypoints, and more. A GameObject’s functionality is defined by the Components attached to it. More info
See in Glossary in the same way as standard scripts.
For more information about managed code in C#, see Microsoft’s What is managed code? documentation.
Managed plug-ins contain only .NET code, which means they can’t access any features that the .NET libraries don’t support. However, managed code is accessible to the standard .NET tools that Unity uses to compile scripts.
When you work with DLLs in Unity, you must complete more steps than when you work with scripts. However, there are situations where you might find it helpful to create and add a .dll file to your Unity project instead, for example:
This page explains a general method you can use to create managed plug-insA managed .NET assembly that is created with tools like Visual Studio for use in Unity. More info
See in Glossary, as well as how you can create managed plug-ins and set up a debug session using Visual Studio.
To create a managed plug-in, you need to create a DLL. To do this, you need a suitable compiler, such as:
Not all compilers that produce .NET code are compatible with Unity, so you should test the compiler with some available code before doing significant work with it. The method you use to create a DLL depends on if the DLL contains Unity API code:
C:\Program Files\Unity\Hub\Editor\<version-number>\Editor\Data\Managed\UnityEngine
Unity.appfile on your computer. The path to the Unity DLLs on macOS is:
/Applications/Unity/Hub/Editor/<version-number>/Unity.app/Contents/Managed/UnityEngine
Unity.app
UnityEnginefolder contains the .dll files for a number of modules. Reference them to make them available to your script. Some namespaces also require a reference to a compiled library from a Unity project (for example,
UnityEngine.UI). Locate this in the project folder’s directory:
~\Library\ScriptAssemblies
If the DLL does not contain Unity API code, or if you’ve already made the Unity DLLs available, follow your compiler’s documentation to compile a .dll file.The exact options you use to compile the DLL depend on the compiler you use. As an example, the command line for the Roslyn compiler,
csc, might look like this on macOS:
csc /r:/Applications/Unity/Hub/Editor/<version-number>/Unity.app/Contents/Managed/UnityEngine.dll /target:library /out:MyManagedAssembly.dll /recurse:*.cs
In this example:
/roption to specify a path to a library to include in the build, in this case, the
UnityEnginelibrary.
/targetoption to specify the type of build you require; “library” signifies a DLL build.
/outto specify the name of the library, which in this case is “MyManagedAssembly.dll”.
/recursemethod to add all the files ending in “.cs’’ in your current working directory and any subfolders. The resulting .dll file appears in the same folder as the source files.
After you’ve compiled the DLL, you can drag the .dll file into the Unity project like any other asset. You can then:
This section explains:
DLLTestas the name).
MyUtilitiesin the Solution browser.
using System; using UnityEngine; namespace DLLTest { public class MyUtilities { public int c; public void AddValues(int a, int b) { c = a + b; } public static int GenerateRandom(int min, int max) { System.Random rand = new System.Random(); return rand.Next(min, max); } } }
To set up a debugging session for a DLL in Unity:
<project folder>/bin/Debug/DLLTest.dll) into the Assets folder.)); } }
Unity displays the output of the code from the DLL in the Console window
Unsafe C# code is code that is able to access memory directly. It is not enabled by default because the compiler can’t verify that it won’t introduce security risks.
You might want to use unsafe code to:
To enable support for compiling unsafe C# code go to Edit > Project Settings > Player > Other Settings and enable Allow Unsafe Code.
For more information, see Microsoft’s documentation of unsafe code. | https://docs.unity3d.com/2020.3/Documentation/Manual/UsingDLL.html | CC-MAIN-2022-40 | refinedweb | 795 | 57.27 |
This blog runs on home-grown software, so it usually lags in technology behind the usual blog software. I’ve been pinging weblogs.com and blo.gs for a long time, and figured I could skip the other services because I didn’t want to track down how to do each one.
Today I discovered Ping-o-Matic, which is a meta-pinging service: you ping it, and it pings everyone else. Handy. But it has an XML-RPC interface, and that’s one of the things I had never spent the time to learn how to do. Well, it couldn’t be easier. The Python xmlrpclib module makes the whole thing totally transparent:
import xmlrpclib
remoteServer = xmlrpclib.Server("")
ret = remoteServer.weblogUpdates.ping(
"Ned Batchelder's blog",
""
)
print ret['message']
So now I replaced two HTTP get pings with one XML-RPC ping, and I’m reaching more services. Sweet!
Crazy! I searched an XML-RPC and PHP and there are no information. It's not so Handy you say i think! Damn...
Add a comment: | https://nedbatchelder.com/blog/200406/pingomatic_and_xmlrpc.html | CC-MAIN-2018-13 | refinedweb | 176 | 67.76 |
This is a C++ Program for counting the total number of internal nodes present in a given Binary Search Tree.
We will be given a Binary Search Tree and we have to create a C++ program which counts the total number of non-leaf nodes i.e. Internal Nodes present in it using recursion. An internal node is one which has at least one children.
Case 1. Balanced Tree: When the weight on both the sides of the root node is same.
25 / \ 19 29 / \ / \ 17 20 27 55
Output: 3
Case 2. Right Skewed Tree: When the nodes at every level have just a right child.
1 \ 2 \ 3 \ 4 \ 5
Output: 4
Case 3. Tree having just one node
15
Output: 0
We can easily find the number of internal nodes present in any tree using recursion. An internal node is a node whose left or the right child is not NULL. We just need to check this single condition to determine whether the node is a leaf node or a non leaf (internal) node.
Here is source code of the C++ Program to count the total number of internal nodes present in a given Binary Search Tree. The program is successfully compiled and tested using Codeblocks gnu/GCC compiler on windows 10. The program output is also shown below.
/* C++ Program to find the number of internal nodes in a Tree */
#include <iostream>
using namespace std;
struct node
{
int info;
struct node *left, *right;
};
int count = 0;
class BST
{
public:
/*
* Function to create new nodes
*/
struct node *createnode(int key)
{
struct node *newnode = new node;
newnode->info = key;
newnode->left = NULL;
newnode->right = NULL;
return(newnode);
}
int internalnodes(struct node *newnode)
{
if(newnode != NULL)
{
internalnodes(newnode->left);
if((newnode->left != NULL) || (newnode->right != NULL))
{
count++;
}
internalnodes(newnode->right);
}
return count;
}
};
int main()
{
/* Creating first Tree. */
BST t1,t2,t3;
struct node *newnode = t1.createnode(25);
newnode->left = t1.createnode(19);
newnode->right = t1.createnode(29);
newnode->left->left = t1.createnode(17);
newnode->left->right = t1.createnode(20);
newnode->right->left = t1.createnode(27);
newnode->right->right = t1.createnode(55);
/* Sample Tree 1. Balanced Tree
* 25
* / \
* 19 29
* / \ / \
* 17 20 27 55
*/
cout<<"Number of internal nodes in first Tree are "<<t1.internalnodes(newnode);
cout<<endl;
count = 0;
/* Creating second tree */
struct node *node = t2.createnode(1);
node->right = t2.createnode(2);
node->right->right = t2.createnode(3);
node->right->right->right = t2.createnode(4);
node->right->right->right->right = t2.createnode(5);
/* Sample Tree 2. Right Skewed Tree (Unbalanced).
* 1
* \
* 2
* \
* 3
* \
* 4
* \
* 5
*/
cout<<"\nNumber of internal nodes in second tree are "<<t2.internalnodes(node);
cout<<endl;
count = 0;
/* Creating third Tree. */
struct node *root = t3.createnode(15);
/* Sample Tree 3. Tree having just one root node.
* 15
*/
cout<<"\nNumber of internal nodes in third tree are "<<t3.internalnodes(root);
return 0;
}
In this program we have used recursion to find the total number of internal nodes present in a tree.
2. A internal Node is one whose left or the right child is not NULL. We have created a function called internalnodes() which takes in root of the tree as a parameter and returns the total number of internal nodes it has.
3. The basic idea is to traverse the tree using any traversal so as to visit each and every node and check the condition for internal node for each node, that is what we have done in internalnodes() function.
4. In the internalnodes() function we have used the inorder traversal, by first traversing the left subtree, then instead of printing the root->data as a second step of inorder traversal, we have checked the internal node condition and then at last we have traversed the right subtree by passing root->right as a parameter.
Number of internal nodes in first Tree are 3 Number of internal nodes in second tree are 4 Number of internal nodes in third tree are 0
Sanfoundry Global Education & Learning Series – 1000 C++ Programs.
Here’s the list of Best Reference Books in C++ Programming, Data Structures and Algorithms. | https://www.sanfoundry.com/cplusplus-program-count-internal-nodes-binary-search-tree/ | CC-MAIN-2020-29 | refinedweb | 681 | 66.64 |
C Standard Library Functions
C Standard library functions or simply C Library functions are inbuilt functions in C programming.
The prototype and data definitions of the functions are present in their respective header files, and must be included in your program to access them.
For example: If you want to use
printf() function, the header file
<stdio.h> should be included.
#include <stdio.h> int main() { // If you use printf() function without including the <stdio.h> // header
file, this program will show an error. printf("Catch me if you can."); }
There is at least one function in any C program, i.e., the
main() function (which is also a library function). This function is automatically called when your program starts.
Advantages of using C library functions
There are many library functions available in C programming to help you write a good and efficient program. But, why should you use it?
Below are the 4 most important advantages of using standary.
It saves valuable time and your code may not always be the most efficient.
3. The functions are portable
With ever changing real world needs, your application is expected to work every time, everywhere.
And, these library functions help you in that they do the same thing on every computer.
This saves time, effort and makes your program portable.
Use Of Library Function To Find Square root
Suppose, you want to find the square root of a number.
You can always write your own piece of code to find square root but, this process is time consuming and it might not be the most efficient process to find square root.
However, in C programming you can find the square root by just using
sqrt() function which is defined under header file
"math.h"
#include <stdio.h> #include <math.h> int main() { float num, root; printf("Enter a number to find square root."); scanf("%f", &num); // Computes the square root of num and stores in root. root = sqrt(num); printf("Square root of %.2f=%.2f", num, root); return 0; }
List of Standard Library Functions Under Different Header Files in C Programming | https://www.programiz.com/c-programming/library-function | CC-MAIN-2016-50 | refinedweb | 350 | 75.2 |
0,2
The number of perfect matchings in a triangular grid of n squares (n = 1, 4, 9, 16, 25, ...). - Roberto E. Martinez II, Nov 05 2001
a(n) is the number of subdiagonal paths from (0, 0) to (n, n) consisting of steps East (1, 0), North (0, 1) and Northeast (1, 1) (sometimes called royal paths). - David Callan, Mar 14 2004
Twice A001003 (except for the first term).
a(n) is the number of dissections of a regular (n+4)-gon by diagonals that do not touch the base. (A diagonal is a straight line joining two nonconsecutive vertices and dissection means the diagonals are noncrossing though they may share an endpoint. One side of the (n+4)-gon is designated the base.) Example: a(1)=2 because a pentagon has only 2 such dissections: the empty one and the one with a diagonal parallel to the base. - David Callan, Aug 02 2004
From Jonathan Vos Post, Dec 23 2004: (Start)
The only prime in this sequence is 2. The semiprimes (intersection with A001358) are a(2) = 6, a(3) = 22, a(4) = 394, a(9) = 206098 and a(215), and correspond 1-to-1 with prime super-Catalan numbers, also called prime little Schröder numbers (intersection of A001003 and A000040), which are listed as A092840 and indexed as A092839.
The 3-almost prime large Schröder numbers a(7) = 8558, a(11) = 5293446, a(17) = 111818026018, a(19) = 3236724317174, a(21) = 95149655201962 (intersection of A006318 and A014612) correspond 1-to-1 with semiprime super-Catalan numbers, also called semiprime little Schröder numbers (intersection of A001003 and A001358), which are listed as A101619 and indexed as A101618. These relationships all derive from the fact that a(n) = 2*A001003(n).
Eric W. Weisstein comments that the Schröder numbers bear the same relationship to the Delannoy numbers (A001850) as the Catalan numbers (A000108) do to the binomial coefficients. (End)
a(n) is the number of lattice paths from (0, 0) to (n+1, n+1) consisting of unit steps north N = (0, 1) and variable-length steps east E = (k, 0), with k a positive integer, that stay strictly below the line y = x except at the endpoints. For example, a(2) = 6 counts 111NNN, 21NNN, 3NNN, 12NNN, 11N1NN, 2N1NN (east steps indicated by their length). If the word "strictly" is replaced by "weakly", the counting sequence becomes the little Schröder numbers, A001003 (offset). - David Callan, Jun 07 2006
a(n) is the number of dissections of a regular (n+3)-gon with base AB that do not contain a triangle of the form ABP with BP a diagonal. Example: a(1) = 2 because the square D-C | | A-B has only 2 such dissections: the empty one and the one with the single diagonal AC (although this dissection contains the triangle ABC, BC is not a diagonal). - David Callan, Jul 14 2006
a(n) is the number of (colored) Motzkin n-paths with each upstep and each flatstep at ground level getting one of 2 colors and each flatstep not at ground level getting one of 3 colors. Example: With their colors immediately following upsteps/flatsteps, a(2) = 6 counts U1D, U2D, F1F1, F1F2, F2F1, F2F2. - David Callan, Aug 16 2006
a(n) is the number of separable permutations, i.e., permutations avoiding 2413 and 3142 (see Shapiro and Stephens). - Vincent Vatter,
Triangle A144156 has row sums equal to A006318 with left border A001003. - Gary W. Adamson, Sep 12 2008
a(n) is also the number of order-preserving and order-decreasing partial transformations (of an n-chain). Equivalently, it is the order of the Schröder monoid, PC sub n. - Abdullahi Umar, Oct 02 2008
Sum_{n >= 0} a(n)/10^n - 1 = [9-sqrt(41)]/2. - Mark Dols (markdols99(AT)yahoo.com), Jun 22 2010
1/sqrt(41) = sum_{n >= 0} Delannoy number(n)/10^n. - Mark Dols (markdols99(AT)yahoo.com), Jun 22 2010
a(n) is also the dimension of the space Hoch(n) related to Hochschild two cocyles. - Ph. Leroux (ph_ler_math(AT)yahoo.com), Aug 24 2010
Let W = (w(n, k)) denote the augmentation triangle (as at A193091) of A154325; then w(n, n) = A006318(n). - Clark Kimberling, Jul 30 2011
Conjecture: For each n > 2, the polynomial sum_{k = 0}^n a(k)*x^{n-k} is irreducible modulo some prime p < n*(n+1). - Zhi-Wei Sun, Apr 07 2013
From Jon Perry, May 24 2013: (Start)
Consider a Pascal triangle variant where T(n, k) = T(n, k-1) + T(n-1, k-1) + T(n-1, k), i.e., the order of performing the calculation must go from left to right (A033877). This sequence is the rightmost diagonal.
Triangle begins:
1
1 2
1 4 6
1 6 16 22
1 8 30 68 90
(End)
a(n) is the number of permutations avoiding 2143, 3142 and one of the patterns among 246135, 254613, 263514, 524361, 546132. - Alexander Burstein, Oct 05 2014
a(n) is the number of semi-standard Young tableaux of shape n x 2 with consecutive entries. That is, j \in P and 1<=i<=j imply i \in P. - Graham H. Hawkes, Feb 15 2015
M. Aigner, Enumeration via ballot numbers, Discrete Math., 308 (2008), 2544-2563.
D. Andrica and E. J. Ionascu, On the number of polynomials with coefficients in [n], An. St. Univ. Ovidius Constanta, 2013, to appear.
M. D. Atkinson and T. Stitt, Restricted permutations and the wreath product, Discrete Math., 259 (2002), 19-36.
Barcucci, E.; Del Lungo, A.; Pergola, E.; and Pinzani, R.; Some permutations with forbidden subsequences and their inversion number. Discrete Math. 234 (2001), no. 1-3, 1-15.
Paul Barry, On Integer-Sequence-Based Constructions of Generalized Pascal Triangles, Journal of Integer Sequences, Vol. 9 (2006), Article 06.2.4.
P. Barry, Riordan-Bernstein Polynomials, Hankel Transforms and Somos Sequences, Journal of Integer Sequences, Vol. 15 2012, #12.8.2.
O. Bodini, A. Genitrini, F. Peschanski and N.Rolin, Associativity for binary parallel processes, CALDAM 2015.
S. Brlek, E. Duchi, E. Pergola and S. Rinaldi, On the equivalence problem for succession rules, Discr. Math., 298 (2005), 142-154.
William Y. C. Chen and Carol J. Wang, Noncrossing Linked Partitions and Large (3, 2)-Motzkin Paths, Discrete Math., 312 (2012), 1918-1922.
L. Comtet, Advanced Combinatorics, Reidel, 1974, p. 81, #21, (4), q_n.
D. E. Davenport, L. W. Shapiro and L. C. Woodson, The Double Riordan Group, The Electronic Journal of Combinatorics, 18(2) (2012), #P33.
Deng, Eva Y. P.; Dukes, Mark; Mansour, Toufik; and Wu, Susan Y. J.; Symmetric Schröder paths and restricted involutions. Discrete Math. 309 (2009), no. 12, 4108-4115. See p. 4109.
E. Deutsch, A bijective proof of an equation linking the Schroeder numbers, large and small, Discrete Math., 241 (2001), 235-240.
C. Domb and A. J. Barrett, Enumeration of ladder graphs, Discrete Math. 9 (1974), 341-358.. Dziemianczuk, Generalizing Delannoy numbers via counting weighted lattice paths, INTEGERS, 13 (2013), #A54.
Egge, Eric S., Restricted signed permutations counted by the Schröder numbers. Discrete Math. 306 (2006), 552-563. [Many applications of these numbers.]
S. Getu et al., How to guess a generating function, SIAM J. Discrete Math., 5 (1992), 497-499.
S. Gire, Arbres, permutations a motifs exclus et cartes planaire: quelques problemes algorithmiques et combinatoires, Ph.D. Thesis, Universite Bordeaux I, 1993.
N. S. S. Gu, N. Y. Li and T. Mansour, 2-Binary trees: bijections and related issues, Discr. Math., 308 (2008), 1209-1221.
Guruswami, Venkatesan, Enumerative aspects of certain subclasses of perfect graphs. Discrete Math. 205 (1999), 97-117.
Silvia Heubach and Toufik Mansour, Combinatorics of Compositions and Words, CRC Press, 2010.
D. E. Knuth, The Art of Computer Programming, Vol. 1, Section 2.2.1, Problem 11.
D. Kremer, Permutations with forbidden subsequences and a generalized Schröder number, Discrete Math. 218 (2000) 121-130.
Kremer, Darla and Shiu, Wai Chee; Finite transition matrices for permutations avoiding pairs of length four patterns. Discrete Math. 268 (2003), 171-183. MR1983276 (2004b:05006). See Table 1.
G. Kreweras, Sur les hiérarchies de segments, Cahiers Bureau Universitaire Recherche Opérationnelle, Cahier 20, Inst. Statistiques, Univ. Paris, 1973.
Laradji, A. and Umar, A. Asymptotic results for semigroups of order-preserving partial transformations. Comm. Algebra 34 (2006), 1071-1075. - Abdullahi Umar, Oct 11 2008
L. Moser and W. Zayachkowski, Lattice paths with diagonal steps, Scripta Math., 26 (1961), 223-229.
L. Shapiro and A. B. Stephens, Bootstrap percolation, the Schröder numbers and the N-kings problem, SIAM J. Discrete Math., Vol. 4 (1991), pp. 275-280.
N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence).
R. P. Stanley, Enumerative Combinatorics, Cambridge, Vol. 2, 1999; see page 178 and also Problems 6.39 and 6.40.
Fung Lam, Table of n, a(n) for n = 0..2000 (terms 0..100 by T. D. Noe)
A. Asinowski, G. Barequet, M. Bousquet-Mélou, T. Mansour, R. Pinter, Orders induced by segments in floorplans and (2-14-3,3-41-2)-avoiding permutations, arXiv:1011.1889 [math.CO].
C. Banderier and D. Merlini, Lattice paths with an infinite set of jumps, FPSAC02, Melbourne, 2002.
E. Barcucci, A. Del Lungo, E. Pergola and R. Pinzani, Permutations avoiding an increasing number of length-increasing forbidden subsequences
E. Barcucci, E. Pergola, R. Pinzani and S. Rinaldi, ECO method and hill-free generalized Motzkin paths
Paul Barry, Laurent Biorthogonal Polynomials and Riordan Arrays, arXiv preprint arXiv:1311.2292, 2013
Arkady Berenstein, Vladimir Retakh, Christophe Reutenauer and Doron Zeilberger, The Reciprocal of Sum_{n >= 0} a^n b^n for non-commuting a and b, Catalan numbers and non-commutative quadratic equations, arXiv preprint arXiv:1206.4225, 2012. - From N. J. A. Sloane, Nov 28 2012
J. Bloom, A. Burstein, Egge triples and unbalanced Wilf-equivalence, arXiv preprint arXiv:1410.0230, 2014
O. Bodini, A. Genitrini and F. Peschanski, The Combinatorics of Non-determinism, In proc. IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS'13), Leibniz International Proceedings in Informatics, pp 425-436, 2013.
Miklós Bóna, Cheyne Homberger, Jay Pantone, and Vince Vatter, Pattern-avoiding involutions: exact and asymptotic enumeration, arxiv:1310.7003, 2013.
M. Bremner, S. Madariaga, Lie and Jordan products in interchange algebras, arXiv preprint arXiv:1408.3069, 2014
M. Bremner, S. Madariaga, Permutation of elements in double semigroups, arXiv preprint arXiv:1405.2889, 2014
R. Brignall, S. Huczynska and V. Vatter, Simple permutations and algebraic generating functions
Marie-Louise Bruner and Martin Lackner, On the Likelihood of Single-Peaked Preferences, arXiv preprint, 2015.
Alexander Burstein, Sergi Elizalde and Toufik Mansour, Restricted Dumont permutations, Dyck paths and noncrossing partitions, arXiv math.CO/0610234. [Theorem 3.5]
A. Burstein, J. Pantone, Two examples of unbalanced Wilf-equivalence, arXiv:1402.3842, 2014.
D. Callan, An application of a bijection of Mansour, Deng, and Du, arXiv preprint arXiv:1210.6455, 2012.
F. Chapoton, F. Hivert, J.-C. Novelli, A set-operad of formal fractions and dendriform-like sub-operads, arXiv preprint arXiv:1307.0092, 2013
F. Chapoton, S. Giraudo, Enveloping operads and bicoloured noncrossing configurations, arXiv preprint arXiv:1310.4521, 2013
W. Y. C. Chen, L. H. Liu and C. J. Wang, Linked Partitions and Permutation Tableaux, arXiv preprint arXiv:1305.5357, 2013
J. Cigler, Hankel determinants of some polynomial sequences, 2012.
M. Ciucu, Perfect matchings of cellular graphs, J. Algebraic Combin., 5 (1996) 87-103.
S. Crowley, Integral Transforms of the Harmonic Sawtooth Map, The Riemann Zeta Function, Fractal Strings, and a Finite Reflection Formula, arXiv preprint arXiv:1210.5652, 2012.
S. Crowley, Mellin and Laplace Integral Transforms Related to the Harmonic Sawtooth Map and a Diversion Into The Theory Of Fractal Strings, 2012.
R. De Castro, A. L. Ramírez and J. L. Ramírez, Applications in Enumerative Combinatorics of Infinite Weighted Automata and Graphs, arXiv preprint arXiv:1310.2449, 2013
B. Drake, An inversion theorem for labeled trees and some limits of areas under lattice paths (Example 1.6.7), A dissertation presented to the Faculty of the Graduate School of Arts and Sciences of Brandeis University.
S.-P. Eu and T.-S. Fu, A simple proof of the Aztec diamond problem
Luca Ferrari and Emanuele Munarini, Enumeration of edges in some lattices of paths, arXiv preprint arXiv:1203.6792, 2012. - From N. J. A. Sloane, Oct 03 2012
P. Flajolet and R. Sedgewick, Analytic Combinatorics, 2009; see page 474.
Olivier Gérard, Illustration of initial terms
Étienne Ghys, Quand beaucoup de courbes se rencontrent — Images des Mathématiques, CNRS, 2009.
Étienne Ghys, Intersecting curves, Amer. Math. Monthly, 120 (2013), 232-242.
Samuele Giraudo, Operads from posets and Koszul duality, arXiv preprint, 2015.
Li Guo and Jun Pei, Averaging algebras, Schröder numbers and rooted trees, arXiv preprint arXiv:1401.7386, 2014, 2014
INRIA Algorithms Project, Encyclopedia of Combinatorial Structures 159
S. Kamioka, Laurent biorthogonal polynomials, q-Narayana polynomials and domino tilings of the Aztec diamonds, arXiv preprint arXiv:1309.0268, 2013
Sergey Kitaev and Jeffrey Remmel, Simple marked mesh patterns, arXiv preprint arXiv:1201.1323, 2012
Nate Kube and Frank Ruskey, Sequences That Satisfy a(n-a(n))=0, Journal of Integer Sequences, Vol. 8 (2005), Article 05.5.5.
Laradji, A. and Umar, A. Combinatorial results for semigroups of order-preserving partial transformations, Journal of Algebra 278, (2004), 342-359.
Laradji, A. and Umar, A. Combinatorial results for semigroups of order-decreasing partial transformations, J. Integer Seq. 7 (2004), 04.3.8
Philippe Leroux, Hochschild two-cocycles and the good triple (As,Hoch,Mag^\infty), arXiv:0806.4093
Peter Luschny, The Lost Catalan Numbers And The Schröder Tableaux.
J.-C. Novelli and J.-Y. Thibon, Hopf algebras and dendriform structures arising from parking functions, Fundamenta Mathematicae 193 (2007), no. 3, 189-241.
P. Peart and W.-J. Woan, Generating Functions via Hankel and Stieltjes Matrices, J. Integer Seqs., Vol. 3 (2000), #00.2.1.
E. Pergola and R. A. Sulanke, Schröder Triangles, Paths and Parallelogram Polyominoes, J. Integer Sequences, 1 (1998), #98.1.7.
Markus Saers, Dekai Wu and Chris Quirk, On the Expressivity of Linear Transductions, The 13th Machine Translation Summit.]
R. A. Sulanke, Moments of generalized Motzkin paths, J. Integer Sequences, Vol. 3 (2000), #00.1.
R. A. Sulanke, Moments, Narayana numbers and the cut and paste for lattice paths
R. A. Sulanke, Bijective recurrences concerning Schröder paths, Electron. J. Combin. 5 (1998), Research Paper 47, 11 pp.
Zhi-Wei Sun, On Delannoy numbers and Schröder numbers, Journal of Number Theory, Volume 131, Issue 12, December 2011, Pages 2387-2397; doi:10.1016/j.jnt.2011.06.005; arXiv 1009.2486.
Zhi-Wei Sun, Conjectures involving combinatorial sequences, arXiv preprint arXiv:1208.2683, 2012. - N. J. A. Sloane, Dec 25 2012. - N. J. A. Sloane, Dec 28 2012
Paul Tarau, On Type-directed Generation of Lambda Terms, preprint, 2015.
V. K. Varma and H. Monien, Renormalization of two-body interactions due to higher-body interactions of lattice bosons, arXiv preprint arXiv:1211.5664, 2012. - N. J. A. Sloane, Jan 03 2013
Yi Wang and Bao-Xuan Zhu, Proofs of some conjectures on monotonicity of number-theoretic and combinatorial sequences, arXiv preprint arXiv:1303.5595, 2013
M. S. Waterman, Home Page (contains copies of his papers)
Eric Weisstein's World of Mathematics, Schröder Number
J. West, Generating trees and the Catalan and Schröder numbers, Discrete Math. 146 (1995), 247-262.
J. Winter, M. M. Bonsangue and J. J. M. M. Rutten, Context-free coalgebras, 2013.
S.-n. Zheng and S.-l. Yang, On the-Shifted Central Coefficients of Riordan Matrices, Journal of Applied Mathematics 2014, Article ID 848374.
Index entries for "core" sequences
G.f.: (1-x-(1-6*x+x^2)^(1/2))/(2*x).
a(n) = 2*hypergeom([ -n+1, n+2], [2], -1). - Vladeta Jovovic, Apr 24 2003
For n > 0, a(n) = (1/n)*sum(k = 0, n, 2^k*C(n, k)*C(n, k-1)). - Benoit Cloitre, May 10 2003
The g.f. satisfies (1-x)A(x)-xA(x)^2 = 1. - Ralf Stephan, Jun 30 2003
For the asymptotic behavior see A001003 (remembering that A006318 = 2*A001003). - N. J. A. Sloane, Apr 10 2011
Row sums of A088617 and A060693. a(n) = sum (k = 0..n, C(n+k, n)*C(n, k)/k+1). - Philippe Deléham, Nov 28 2003
With offset 1 : a(1) = 1, a(n) = a(n-1) + sum(i = 1, n-1, a(i)*a(n-i)). - Benoit Cloitre, Mar 16 2004
a(n) = sum(k = 0, n, A000108(k)*binomial(n+k, n-k)). - Benoit Cloitre, May 09 2004
a(n) = Sum_{k = 0..n} A011117(n, k). - Philippe Deléham, Jul 10 2004
a(n) = (CentralDelannoy[n+1] - 3 CentralDelannoy[n])/(2n) = (-CentralDelannoy[n+1] + 6 CentralDelannoy[n] - CentralDelannoy[n-1])/2 for n>=1 where CentralDelannoy is A001850. - David Callan,
A123164(n+1) - A123164(n) = (2n+1)a (n >= 0);
and 2*A123164(n) = (n+1)a(n) - (n-1)a(n-1) (n > 0). - Abdullahi Umar, Oct 11 2008
Define the general Delannoy numbers d(i, j) as in A001850. Then a(k) = d(2*k, k) - d(2*k, k-1) and a(0) = 1, sum[{(-1)^j}*{d(n, j) + d(n-1, j-1)}*a(n-j)] = 0, j = 0, 1, ..., n. - Peter E John, Oct 19 2006
Given an integer t >= 1 and initial values u = [a_0, a_1, ..., a_{t-1}], we may define an infinite sequence Phi(u) by setting a_n = a_{n-1} + a_0*a_{n-1} + a_1*a_{n-2} + ... + a_{n-2}*a_1 for n >= t. For example, Phi([1]) is the Catalan numbers A000108. The present sequence is (essentially) Phi([2]). - Gary W. Adamson, Oct 27 2008
G.f.: 1/(1-2x/(1-x/(1-2x/(1-x/(1-2x/(1-x/(1-2x/(1-x/(1-2x/(1-x.... (continued fraction). - Paul Barry, Dec 08 2008
G.f.: 1/(1-x-x/(1-x-x/(1-x-x/(1-x-x/(1-x-x/(1-... (continued fraction). - Paul Barry, Jan 29 2009
a(n) ~ ((3+2*sqrt(2))^n)/(n*sqrt(2*Pi*n)*sqrt(3*sqrt(2)-4))*(1-(9*sqrt(2)+24)/(32*n)+...). - G. Nemes (nemesgery(AT)gmail.com), Jan 25 2009
Logarithmic derivative yields A002003. - Paul D. Hanna, Oct 25 2010
a(n) = the upper left term in M^(n+1), M = the production matrix:
1, 1, 0, 0, 0, 0,...
1, 1, 1, 0, 0, 0,...
2, 2, 1, 1, 0, 0,...
4, 4, 2, 1, 1, 0,...
8, 8, 8, 2, 1, 1,...
... - Gary W. Adamson, Jul 08 2011
a(n) is the sum of top row terms in Q^n, Q = an infinite square production matrix as follows:
1, 1, 2, 0, 0, 0,...
1, 1, 1, 2, 0, 0,...
1, 1, 1, 1, 2, 0,...
1, 1, 1, 1, 1, 2,...
... - Gary W. Adamson, Aug 23 2011
From Tom Copeland, Sep 21 2011: (Start)
With F(x) = (1-3*x-sqrt(1-6*x+x^2))/(2*x) an o.g.f. (nulling the n = 0 term) for A006318, G(x) = x/(2+3*x+x^2) is the compositional inverse.
Consequently, with H(x) = 1/ (dG(x)/dx) = (2+3*x+x^2)^2 / (2-x^2),
a(n)=(1/n!)*[(H(x)*d/dx)^n] x evaluated at x = 0, i.e.,
F(x) = exp[x*H(u)*d/du] u, evaluated at u = 0. Also, dF(x)/dx = H(F(x)). (End)
a(n-1) = number of ordered complete binary trees with n leaves having k internal vertices colored black, the remaining n - 1 - k internal vertices colored white, and such that each vertex and its rightmost child have different colors ([Drake, Example 1.6.7]). For a refinement of this sequence see A175124. - Peter Bala, Sep 29 2011
Recurrence: (n-2)*a(n-2) - 3*(2*n-1)*a(n-1) + (n+1)*a(n) = 0. - Vaclav Kotesovec, Oct 05 2012
G.f.: A(x) = (1 - x - sqrt(1-6x+x^2))/(2*x)= (1 - G(0))/x; G(k) = 1 + x - 2*x/G(k+1); (continued fraction, 1-step). - Sergei N. Gladkovskii, Jan 04 2012
G.f.: A(x) = (1 - x - sqrt(1-6x+x^2))/(2*x)= (G(0)-1)/x; G(k)= 1 - x/(1 - 2/G(k+1)); (continued fraction, 2-step). - Sergei N. Gladkovskii, Jan 04 2012
a(n+1) = a(n) + sum (a(k)*(n-k): k = 0..n). - Reinhard Zumkeller, Nov 13 2012
G.f.: 1/Q(0) where Q(k) = 1 + k*(1-x) - x - x*(k+1)*(k+2)/Q(k+1); (continued fraction). - Sergei N. Gladkovskii, Mar 14 2013
a(-1-n) = a(n). - Michael Somos, Apr 03 2013
G.f.: 1/x - 1 - U(0)/x, where U(k)= 1 - x - x/U(k+1) ; (continued fraction). - Sergei N. Gladkovskii, Jul 16 2013
G.f.: (2 - 2*x - G(0))/(4*x), where G(k)= 1 + 1/( 1 - x*(6-x)*(2*k-1)/(x*(6-x)*(2*k-1) + 2*(k+1)/G(k+1) )); (continued fraction). - Sergei N. Gladkovskii, Jul 16 2013
a(n) = 1/(n+1) (Sum_{j=0..n} C(n+j,j)*C(n+j+1,j+1)*(Sum_{k=0..n-j} (-1)^k*C(n+j+k,k))). - Graham H. Hawkes, Feb 15 2015
a(n) = hypergeom([-n, n+1], [2], -1). - Peter Luschny, Mar 23 2015
a(n) = sqrt(2) * LegendreP(n,-1,3) where LegendreP is the associated Legendre function of the first kind (in Maple's notation). - Robert Israel, Mar 23 2015
a(3) = 22 since the top row of Q^n = (6, 6, 6, 4, 0, 0, 0,...); where 22 = (6 + 6 + 6 + 4).
G.f. = 1 + 2*x + 6*x^2 + 22*x^3 + 90*x^4 + 394*x^5 + 1806*x^6 + 8858*x^7 + 41586*x^8 + ...
Order := 24: solve(series((y-y^2)/(1+y), y)=x, y); # then A(x)=y(x)/x
BB:=(-1-z-sqrt(1-6*z+z^2))/2: BBser:=series(BB, z=0, 24): seq(coeff(BBser, z, n), n=1..23); # Zerinvary Lajos, Apr 10 2007
A006318_list := proc(n) local j, a, w; a := array(0..n); a[0] := 1;
for w from 1 to n do a[w] := 2*a[w-1]+add(a[j]*a[w-j-1], j=1..w-1) od; convert(a, list)end: A006318_list(22); # Peter Luschny, May 19 2011
A006318 := n-> add(binomial(n+k, n-k) * binomial(2*k, k)/(k+1), k=0..n): seq(A006318(n), n=0..22); # Johannes W. Meijer, Jul 14 2013
seq(simplify(hypergeom([-n, n+1], [2], -1)), n=0..100); # Robert Israel, Mar 23 2015
a[0] = 1; a[n_Integer] := a[n] = a[n - 1] + Sum[a[k]*a[n - 1 - k], {k, 0, n - 1}]; Array[a[#] &, 30]
InverseSeries[Series[(y - y^2)/(1 + y), {y, 0, 24}], x] (* then A(x) = y(x)/x - Len Smiley, Apr 11 2000 *)
CoefficientList[Series[(1 - x - (1 - 6x + x^2)^(1/2))/(2x), {x, 0, 30}], x] (* Harvey P. Dale, May 01 2011 *)
a[ n_] := 2 Hypergeometric2F1[ -n + 1, n + 2, 2, -1]; (* Michael Somos, Apr 03 2013 *)
a[ n_] := With[{m = If[ n < 0, -1 - n, n]}, SeriesCoefficient[(1 - x - Sqrt[ 1 - 6 x + x^2])/(2 x), {x, 0, m}]]; (* Michael Somos, Jun 10 2015 *)
(PARI) {a(n) = if( n<0, n = -1-n); polcoeff( (1 - x - sqrt( 1 - 6*x + x^2 + x^2 * O(x^n))) / 2, n+1)}; /* Michael Somos, Apr 03 2013 */
(PARI) {a(n) = if( n<1, 1, sum( k=0, n, 2^k * binomial( n, k) * binomial( n, k-1)) / n)};
(Sage) # Generalized algorithm of L. Seidel
def A006318_list(n) :
D = [0]*(n+1); D[1] = 1
b = True; h = 1; R = []
for i in range(2*n) :
if b :
for k in range(h, 0, -1) : D[k] += D[k-1]
h += 1;
else :
for k in range(1, h, 1) : D[k] += D[k-1]
R.append(D[h-1]);
b = not b
return R
A006318_list(23) # Peter Luschny, Jun 02 2012
(Haskell)
a006318 n = a004148_list !! n
a006318_list = 1 : f [1] where
f xs = y : f (y : xs) where
y = head xs + sum (zipWith (*) xs $ reverse xs)
-- Reinhard Zumkeller, Nov 13 2012
(Python)
from gmpy2 import divexact
A006318 = [1, 2]
for n in range(3, 10**3):
....A006318.append(divexact(A006318[-1]*(6*n-9)-(n-3)*A006318[-2], n))
# Chai Wah Wu, Sep 01 2014
Apart from leading term, twice A001003. Cf. A025240.
Sequences A085403, A086456, A103137, A112478 are essentially the same sequence.
Main diagonal of A033877.
Cf. A088617, A060693. Row sums of A104219. Bisections give A138462, A138463.
Cf. A144156. - Gary W. Adamson, Sep 12 2008
Cf. A002003. - Paul D. Hanna, Oct 25 2010
Row sums of A175124.
Cf. A004148.
Sequence in context: A049134 A086456 * A155069 A103137 A165546 A053617
Adjacent sequences: A006315 A006316 A006317 * A006319 A006320 A006321
nonn,easy,core,nice
N. J. A. Sloane
More terms from David W. Wilson
Edited by Charles R Greathouse IV, Apr 20 2010
approved | https://oeis.org/A006318 | CC-MAIN-2015-32 | refinedweb | 4,185 | 68.36 |
The Most Important Code Metrics You’ve Never Heard Of
Editorial Note: I originally wrote this post for the NDepend blog. Head on over and check out the original. If software architecture interests you or you aspire to that title, there’s a pretty focused set of topics that will interest you.
Oh how I hope you don’t measure developer productivity by lines of code. As Bill Gates once ably put it, “measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.” No doubt, you have other, better reasoned metrics that you capture for visible progress and quality barometers. Automated test coverage is popular (though be careful with that one). Counts of defects or trends in defect reduction are another one. And of course, in our modern, agile world, sprint velocity is ubiquitous.
But today, I’d like to venture off the beaten path a bit and take you through some metrics that might be unfamiliar to you, particularly if you’re no longer technical (or weren’t ever). But don’t leave if that describes you — I’ll help you understand the significance of these metrics, even if you won’t necessarily understand all of the nitty-gritty details.
Perhaps the most significant factor here is that the metrics I’ll go through can be tied, relatively easily, to stakeholder value in projects. In other words, I won’t just tell you the significance of the metrics in terms of what they say about the code. I’ll also describe what they mean for people invested in the project’s outcome.
Type Rank
It’s possible that you’ve heard of the concept of Page Rank. If you haven’t, page rank was, for a long time, the method by which Google determined which sites on the internet were most important. This should make intuitive sense on some level. Amazon has a high page rank — if it went down, millions of lives would be disrupted, stocks would plummet, and all sorts of chaos would ensure. The blog you created that one time and totally meant to add to over the years has a low page rank — no one, yourself included, would notice if it stopped working.
It turns out that you can actually reason about pieces of code in a very similar way. Some bits of code in the code base are extremely important to the system, with inbound and outbound dependencies. Others exist at the very periphery or are even completely useless (see the section on dead code). Not all code is created equally. This scheme for ranking code by importance is called “Type Rank” (at least at the level of type granularity — methods can also be ranked).
You can use Type Rank to create a release riskiness score. All you’d really need to do is have a build that tabulated which types had been modified and what their type rank was, and this would create a composite index of release riskiness. Each time you were gearing up for deployment, you could look at the score. If it were higher than normal, you’d want to budget extra time and money for additional testing efforts and issue remediation strategies.
Cohesion
Cohesion of modules in a code base can loosely be described as “how well is the code base organized?” To put it a bit more concretely, cohesion is the idea that things with common interest are grouped together while unrelated things are not. A cohesive house would have specialized rooms for certain purposes: food preparation, food consumption, family time, sleeping, etc. A non-cohesive house would have elements of all of those things strewn about all over the house, resulting in a scenario where a broken refrigerator fan might mean you couldn’t sleep or work at your desk due to noise.
Keeping track of the aggregate cohesiveness score of a codebase will give you insight into how likely your team is to look ridiculous in the face of an issue. Code bases with low cohesion are ones in which unrelated functionality is bolted together inappropriately, and this sort of thing results in really, really odd looking bugs that can erode your credibility.
Imagine speaking on your team’s behalf and explaining a bug that resulted in a significant amount of client data being clobbered. When pressed for the root cause, you had to look the person asking directly in the eye and say, “well, that happened because we changed the font of the labels on the login page.”
You would sound ridiculous. You’d know it. The person you were talking to would know it. And you’d find your credibility quickly evaporating. Keeping track of cohesion lets you keep track of the likelihood of something like that.
Dependency Cycles
So far, I’ve talked about managing risk as it pertains to defects: the risk of encountering them on release, and the risk of encountering weird or embarrassing ones. I’m going to switch gears, now, and talk about the risk of being caught flat-footed, unable to respond to a changing environment or a critical business need.
Dependency cycles in your code base represent a form of inappropriate coupling. These are situations where two or more things are mutually dependent in an architectural world where it is far better for dependencies to flow one way. As a silly but memorable example, consider the situation of charging your phone, where your phone depends on your house’s electrical system to be charged. Would you hire an electrician to come in and create a situation where your house’s electricity depended on the presence of your charging phone?
All too often, we do this in code, and it creates situations as ludicrous as the phone-electrical example would. When the business asks, “how hard would it be to use a different logging framework,” you don’t want the answer to be, “we’d basically have to rewrite everything from scratch.” That makes as much sense as not being able to take your phone with you anywhere because your appliances would stop working.
So, keep an eye out for dependency cycles. These are the early warning light indicators that you’re heading for something like this.
Dead Code
One last thing to keep an eye out for is dead code. Dead code is code that can never possibly be called during the running application’s lifecycle. It just sits in your codebase taking up space to no good end.
That may sound benign, but every line of code in your code base carries a small, cognitive maintenance weight. The more code there is, the more results come back in text searches of the code base, the more files there are to lose and confuse developers, and the more general friction is encountered when working with the system. This has a very real cost in the labor required to maintain the code.
Use Wisely
These are metrics about which fewer people know, so the industry isn’t rife with stories about people gaming them, the way it is with something like unit test coverage. But that doesn’t mean they can’t be gamed. For instance, it’s possible to have a nightmarish code base without any actual dead code — perversely, dead code could be eliminated by finding everything useless in the code base and implementing calls to it.
The metrics I’ve outlined today, if you make them big and visible to all, should serve as a conversation starter. Why did we introduce a dependency cycle? Should we be concerned about the lack of cohesion in modules? Use them in this fashion, and your group can save real money and produce better output. Use them in the wrong fashion, and they’ll be just another ineffective management bludgeon straight out of a Dilbert comic.
I’m surprised you highlighted these metrics, without mentioning tools or techniques for gathering them. Two of your metrics, cohesion and dependency cycles, can be simultaneously measured with Cumulative Component Dependency (CCD) or Average Component Dependency (ACD). Those metrics were introduced in “Large Scale Architecture” by John Lakos, long ago…I am very sad that practically nobody has heard of them today.
The original post is on the site of a static analysis tool vendor, so the tool/technique is sort of implied 🙂
(I realize that wouldn’t necessarily translate here to my blog, though — but that’s why there was no mention)
“The blog you created that one time and totally meant to add to over the years”… (sigh)
🙂
Would Type Rank give the main() method of a program a low rank as it is at the periphery?
Off the cuff, I don’t think so, but I’d have to check. (Don’t have my development rig with me at the moment).
Software Engineering undergrad here – I’ve gotten a lesson in coupling vs cohesion in *all* of my classes. Dependency cycles have been mentioned sparingly however.
Frankly, I’m impressed that any of that is mentioned. The distinction doesn’t surprise me, necessarily, though. Coupling/cohesion can bite you in a codebase of any size. The cycles among namespaces, however, don’t really rear their ugly head until the project has grown and you’re looking to reorganize, months or years later.
I was surprised by the metrics you mention, however they make a lot of sense. It’s hard when you have non-technical managers who still insist lines of code are a valid metric.
/shudders/
I keep thinking, it’s {insert current year}, no one is still worried about LOC as a productivity metric anymore. And I’m always wrong.
That is because non-technicals think that production = more so they translate this into “more code”. They have to be educated that this is only ever relevant to a production process where there is a finished product involved. Software is never a finished product.
How do measure type rank?
I believe it’s proprietary, so I’m not sure. | https://daedtech.com/important-code-metrics-youve-never-heard/ | CC-MAIN-2019-30 | refinedweb | 1,680 | 61.06 |
Celery 1.0.6 (stable) documentation
A click counter should be easy, right? Just a simple view that increments a click in the DB and forwards you to the real destination.
This would work well for most sites, but when traffic starts to increase, you are likely to bump into problems. One database write for every click is not good if you have millions of clicks a day.
So what can you do? In this tutorial we will send the individual clicks as messages using carrot, and then process them later with a celery periodic task.
Celery and carrot is excellent in tandem, and while this might not be the perfect example, you’ll at least see one example how of they can be used to solve a task.
The model is simple, Click has the URL as primary key and a number of clicks for that URL. Its manager, ClickManager implements the increment_clicks method, which takes a URL and by how much to increment its count by.
clickmuncher/models.py:
from django.db import models from django.utils.translation import ugettext_lazy as _ class ClickManager(models.Manager): def increment_clicks(self, for_url, increment_by=1): """Increment the click count for an URL. >>> Click.objects.increment_clicks("", 10) """ click, created = self.get_or_create(url=for_url, defaults={"click_count": increment_by}) if not created: click.click_count += increment_by click.save() return click.click_count class Click(models.Model): url = models.URLField(_(u"URL"), verify_exists=False, unique=True) click_count = models.PositiveIntegerField(_(u"click_count"), default=0) objects = ClickManager() class Meta: verbose_name = _(u"URL clicks") verbose_name_plural = _(u"URL clicks")
The model is normal django stuff, nothing new there. But now we get on to the messaging. It has been a tradition for me to put the projects messaging related code in its own messaging.py module, and I will continue to do so here so maybe you can adopt this practice. In this module we have two functions:
send_increment_clicks
This function sends a simple message to the broker. The message body only contains the URL we want to increment as plain-text, so the exchange and routing key play a role here. We use an exchange called clicks, with a routing key of increment_click, so any consumer binding a queue to this exchange using this routing key will receive these messages.
process_clicks
This function processes all currently gathered clicks sent using send_increment_clicks. Instead of issuing one database query for every click it processes all of the messages first, calculates the new click count and issues one update per URL. A message that has been received will not be deleted from the broker until it has been acknowledged by the receiver, so if the receiver dies in the middle of processing the message, it will be re-sent at a later point in time. This guarantees delivery and we respect this feature here by not acknowledging the message until the clicks has actually been written to disk.
Note: This could probably be optimized further with some hand-written SQL, but it will do for now. Let’s say it’s an exercise left for the picky reader, albeit a discouraged one if you can survive without doing it.
On to the code...
clickmuncher/messaging.py:
from carrot.connection import DjangoBrokerConnection from carrot.messaging import Publisher, Consumer from clickmuncher.models import Click def send_increment_clicks(for_url): """Send a message for incrementing the click count for an URL.""" connection = DjangoBrokerConnection() publisher = Publisher(connection=connection, exchange="clicks", routing_key="increment_click", exchange_type="direct") publisher.send(for_url) publisher.close() connection.close() def process_clicks(): """Process all currently gathered clicks by saving them to the database.""" connection = DjangoBrokerConnection() consumer = Consumer(connection=connection, queue="clicks", exchange="clicks", routing_key="increment_click", exchange_type="direct") # First process the messages: save the number of clicks # for every URL. clicks_for_url = {} messages_for_url = {} for message in consumer.iterqueue(): url = message.body clicks_for_url[url] = clicks_for_url.get(url, 0) + 1 # We also need to keep the message objects so we can ack the # messages as processed when we are finished with them. if url in messages_for_url: messages_for_url[url].append(message) else: messages_for_url[url] = [message] # Then increment the clicks in the database so we only need # one UPDATE/INSERT for each URL. for url, click_count in clicks_for_urls.items(): Click.objects.increment_clicks(url, click_count) # Now that the clicks has been registered for this URL we can # acknowledge the messages [message.ack() for message in messages_for_url[url]] consumer.close() connection.close()
This is also simple stuff, don’t think I have to explain this code to you. The interface is as follows, if you have a link to you would want to count the clicks for, you replace the URL with:
and the count view will send off an increment message and forward you to that site.
clickmuncher/views.py:
from django.http import HttpResponseRedirect from clickmuncher.messaging import send_increment_clicks def count(request): url = request.GET["u"] send_increment_clicks(url) return HttpResponseRedirect(url)
clickmuncher/urls.py:
from django.conf.urls.defaults import patterns, url from clickmuncher import views urlpatterns = patterns("", url(r'^$', views.count, name="clickmuncher-count"), )
Processing the clicks every 30 minutes is easy using celery periodic tasks.
clickmuncher/tasks.py:
from celery.task import PeriodicTask from clickmuncher.messaging import process_clicks from datetime import timedelta class ProcessClicksTask(PeriodicTask): run_every = timedelta(minutes=30) def run(self, \*\*kwargs): process_clicks()
We subclass from celery.task.base.PeriodicTask, set the run_every attribute and in the body of the task just call the process_clicks function we wrote earlier.
There are still ways to improve this application. The URLs could be cleaned so the URL and is the same. Maybe it’s even possible to update the click count using a single UPDATE query?
If you have any questions regarding this tutorial, please send a mail to the mailing-list or come join us in the #celery IRC channel at Freenode: | https://docs.celeryproject.org/en/1.0-archived/tutorials/clickcounter.html | CC-MAIN-2019-39 | refinedweb | 963 | 50.12 |
plugin is enabled on the Settings/Preferences | Plugins page, tab Installed, see Managing plugins for details.
Create.
Generate.
Install React in an empty RubyMine project
In this case, you will have to configure the build pipeline yourself as described in Building a React application below. Learn more about adding React to a project from the React official website.
Create an empty RubyMine project
Click Create New Project on the Welcome screen or select from the main menu. The New Project dialog opens.
In the left-hand pane, choose RubyMine and download the required dependencies.' or Run 'yarn install' in the popup:
You can use npm, Yarn 1, or Yarn 2, see npm and Yarn for details.
Project security
A webpack configuration file from external sources may contain some potentially malicious code that can cause problems when RubyMine executes the configuration on opening a JavaScript file with
import statements. For the sake of security, when you open a React project with webpack, RubyMine analyzes it, resolves the located data, and displays a warning that lets you decide whether the project is trustworthy or not.
If you click Skip, RubyMine disables analysis of the webpack configuration in the current project. As a result, RubyMine might not resolve some of the imports in the project or add imports that don't use resolution rules configured in the webpack configuration.
Learn more from Webpack and Project security.
Code completion
RubyMine provides code completion for React APIs and JSX in JavaScript code. Code completion works for React methods, React-specific attributes, HTML tags and component names, React events, component properties, and so on. Learn more from the React official website.
To get code completion for React methods and React-specific attributes, you need to have the react.js library file somewhere in your project. Usually the library is already in your node_modules folder.
Complete React methods, attributes, and events
By default, the code completion popup is displayed automatically as you type. For example:
={} or quotes
"".
By default, curly braces are inserted. You can have RubyMine always add quotes or choose between quotes or braces based on the type from a TypeScript definition file (d.ts) file. To change the default setting, open the Settings/Preferences dialog Ctrl+Alt+S, go to and select the applicable option from the Add for JSX attributes list.
Completion also works for JavaScript expressions inside curly braces. This applies to all the methods and functions that you have defined:
Complete HTML tags and component names
RubyMine provides code completion for HTML tags and component names that you have defined inside methods in JavaScript or inside other components:
Completion also works for imported components with ES6 style syntax:
Complete.
Transfer._11<<
Navigate.
To view component definition, press Ctrl+Shift+I.
To view quick documentation for a component, press Ctrl+Q. Learn more from JavaScript documentation look-up.
Lint.. With ESLint, you can also use JavaScript Standard Style as well as lint your TypeScript code.
To have ESLint properly understand React JSX syntax, you need eslint-plugin-react. With this plugin, you are warned, for example, when the display name is not set for a React component, or when some dangerous JSX properties are used:.
In the
pluginsobject, add
react.
In the
rulesobject, you can list ESLint built-in rules that you would like to enable, as well as rules available via the react plugin.{ "parser": "babel-eslint", "env": { "browser": true, "es6": true, "jest": true }, "rules": { "arrow-parens": ["error", "as-needed", { "requireForBlockBody": true }], "react/jsx-props-no-spreading": "off", "react/jsx-sort-props": ["error", { "reservedFirst": ["key"] }], "react/require-default-props": "off", "react/sort-prop-types": "error", "react/state-in-constructor": ["error", "never"], "semi-spacing": "warn" }, "overrides": [ { "files": [ "sample/**", "test/**" ], "rules": { "import/no-unresolved": "off" } } ] }
Learn more about ESLint and
react plugin configuration from the ESLint official website.
Code refactoring in.
Rename a state value
When you rename a state value, RubyMine suggests renaming the corresponding setter (the function that updates this state value in a React useState hook).
Place the caret within the name of the state value and press Shift+F6 or selectfrom the main menu of from the context menu.
Specify the new value name and press Enter. The focus moves to the setter where the new name of the value is suggested. Press Enter to accept the suggestion..
Run and debug.
Build a React application
You need to set up the build process if you installed React in an existing RubyMine project. Learn about various ways to configure a build pipeline for your React application from React official website.
Test a React application
You can run and debug Jest tests in React applications created with create-react-app. Before you start, make sure the react-scripts package is added to the dependencies object of your package.json.
You can run and debug Jest tests right from the editor, or from the Project tool window, or via a run/debug configuration, see Jest for details.
Run a test from the editor
Click
or
in the gutter and select Run <test_name> from the list.
You can also see whether a test has passed or failed right in the editor, thanks to the test status icons
and
in the gutter.. | https://www.jetbrains.com/help/ruby/react.html | CC-MAIN-2021-43 | refinedweb | 868 | 54.12 |
Please help so I do not lose any sleep over this!
I'd really like to understand this once and for all ;-)
My program looks like this:
[gcc -dumpversion = 3.23]
/* C++ */
#include <iostream>
int main(void) {
cout << "Hello World!" << endl;
}

Invocation:
h:\djgpp>gcc -x c++ hw.cpp -o hw.exe
(is this okay?)
Outcome:
hw.cpp: In function `int main()':
hw.cpp:7: `cout' undeclared (first use this function)
hw.cpp:7: (Each undeclared identifier is reported only once for each
function it appears in.)
hw.cpp:7: `endl' undeclared (first use this function)
WHAT is wrong here?
I have \djgpp\lang\cxx\.... as extracted
I have \djgpp\lib\gcc-lib\... etc; as extracted
and files from both GCC and GPP archives are carefully
copied over...
why am I still having problems?
Which version of cxxfilt.exe is supposed to be in
\DJGPP\bin ?
I have the first one that was extracted in there (the one from before GPP).
Apparently ordinary 'C' programs are compiling fine.
//RadSurfer//
THANKS! | http://www.verycomputer.com/12_cca08a8cdde43bc2_1.htm | CC-MAIN-2019-39 | refinedweb | 168 | 80.17 |
A quick, brief history of my homework assignment.
Using the code examples that you created, sort the following items: Rocket J. Squirrel, Bullwinkle J. Moose, Boris Badenov, Natasha Fatale, Fearless Leader, Mr. Big, Cloyd, Gidney, Metal-Munching Moon Mice, Capt. Peter "Wrongway" Peachfuzz, Edgar, and Chauncy
Your program should perform the following tasks.
1. Read the data in from the keyboard into the array interactively.
2. Print the array.
3. Sort the array in ascending order using the bubble sort;
4. Print the array;
5. Sort the array in descending order using the bubble sort;
6. Print the array.
Further specification:
You must use a separate method for each task.
There must be a method for loading the data into the array.
There must be one and ONLY one method for printing the contents of the array.
There must be a method for sorting the array in ascending order.
There must be a method for sorting the array in descending order.
If you do not use the bubble sort in the methods which sort the data, do not bother handing your code in to me.
The list of names is long. Save yourself some aggravation with these two suggestions:
1. Use the nextLine() method to read the String data from the keyboard into the array.
2. Do NOT enter all the data each time you test. Start by entering only one character instead of
each full name. Enter all of the entire names when you are sure all of your sorting and printing
algorithms are running correctly.
import java.util.*;

public class lab7 {

    public static void main(String[] args) {
        String x = " ";
        String x1 = "";
        x1 = storedList(x);
        printArray(x1);
    }

    public static String storedList(String stored) {
        Scanner scan = new Scanner(System.in);
        String[] list = new String[12];
        System.out.println("Hello user, I will sort your list in asscending and descending order? ");
        System.out.println("Please enter your string.");
        for (int i = 0; i <= 11; i++) {
            list[i] = scan.nextLine();
        }
        return stored;
    }

    public static void printArray(String[] displayed) {
        String[] list = new String[12];
        for (int i = 0; i <= 11; i++) {
            System.out.print(list[i] + " ");
        }
    }
}
What I'm having issues with is passing arguments into each method. I was able to figure out how to make one method accept my array, but I'm having trouble creating a method to print the array. I keep receiving this compiler error; please refer to the attached screenshot. PLEASE help!
Thanks Again! | http://www.javaprogrammingforums.com/whats-wrong-my-code/26214-need-help-creating-method-array.html | CC-MAIN-2014-15 | refinedweb | 408 | 68.97 |
Simple Recurrent Neural Network
Re-submission Note: I originally submitted an RNN post but realized I made some major mistakes (I'm learning as I go). This is almost a complete re-do. One of the issues was that the RNN was not training properly, and I have not been able to get it to reliably train with my own implementation of gradient descent, so here I will calculate the gradients and hand those off to a scipy optimizer to find the weights.
I'm assuming you already know how to build a simple neural network (e.g. to solve XOR) and train it using backpropagation. I have a previous post covering backpropagation/gradient descent and at the end of that tutorial I build and train a neural network to solve the XOR problem, so I recommend making sure you understand that because I am basing the RNNs I demonstrate here off of that. I also assume you have a functional understanding of Python/numpy.
This blog is my journey into learning the fundamentals of machine learning and other quantitative principles and applications, and is generally in chronological order of my learning. After I successfully learned how to make feedforward neural networks and train them, I really wanted to learn how to make recurrent neural networks (RNNs). I understood that they were for temporal/sequential data and thus they could learn relationships through time. But I could not for the life of me figure out how to make the jump from a feedforward neural net to an RNN until I watched a YouTube video by Jeff Heaton (which I highly suggest you watch). Then I understood that RNNs can be implemented almost exactly like an ordinary feedforward neural network. I will re-explain some of the contents of that video here as I build a simple recurrent (Elman) neural network to solve a temporal version of the XOR problem (my favorite toy problem). I will also show you how to do basic time series/sequence prediction with a mini-mini-char-RNN implementation.
We're going to build a simple recurrent neural network to solve a sequential/temporal version of the XOR problem.
Just as a reminder, here is the truth table for XOR.
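| $x_1$ | $x_2$ | $y$ |
|:---:|:---:|:---:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |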
So normally, in a feedforward neural network, we would feed each training example as a tuple $(x_1, x_2)$ and we would expect an output $h(x)$ that closely matches $y$ if the network has been trained. As review, here's what our ordinary feedforward XOR architecture looks like:
In an RNN, we're going to add in the time dimension. But how? Well we simply reformat our training data to be in a time-dependent sequence.
Here's our new (temporal) training data:
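| $t$ | $x$ | $y$ |
|:---:|:---:|:---:|
| 0 | 0 | ? |
| 1 | 0 | 0 |
| 2 | 1 | 1 |
| 3 | 1 | 0 |
| 4 | 0 | 1 |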
Where $x_0 ... x_n$ represents our training data, $y_0 ... y_n$ are the corresponding expected values, and $t_0 ... t_n$ represents our time steps. I arranged a sequence of bits [0 0 1 1 0] such that we can XOR the current bit and the previous bit to get the result. For every time step our RNN is going to output the XOR of the previous 2 bits, so notice that for the first bit $y = ?$ because there is no previous bit to XOR, so we just ignore what the RNN outputs. But for $x=0, t=1$ we see that $y=0$ because XOR(0,0)=0. Also notice how $time$ is in discrete, integer steps. Some algorithms may actually have a continuous time implementation and that's something I'll likely explore in a future post.
import numpy as np

X = np.matrix('0;0;1;1;0')
Y = np.matrix('0;0;1;0;1') # first bit should be ignored, just arbitrarily putting 0
So what do we do with our sequential XOR data and what does our neural network look like? Well, we're simply going to feed in each $x$ value, one at a time, and expect one output value at a time. Instead of having 2 input units (excluding bias), we only need one now:
What's that loop and $t-1$ thing? Well it means we're going to take our output from the hidden layer at time $t_n$ and feed it back into our hidden layer as additional input at $t_{n+1}$ (the next time step), or we could rephrase that to say that our hidden layer input at $t_n$ includes the output of the hidden layer from $t_{n-1}$ (the previous time step).
You might be wondering how this is any more useful than an ordinary feedforward NN, and the answer is it's not really. For a problem like XOR, I can't think of a reason why you'd ever want to use an RNN over a feedforward. We're just using it here because it's familiar and I'm a reductionist. But after we get this down, we'll move onto something where RNNs really shine: sequence prediction (in our case, predicting the next character in a sequence).
An Elman network is in the class of "simple recurrent neural networks" (presumably because they really are simple, with no frills) and it's the type of RNN we're going to build to solve our temporal XOR problem. Here's what it looks like when applied to our XOR problem:
where $\theta_1$ refers to the weights between the input layer and the hidden layer (a 6x4 matrix) and $\theta_2$ refers to our weights in between the hidden layer and our output layer (a 5x1 matrix).
Okay so everything should make sense here except those 4 units labeled $C_1 - C_4$. Those are called context units in the parlance of simple RNNs. These context units are additional input units that feed the output from $t_{n-1}$'s hidden layer back into $t_n$'s hidden layer. They're treated exactly like a normal input unit, with adjustable weights. (At $t = 0$ there is no history to remember, so we have to initialize our network's context units with something, generally 0s.) Notice that we have the same number of context units as we do hidden units, that's by design and is simply the architecture of an Elman network.
So what we've done here by adding context units that feed the previous time step's state into the current time step is to turn that diagram with the t-1 loop into essentially an ordinary feedforward neural network. And since it's a feedforward neural network, we can train it exactly like we do with a feed forward XOR neural network: backpropagation (it often get's called backpropagation through time but it's just a different name for the same thing).
Let's walk through the flow of how this works in the feedforward direction for 2 time steps.
- 1. $t=0$. Start with $x_1 = 0$ (first element in our list), intialize $C_1 - C_4$ to input 0s.
- 2. Feed those inputs (from bottom to top, $x_1, c_4, c_3, c_2, c_1, B_1$): [0,0,0,0,0,1] into the hidden layer (of course we multiply by $\theta_1$).
- 3. The hidden layer outputs $a_4, a_3, a_2, a_1, B_2$. We'll then store these values (except bias, $B_2$) in another temporary vector for the next time step.
- 4. Then our output unit uses the hidden layer outputs to produce the final output, $g(x)$
- 5. $t=1$ (next time step). So still $x_1 = 0$ (second element in our list), intialize $C_1 - C_4$ to the stored outputs of $H_1 - H_4$ from the last time we ran the network forward.
- 6. Feed those inputs (from bottom to top, $x_1, c_4, c_3, c_2, c_1, B_1$): [0, $H_4^{t-1}, H_3^{t-1}, H_2^{t-1}, H_1^{t-1}$, 1] into the hidden layer.
- 7. The hidden layer outputs $a_4, a_3, a_2, a_1, B_2$. We'll then store these values in the temporary vector for the next time step.
- 8. Then our output unit uses the hidden layer outputs to produce the final output, $g(x)$
Important Notes: As mentioned before, we treat the context units just like ordinary input units, that means they have weighted connections between them and the hidden layer, but their input does not go through any activation function nor do we manipulate those values in anyway before we feed them back in the next time step.
So as mentioned before, when I originally posted this article I attemped to train it using ordinary backpropagation/gradient descent (with momentum), and it was not reliably working. So rather than posting some code that may or may not work for you, I'm going to use scipy's optimize functions to help out the training (and even then it has issues converging sometimes). RNNs are infamously difficult to train compared to NNs. (We'll graph the cost function to see why later.)
If you have taken Andrew Ng's machine learning course, then you should be familiar with Matlab's 'fminunc' (and `fmincg`) optimizer. We're going to use scipy's version, `fmin_tnc` (I'll explain how it works later). Let me just walk through the major points of the following implementation
- I have a cost function defined in a separate file which accepts an 'unrolled' theta vector, so in the cost function we have to assign theta1 and theta2 by slicing the long thetaVec. This cost function returns the cost ('J') and the gradient (an unrolled vector containing theta1_grad and theta2_grad).
- In the main code to follow, we give scipy's `fmin_tnc` our cost function and some initial weights and it quickly finds an optimal set of weights. `fmin_tnc` will return the optimal weights as an unrolled vector.
- After we define theta1 and theta2 from the optimal weights returned, we run the network forward on a different sequence of bits to see if it really learned how to XOR the sequence one step at a time.
import numpy as np from sigmoid import sigmoid from scipy import optimize import cost_xorRNN as cr #I defined the cost function in a separate file X = np.matrix('[0;0;1;1;0]') #training data Y = np.matrix('[0;0;1;0;1]') #expect y values for every pair in the sequence of X numIn, numHid, numOut = 1, 4, 1 #initial, randomized weights: theta1 = np.matrix( 0.5 * np.sqrt ( 6 / ( numIn + numHid) ) * np.random.randn( numIn + numHid + 1, numHid ) ) theta2 = np.matrix( 0.5 * np.sqrt ( 6 / ( numHid + numOut ) ) * np.random.randn( numHid + 1, numOut ) ) #we're going to concatenate or 'unroll' theta1 and theta2 into a 1-dimensional, long vector thetaVec = np.concatenate((theta1.flatten(), theta2.flatten()), axis=1) #give the optimizer our cost function and our unrolled weight vector opt = optimize.fmin_tnc(cr.costRNN, thetaVec, args=(X, Y), maxfun=5000) #retrieve the optimal weights optTheta = np.array(opt[0]) #reconstitute our original 2 weight vectors theta1 = optTheta[0:24].reshape(6, 4) theta2 = optTheta[24:].reshape(5, 1) def runForward(X, theta1, theta2): m = X.shape[0] #forward propagation hid_last = np.zeros((numHid, 1)) #context units results = np.zeros((m, 1)) #to save the output for j in range(m):#for every input element context = hid_last x_context = np.concatenate((X[j,:], context)) a1 = np.matrix(np.concatenate((x_context, np.matrix('[1]'))))#add bias, context units to input layer z2 = theta1.T * a1 a2 = np.concatenate((sigmoid(z2), np.matrix('[1]'))) #add bias, output hidden layer hid_last = a2[0:-1, 0] z3 = theta2.T * a2 a3 = sigmoid(z3) results[j] = a3 return results Xt = np.matrix('[1;0;0;1;1;0]') #test it out on some new data print(np.round(runForward(Xt, theta1, theta2).T))
[[ 0. 1. 0. 1. 0. 1.]]
Cool! It worked. Remember, ignore the first bit of the output, it can't XOR just 1 digit. The rest of the sequence [1 0 1 0 1] matches with XOR of each pair of bits along the sequence. You might have to run this code a couple of times before it works because even when using a fancy optimizer, this thing is hard to train. Also try changing how we initialize the weights. Unfortunately scipy's `fmin_tnc` doesn't seem to work as well as Matlab's `fmincg` (I originally wrote this in Matlab and ported to Python; `fmincg` trains it alot more reliably) and I'm not sure why (email me if you know).
Also note I imported "sigmoid" which is a separate file that only contains the sigmoid function and 'cost_xorRNN' which is the cost function.. I'll reproduce both below so you can run everything on your own.
#sigmoid.py import numpy as np def sigmoid(x): return np.matrix(1.0 / (1.0 + np.exp(-x)))
#cost_xorRNN.py import numpy as np from sigmoid import sigmoid def costRNN(thetaVec, *args): X = args[0] Y = args[1] numIn, numHid, numOut = 1, 4, 1 #reconstitute our theta1 and theta2 from the unrolled thetaVec theta1 = thetaVec[0:24].reshape(numIn + numHid + 1, numHid) theta2 = thetaVec[24:].reshape(numHid + 1, numOut) #initialize our gradient vectors theta1_grad = np.zeros((numIn + numHid + 1, numHid)) theta2_grad = np.zeros((numHid + 1, numOut)) #this will keep track of the output from the hidden layer hid_last = np.zeros((numHid, 1)) m = X.shape[0] J = 0 #cost output results = np.zeros((m, 1)) #to store the output of the network #this is to find the gradients: for j in range(m): #for every training element #y = X[j+1,:] #expected output, the next element in the sequence y = Y[j] context = hid_last x_context = np.concatenate((X[j], context)) #add the context units to our input layer a1 = np.matrix(np.concatenate((x_context, np.matrix(' #Backpropagation::: #calculate delta errors d3 = (a3 - y) d2 = np.multiply((theta2 * d3), np.multiply(a2, (1 - a2))) #accumulate gradients theta1_grad = theta1_grad + (d2[0:numHid, :] * a1.T).T theta2_grad = theta2_grad + (d3 * a2.T).T #calculate the network cost for n in range(m): a3n = results[n].T yn = Y[n].T J = J + (-yn.T * np.log(a3n) - (1-yn).T * np.log(1-a3n)) #cross-entropy cost function J = (1/m) * J grad = np.concatenate((theta1_grad.flatten(), theta2_grad.flatten()), axis=1) #unroll our gradients return J, grad
Everything should look fairly familiar if you've gone through my post on gradient descent and backpropagation, or already have a decent handle on building an XOR-capable feedforward network, but let me walk through the important/new parts of the code.
1. Every training iteration, we temporarily save the hidden layer outputs in `hid_last` and then at the start of the next training iteration, we initialize our context units to what we stored in `hid_last`.
context = hid_last
2. We have 4 context units, we add/concatenate them with our 1 input unit $X_1$ (and the bias of course), so our total input layer contains 6 units. This means our `theta1` is a 6x4 matrix (6 inputs projecting to 4 hidden units). Our hidden layer has 4 hidden units + 1 bias, so `theta2` is a 5x1 matrix. Other than these manipulations, the network is virtually identical to an ordinary feedforward network.
3. In case the 'unrolling' of matrices is unclear... When we unroll theta1 and theta2 into a single vector, `thetaVec`, we simply flatten those vectors into a 1 dimensional sequence and concatenate them. So `theta1` is a 6x4 matrix (24 total elements) which we flatten to a 24 element vector, and we likewise flatten `theta` (5x1 = 5 elements) to a 5 element vector, then concatenate them in order to produce a 29 element vector, `thetaVec`. Thus the first 24 elements of this vector are `theta1` and the last 5 arre `theta2`, so we can rebuild our original vectors by slicing up `thetaVec` and using `.reshape()` to give us matrices of the proper dimensions.
4. Let's discuss the scipy optimizer.
opt = optimize.fmin_tnc(cr.costRNN, thetaVec, args=(X, Y), maxfun=2000)
Scipy's optimizer `fmin_tnc` just wants the reference to our cost function (i.e we're passing the object itself, not calling the function, hence we don't do `cr.costRNN(...)`. But if we do that, how do we pass in the arguments it expects? Well `fmin_tnc` will assume that the first argument of our cost function is supposed to be the unrolled theta vector and thus the 2nd argument to `fmin_tnc` is `thetaVec` which we randomly initialize. The optmizer will iteratively modify and improve the thetaVec we originally pass in.
But wait, our cost function also expects `X` and `Y` parameters! We defined the second argument in our cost function to be `*args` which essentially allows us to accept a tuple of arguments there, and that's what `fmin_tnc` is going to do. We give `fmin_tnc` an `args=()` parameter which is a tuple of additional arguments to pass into our cost function. In our case, we just want to pass in our X and Y vectors.
The 4th parameter we give to `fmin_tnc` is `maxfun=5000` which refers to the maximum number of times the optimizer is allowed to call our cost function. It isn't necessary to set this, but I decided to set it to be higher than default to allow it to hopefully find a better optimum.
What does `fmin_tnc` return to us? It returns 3 items by default in an array. The first is the only thing we really care about, our optimal weights stored in an unrolled vector. Hence I retrieve it with this line: `optTheta = np.array(opt[0])` The other 2 return values are the number of times it called our cost function, and a return code string. You can see the documentation here:
import matplotlib.pyplot as plt import numpy as np import cost_xorRNN as cr %matplotlib inline thetaVec_f = np.linspace(-1.0, 10.0, 100) thetaVec_all = np.array([ -18.37619967, 124.9886293 , 0.69066491, -2.38403005, -2.3863598 , 34.07749817, -4.0086386 , -99.19477153, 5.28132817, 154.89424477, 17.32554579, -64.2570698 , 16.34582581, -20.79296525, -21.30831168, -15.76185224, 4.64747081, -65.70656672, 13.59414862, -53.70279419, 113.13004224, -33.56398667, 0.7257491 , -9.27982256, -18.29977063, 129.48720956, -37.57674034, -30.04523486, -90.35656788]) thetaVec_sample = [np.concatenate((thetaVec_all[0:2], np.array([theta_]), thetaVec_all[3:]), axis=0) for theta_ in np.nditer(thetaVec_f)] Xs = np.matrix('[0;0;1;1;0]') Ys = np.matrix('[0;0;1;0;1]') zs = np.array([cr.costRNN(np.array(theta_s).T, *(Xs, Ys))[0] for theta_s in thetaVec_sample]).flatten() ax = plt.subplot(111) #ax.set_yscale('log') #Try uncommenting this to see a different perspective ax.set_ylabel('Cost') ax.set_xlabel('thetaVec[2]') plt.scatter(thetaVec_f, zs) plt.show()
You don't really need to understand the code behind this graph. Essentially what I'm doing is taking the set of optimal weights returned from the optimization function, and then changing the third weight (arbitrarily chosen) in the unrolled weight vector to between the values of -1 to -10 and calculating the cost for each new set of weights (but of the 29 total weights, only one individual weight is changed). So we're only looking at the cost as a function of 1 individual weight. The graph looks different if you look at a different weight, but the point is, this is not a nice cost surface. If our initial weight lands somewhere left of 6, then we'll probably be able to gradient descent down to the minimum, but if it lands to the right, we'll probably get stuck in that local minimum. Now imagine all 29 weights in our network having a cost surface like this and you can see how it gets ugly. The take-away here is that RNNs have a lot of local optima that make it really difficult to train with the typical methods we use in feedforward networks. Ideally, we want a cost function that is smooth and convex.
What's "mini, mini, mini char-RNN" ? If you're familiar with Karpathy's charRNN () then you'll have an idea. We're going to build the simple RNN that predicts the next character in a short word like "hello" as presented on his blog. We're just going to modify the RNN we built above with a few key changes:
1) There is no longer a distinct Y vector of expected values. Our expected values are the next character in the sequence. So if we feed our RNN 'hell' we expect it to return 'ello' to complete the word we trained it on. So $y = X[j+1, :]$.
2) Since we have only have 4 characters in our "vocabularly", we'll represent them as binary vectors of length 4. I.e. our binary encoding (arbitrarily assigned) is:
h = [0 0 1 0], e = [0 1 0 0], l = [0 0 0 1], o = [1 0 0 0]
3) We're going to expand the hidden layer from 4 to 10. Seems to make training faster.
4) Thus the input layer will now contain: 4 inputs + 10 context units + 1 bias = 11 total. And the output will contain 4 units since each letter is a vector of length 4.
As before, I'll reproduce the code below (two separate files: RNNoptim.py and cost_charRNN.py; but you could put it all in one file if you want) and explain the important points.
#cost_charRNN.py OUR COST FUNCTION FILE import numpy as np from sigmoid import sigmoid def costRNN(thetaVec, *args): X = np.matrix(np.array(args)) numIn, numHid, numOut = 4, 10, 4 numInTot = numIn + numHid + 1 theta1 = thetaVec[0:(numInTot * numHid)].reshape(numInTot, numHid) theta2 = thetaVec[(numInTot * numHid):].reshape(numHid+1, numOut) theta1_grad = np.zeros((numInTot, numHid)) theta2_grad = np.zeros((numHid + 1, numOut)) hid_last = np.zeros((numHid, 1)) m = X.shape[0] J = 0 results = np.zeros((m, numOut)) for j in range(m-1): #for every training element #y = X[j+1,:] #expected output, the next element in the sequence y = X[j+1, :] context = hid_last x_context = np.concatenate((X[j, :], context.T), axis=1) a1 = np.matrix(np.concatenate((x_context, np.matrix('[1]')), axis=1)).reshape(numOut,) #Backpropagation::: #calculate delta errors d3 = (a3.T - y) d2 = np.multiply((theta2 * d3.T), np.multiply(a2, (1 - a2))) #accumulate gradients theta1_grad = theta1_grad + (d2[0:numHid, :] * a1.T).T theta2_grad = theta2_grad + (a2 * d3) for n in range(m-1): a3n = results[n, :].T.reshape(numOut, 1) yn = X[n+1, :].T J = J + (-yn.T * np.log(a3n) - (1-yn).T * np.log(1-a3n)) J = (1/m) * J grad = np.concatenate((theta1_grad.flatten(), theta2_grad.flatten()), axis=1) return J, grad
That's our cost function file. It accepts an unrolled theta vector and the input data and returns the cost and the gradients. It is virtually the same as before besides the changed layer architecture and the fact that our $y$ (expected output) is just $X[j+1]$
import numpy as np from sigmoid import sigmoid from scipy import optimize #Vocabulary h,e,l,o #Encoding: h = [0,0,1,0], e = [0,1,0,0], l = [0,0,0,1], o = [1,0,0,0] X = np.matrix('0,0,1,0; 0,1,0,0; 0,0,0,1; 0,0,0,1; 1,0,0,0') numIn, numHid, numOut = 4, 10, 4 numInTot = numIn + numHid + 1 theta1 = np.matrix( 1 * np.sqrt ( 6 / ( numIn + numHid) ) * np.random.randn( numIn + numHid + 1, numHid ) ) theta2 = np.matrix( 1 * np.sqrt ( 6 / ( numHid + numOut ) ) * np.random.randn( numHid + 1, numOut ) ) thetaVec = np.concatenate((theta1.flatten(), theta2.flatten()), axis=1) opt = optimize.fmin_tnc(costRNN, thetaVec, args=(X), maxfun=5000) optTheta = np.array(opt[0]) theta1 = optTheta[0:(numInTot * numHid)].reshape(numInTot, numHid) theta2 = optTheta[(numInTot * numHid):].reshape(numHid+1, numOut) def runForward(X, theta1, theta2): m = X.shape[0] #forward propagation hid_last = np.zeros((numHid, 1)) #context units results = np.zeros((m, numOut)) for j in range(m):#for every input element context = hid_last x_context = np.concatenate((X[j,:], context.T), axis=1) a1 = np.matrix(np.concatenate((x_context, np.matrix('[1]')), axis=1)).T#add bias, context units to input layer z2 = theta1.T * a1 a2 = np.concatenate((sigmoid(z2), np.matrix('[1]'))) #add bias, output hidden layer hid_last = a2[0:-1, 0] #ignore bias z3 = theta2.T * a2 a3 = sigmoid(z3) results[j, :] = a3.reshape(numOut,) return results #This spells 'hell' and we expect it to return 'ello' as it predicts the next character for each input Xt = np.matrix('0,0,1,0; 0,1,0,0; 0,0,0,1; 0,0,0,1') print(np.round(runForward(Xt, theta1, theta2)))
[[ 0. 1. 0. 0.] [ 0. 0. 0. 1.] [ 0. 0. 0. 1.] [ 1. 0. 0. 0.]]
That's cool. Do you remember our encoding? h = [0,0,1,0], e = [0,1,0,0], l = [0,0,0,1], o = [1,0,0,0]
So we gave it 'hell' and it returned 'ello' ! That means, when it received the first character, [0,0,1,0] ("h"), it returned [ 0. 1. 0. 0.] ("e"). It knew what letter is supposed to come next! Neat.
Again, this is virtually identical to the network we built for XOR just that our input layer accepts binary 4 element vectors and returns 4 element vectors representing characters. We also increased the hidden layer size to 10.
If you want to take this farther, try increasing the number of characters you can encode by increasing the input and output layers. I tried (not shown here) up to 11 element vectors, using the letters "e t a o i n s h r d" (which are the top 10 highest frequency letters in english) and the space character. With just those 11 characters you can encode alot of words, even sentences. While I got it to work on individual words, training became increasingly difficult for longer input sequences (I tried expanding the hidden layer). My guess is that it just doesn't have enough 'memory' to remember more than a couple of characters back, therefore it won't be able to learn the character sequences of a long word or sentence. Hence why anyone doing 'real' character prediction (like the Karpathy charRNN) uses a much more sophisticated RNN, an LSTM network.
I also attempted to build a character generator from this code, so that we train it on a word or small sentence and then tell it to generate some sequence of characters based on a seed/starting character or word. That didn't work well enough to present here, but if I get it working, I'll make a new post.
Do note that I wrote this code for readability, thus it doesn't follow best coding practices like DRY.
Also, like I mentioned before, I had to resort to using a scipy optimizer to help train the network rather than use my own implementation of gradient descent like I did with the normal feedforward XOR network in my previous post. I suppose I was experiencing the exploding/vanishing gradient problem and I just didn't have a sophisticated enough gradient descent implementation. If you have any expertise to lend here then please email me (outlacedev@gmail.com). And please email me if you spot any errors.
- (Elman Network Tutorial)
- | http://outlace.com/rnn.html | CC-MAIN-2018-17 | refinedweb | 4,549 | 63.7 |
.
Basic NAT
Basic NAT (as defined in RFC 2663) performs just the IP address translation (one inside host to one IP address in the NAT pool). The moment the inside host starts a session through the NAT, it becomes fully exposed to the outside world.
When using static basic NAT (statically defined inside-to-outside IP address mapping), the inside host is exposed all the time.
Summary: Basic NAT provides no security.
Stateless NAT
Some IPv6-to-IPv4 (or 4-to-6) NAT algorithms are stateless – IPv6 address is calculated from the IPv4 using an algorithm (or device configuration). From the security standpoint, stateless NAT is no different from static basic NAT (read: useless).
Network Address Port Translation (NAPT)
NAPT (also known as PAT) keeps a list of established sessions and uses that list to perform address and port translation of inbound and outbound packets. If an unknown packet arrives from the inside interface, a new entry is created, if an unknown packet arrives from the outside interface, it’s dropped.
There is no “standard” NAPT behavior. RFC 4787 describes various NAPT parameters; the ones most important to the security-related discussion are the Address and Port Mapping behaviors.
With the Endpoint independent mapping, the NAT translation table contains just the inside IP address and TCP/UDP port (default behavior on most low-end devices). As soon as the inside host opens a session through NAT, anyone can send TCP or UDP packets to the source port used by that host.
Cisco IOS usually implements Address and Port-Dependent Mapping – the NAT translation table contains full 5-tuple (source/destination address/port and the L4 protocol).
NAPT device using address and port-dependent mapping seems to behave like a stateful firewall, but does not inspect the contents of the TCP/UDP session and does not check the validity of TCP headers. Its behavior is almost identical to reflexive ACL feature.
Summary: NAPT does provide some packet filtering functionality. Static NAPT is identical to a simple packet filter (whatever is translated by the static NAPT rules is permitted).
Other considerations
While we definitely need firewalls and/or packet filters at the network edge, most of today’s attacks work on application-layer, using SQL injection or “Advanced Persistent Threats” like sending an Excel or PDF file with a 0-day exploit to a click-happy user. For more details, please listen to the Packet Pushers Podcast Show 56 and Show 61.
Finally, I will not discuss the absurdity of the security-by-obscurity argument (Let's secure the network by hiding internal addresses with NAT). Please don’t even mention it in the comments.
Related posts by categories
Please read our Blog Commenting Policy before writing a comment.
12 comments:
Constructive courteous comments are most welcome. Anonymous trolling will be removed with prejudice.
sorry for Slovenian joke, but I had to :) :) :)
cheers, Jan
Once in a while I read similar subjects claiming that NAT is not security feature. The most useful & widely use NAT is NAPT. And You also came to conclusion that NAPT is basic FW. (at least, that's what I understood :))
For Enterprise networks, without a doubt FW is needed, but for home network with few computers, where budget & knowledge is limited, NAPT was quite useful FW.
I recently established dual stack @home and realized that suddenly I need FW too.
NAT is a "security" feature in another way: for SMEs that don't have PI address space, it prevents your existing ISPs from holding you hostage at renewal time. Renumbering even a 500-device network is expensive, and using private space and NAT makes that cost orders of magnitude smaller. So in IPv4 land, NAT "secures" choice of transit providers for those without PI address space.
I managed a few painful renumberings during the late 1990s, and ISPs (especially incumbent telcos) used to use that renumbering cost as a lever during negotations (which made you want to stop doing business with them even more!).
As a security architect I find it interesting that the same type of discussion does not occur with split DNS. Both NAT and Split DNS are ways of breaking / avoiding the need for a single consistent address space or namespace across interconnected networks. This is often what needs to be done at the border between different security domains. I see NAT and Split DNS as design components that are used within Security Gateway environments to mean the requirements for interconnectivity; that does not make them security controls by themselves.
"Finally, I will not discuss the absurdity of the security-by-obscurity argument (Let's secure the network by hiding internal addresses with NAT). Please don’t even mention it in the comments."
I'm not very bright so please excuse me. Please please please explain why.
You are bright, or you wouldn't be asking questions about networking security ;)
It's a fair question, blog post coming in early January.
Did you ever post this? If so, can you add a link or a date. Thanks
NAT has been invented to resolve lack of real IP addresses, this feature of address translation is used now as a security option.
Translation of: One to one, pool to one, pool to pool is used for both real or private addresses.
Listen man its simple. We won't adopt IPv6 until we have the same control and hiding of topology of NAT IPv4. No amount of fussy IETF talk will make that change. Give us NAT or get to work on IPv7. Don't like it? well thats too bad... Reality is what it is. Like prostitution and Cannabis, somethings just will never go away no matter how much grumbling the powers that be do. NAT we want it. I don't want my friggin printer, phone, router, switch and 3 terminals to have public addresses. I dont want people to see if I have 3 or 3000 computers. I want to control each machine, each port and switch ISPs ten times a day with no firewall editing.
have you heard of ULAs and temporary addressing in IPv6? Your LAN devices don't have to be globally routable or reachable.
Olduser hit it on the nose here about the need for NAT.
Define security and tell me why NAPT is not a security feature? NAT makes you vulnerable to everything? Who taught you that? You know how many computers are saved from the blaster worm because they are behind a cheap router?
The post should read. "NAT adds to your security by.. points 1,2,3. NAT is not a complete security solution because of points 1,2,3."
Sounds like a consultant wrote this article.
IMHO nat is only of use if one assumes the network layer is the only attack vector. I've not seen it been of much use against somewhat more evolved attacks.
Despite what's documented here i've not been convinced of it's usability.
IPv6 is a dragon with many heads, for sure. | https://blog.ipspace.net/2011/12/is-nat-security-feature.html?showComment=1324368422631 | CC-MAIN-2019-39 | refinedweb | 1,174 | 64.3 |
Opened 11 years ago
Closed 11 years ago
Last modified 9 years ago
#1736 closed defect (fixed)
getting it to work under Python 2.5
Description
First my plugin did not work at all (it did not show uo in WebAdmin -> Plugins) Then I figured that b/c I am using Python 2.5 I need to change the import statement si replaced in pagetoodt.py
#import cElementTree as ElementTree
with
from xml.etree.ElementTree import Element, ElementTree # python 2.5
note that I am a total Python newbie... at least the plugin shows up in the WebAdmin -> Plugins menu now and it is activated... .but now I have the same problem as described above.
But now I get this error:04, in send_converted content, selector) File "/var/lib/python-support/python2.5/trac/mimeview/api.py", line 384, in convert_content output = converter.convert_content(req, mimetype, content, ck) File "build/bdist.linux-i686/egg/pagetoodt/pagetoodt.py", line 30, in convert_content archive = zipfile.ZipFile(self.template_filename, 'r') File "zipfile.py", line 339, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 2] No such file or directory: u'/var/www/trac/pwc/attachments/wiki/PageToOdtStyles/empty.odt'
Attachments (0)
Change History (5)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
This didn't actually appear to be checked in?
comment:3 Changed 11 years ago by
comment:4 Changed 11 years ago by
If this makes the plugin stable and work on 0.11, let's add the 0.11 tag to the plugin page to advertise 0.11 support.
comment:5 Changed 11 years ago by
Wow, wrong ticket, sorry I was really off in another world on that one. This ticket is adding Python 2.5 support.
here is the fix (tested and verified) to get the plugin working with Python 2.5
replace in pagetoodt.py
with
Don't forget to
now it should work | https://trac-hacks.org/ticket/1736 | CC-MAIN-2018-17 | refinedweb | 325 | 60.11 |
Overview 2^2 + 2^1 + 2^-2. However, even simple numbers like 0.1 cannot be represented exactly. This becomes obvious when converting to BigDecimal as it will preserve the value actually represented without rounding.
new BigDecimal(0.1)= 0.1000000000000000055511151231257827021181583404541015625 BigDecimal.valueOf(0.1)= 0.1
Using the constructor obtains the value actually represented, using valueOf gives the same rounded value you would see if you printed the double
When a number is parsed, it is rounded to the closest represented value. This means that there is a number slightly less than 0.5 which will be rounded to 0.5 because it is the closest represented value.
The following does a brute force search for the smallest value which rounded becomes 1.0
public static final BigDecimal TWO = BigDecimal.valueOf(2); public static void main(String... args) { int digits = 80; BigDecimal low = BigDecimal.ZERO; BigDecimal high = BigDecimal.ONE; for (int i = 0; i <= 10 * digits / 3; i++) { BigDecimal mid = low.add(high).divide(TWO, digits, RoundingMode.HALF_UP); if (mid.equals(low) || mid.equals(high)) break; if (Math.round(Double.parseDouble(mid.toString())) > 0) high = mid; else low = mid; } System.out.println("Math.round(" + low + ") is " + Math.round(Double.parseDouble(low.toString()))); System.out.println("Math.round(" + high + ") is " + Math.round(Double.parseDouble(high.toString()))); }
On Java 7 you get the following result.
Math.round(0.49999999999999997224442438437108648940920829772949218749999999999999999999999999) is 0 Math.round(0.49999999999999997224442438437108648940920829772949218750000000000000000000000000) is 1
What is surprising is that in Java 6 you get the follow.
Math.round(0.49999999999999991673327315311325946822762489318847656250000000000000000000000000) is 0 Math.round(0.49999999999999991673327315311325946822762489318847656250000000000000000000000001) is 1
Where do these numbers come from?
The Java 7 value is the mid point between 0.5 and the previous represent value. Above this mid point, the value is rounded to 0.5 when parsed.
The Java 6 value is the mid point between value value before 0.5 and the value before that.
Value 0.5 is 0.5 The previous value is 0.499999999999999944488848768742172978818416595458984375 ... and the previous is 0.49999999999999988897769753748434595763683319091796875 The mid point between 0.5 and 0.499999999999999944488848768742172978818416595458984375 is 0.4999999999999999722444243843710864894092082977294921875 ... and the mid point between 0.499999999999999944488848768742172978818416595458984375 and 0.49999999999999988897769753748434595763683319091796875 is 0.4999999999999999167332731531132594682276248931884765625
Why is the Java 6 value smaller
In the Java 6 Javadoc Math.round(double) is defined as
(long)Math.floor(a + 0.5d)
The problem with this definition is that 0.49999999999999994 + 0.5 has a rounding error which results in the value 1.0.
In the Java 7 Javadoc Math.round(double) it simply states:
Returns the closest long to the argument, with ties rounding up.
So how does Java 7 fix this?
The source code for Java 7’s Math.round looks like
public static long round(double a) { if (a != 0x1.fffffffffffffp-2) // greatest double value less than 0.5 return (long)floor(a + 0.5d); else return 0; }
The result for the largest value less than 0.5 is hard coded.
So what is 0x1.fffffffffffffp-2?
It is a hexi-decimal presentation of the floating point value. It is rarely used, but it is precise as all values can be represented without error (to a limit of 53 bits).
Related Links
Bug ID: 6430675 Math.round has surprising behavior for 0x1.fffffffffffffp-2
Why does Math.round(0.49999999999999994) return 1
Reference: Why Math.round(0.499999999999999917) rounds to 1 on Java 6 from our JCG partner Peter Lawrey at the Vanilla Java blog. | http://www.javacodegeeks.com/2012/04/why-mathround0499999999999999917-rounds.html | CC-MAIN-2014-52 | refinedweb | 567 | 63.86 |
Is there any way to automate this in my spreadsheet? [closed]
Thank you for all your guidance! Learning new things everyday.
Thank you for all your guidance! Learning new things everyday.
A slightly less complex solution might be to select the range you want to work with and use a standard filter - Data | More Filters | Standard Filter
You might want to use a named range if there is a lot of rows to work with. You mention ten million items; of course you won't be able to add that many rows to a Calc sheet as the maximum number of rows is 1048576.
If this answer helped you, please accept it by clicking the check mark ✔ to the left and, karma permitting, upvote it. That will help other people with the same question.
Here is a sample of what I would do, just put "x" next to number and bush button. For Button to work, set macro’s at Medium….In Tools/Option, LibreOffice/Security..Macro Security.. Medium…
C:\fakepath\Just push the button.ods
I'm not the best person to answer this since I don't have much experience with Office stuff. So keep in mind: there might be better ways, e.g. a macro; hopefully someone will write answers on that.
But I've been lately twiddling with scripting LO Calc, and I figured I could share some of what I learned, and answer your question.
LibreOffice supports scirpting through UNO API. There're various language backends to it, here I'm using Python. You may need to install some python package for
import uno line to work (e.g. on Fedora it's libreoffice-pyuno package).
Here's a code that does what you asked for:
#!python import uno I_COL_TO_READ_FROM = 0 # the column with numbers I_COL_MARKS = 1 # the column with "X"es I_COL_TO_WRITE_TO = 2 # the empty column to write new numbers to MARK = 'X' # run libreoffice as: # soffice --calc --accept="socket,host=localhost,port=2002;urp;StarOffice.ServiceManager" def connectToLO(): # get the uno component context from the PyUNO runtime localContext = uno.getComponentContext() desktop = smgr.createInstanceWithContext( "com.sun.star.frame.Desktop",ctx) return desktop.CurrentComponent # the "unused rectangle" by default is 1048576×1024, which probably isn't something # you might be interested in def getUsedRectangle(sheet): cursor = sheet.createCursor() cursor.gotoEndOfUsedArea(False) cursor.gotoStartOfUsedArea(True) return cursor # applies f to every row in the range def foldRows(rectangle, f, accum): for row in rectangle.Rows: accum = f(row, accum) def fillNewCol(row, col_write_to): (dst_col, row_index) = col_write_to if row.getCellByPosition(I_COL_MARKS, 0).String == MARK: dst_cell = dst_col.getCellByPosition(0, row_index) # see comments under the post: a cell has different state when a number is # assigned compared to a string. So here I test for whether we're dealing # with strings or numbers. But you may want to remove overhead of this test # if you know what you deal with right away src_cell = row.getCellByPosition(I_COL_TO_READ_FROM, 0) if src_cell.String.isdigit(): dst_cell.Value = src_cell.Value else: dst_cell.String = src_cell.String return (dst_col, row_index+1) return (dst_col, row_index) focused_sheet = connectToLO().CurrentController.ActiveSheet used_range = getUsedRectangle(focused_sheet) foldRows(used_range, fillNewCol, (focused_sheet.Columns.getByIndex(I_COL_TO_WRITE_TO), 0))
The main part is implemented at
fillNewCol: it checks rows for a mark, and writes to the new column as needed.
You may want to tweak column indices in
I_COL_TO_READ_FROM,
I_COL_MARKS, and
I_COL_TO_WRITE_TO variables to accord to your spreadsheet. They're "hardcoded" for simplicity, though ideally maybe one could derive them from column names or whatever. And similar with
MARK field.
Otherwise, the code is hopefully self-descriptive, but feel free to ask.
Here's how you can use it:
soffice --calc --accept="socket,host=localhost,port=2002;urp;StarOffice.ServiceManager"
calc:gen-new-col.pyfile ...
@Opaque oh, yeah, I mean, I am not sure I fully understood you (e.g. I don't see how you figured that from the screenshot), but I just remember that for whatever reason cells has two ways to assign to them: one way is
cell.String = some_text and the other one is
cell.Value = some_number. I'll probably replace
String with
Value in the code in a sec, and add a comment about it.
@Opaque okay, to make sure script does not break if OP has strings instead of numbers, I figured I can just do
cell.String.isdigit(), and assign either to
Value or to
String based on that. Done. Huh, can't delete my prev. comment :/ I just figured btw what you meant by "left alignment", I see now how you figured that from the shot :)
Asked: 2019-07-18 21:16:11 +0100
Seen: 156 times
Last updated: Jul 23 ]
Please don't close your questions that way. You made two mistakes:
These things disallow others with similar questions to learn from the question on this Q&A site, which goal is to collect questions and their resolutions for everyone. So in the end, this turned out (inadvertently, I'm sure) to be very selfish, and not giving back to the community that tried to help you.
Please use your question's update history to recover the original question, and mark the correct solution, for everyone's benefit. Thanks! | https://ask.libreoffice.org/en/question/201546/is-there-any-way-to-automate-this-in-my-spreadsheet/?answer=201592 | CC-MAIN-2020-05 | refinedweb | 862 | 57.16 |
project directions:
1. Create a class called Student that has name, grade, house as instance variables
2. Create a constructor that take these (3) items as parameters and sets them to your private instance/global variables
3. Next, create a test class StudentTest
4. Inside of StudentTest create an ArrayList called myList
5. Create a loop that runs 5 times. It asks the user for the (3) pieces of information (name, grade, house). Take that info and create and Student object that will be added to the ArrayList
Test class
Code :
public class StudentTest { /** * main * @param args */ public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); ArrayList<Student> myList = new ArrayList<Student> (); for (int i = 0; i < 5; i++) { System.out.println("What is the name of the student?"); String name = keyboard.nextLine(); System.out.println("What is his/her house?"); String house = keyboard.nextLine(); System.out.println("What is his/her grade? Numbers only, please."); int grade = keyboard.nextInt(); myList.add(i, new Student(name, grade, house)); } } }
Student class
Code :
public class Student { //instance variables String name; int grade; String house; /** * student constructor * @param name * @param grade * @param house */ public Student(String name, int grade, String house) { this.name = name; this.grade = grade; this.house = house; } }
Console:
What is the name of the student?
Bob
What is his/her house?
Random
What is his/her grade? Numbers only, please.
9
What is the name of the student?
What is his/her house?
I couldn't type in the name of the second student because the next line "What is his/her house?" came up with it. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/15974-basic-code-problem-printingthethread.html | CC-MAIN-2014-49 | refinedweb | 270 | 79.46 |
Post-VS 2008-Technology: LINQ to XSD and LINQ to Stored XML
- |
-
-
-
-
-
-
-
Read later
My Reading List.
The first two parts of the presentation cover LINQ to XML basics and the current advanced XML features:
-
The slides can be downloaded from the conference's web site.
In the third part of the presentation Shyam presents future extensions of LINQ to XML: LINQ to XSD and LINQ to Stored XML.
LINQ to XSD has first been announced by Microsoft's XML team in November 2006 (including a preview alpha 0.1 for the LINQ May 2006 CTP)::Using LINQ to XSD, the query is written in a much clearer and type-safe way:
(from item in purchaseOrder.Item
select item.Price * item.Quantity
).Sum();
In June 2007 another preview alpha 0.2 for Visual Studio 2008 beta 1 was published. No LINQ to XSD preview appeared for VS 2008 beta 2, probably because Dr. Ralf Lämmel, who spearheaded the technology, left Microsoft. Roger Jennings, principal consultant of OakLeaf Systems, has the details of LINQ to XSD's history. He pursued the matter and requested an update of the technology from Microsoft:.
LINQ to Stored XML (XML in the database) offers ways of issuing queries against XML datatype columns within an SQL Server 2005. The goal is to "provide a strongly-typed LINQ experience over data in XML datatype columns" by providing "mapping from XML schema to classes" and "query translation from LINQ expressions to server XQuery expressions" (sample taken from the AdventureWorks database with 'Resume' being an XML datatype column):
Query:
var q = from o in _data.JobCandidates
where o.Resume.Skills.Contains("production")
select o.Resume.Name.Name_Last;
Output:
SELECT [Extent1].[Resume].query(
N'declare namespace r="";
/*[1]/r:Name/r:Name.Last'
).value(N'.', N'nvarchar(max)') AS [C1]
FROM [HumanResources].[JobCandidate] AS [Extent1]
WHERE cast(1 as bit) = ([Extent1].[Resume].query(
N'declare namespace r="";
contains(/*[1]/r:Skills, "production")'
).value(N'.', N'bit'))
The presentation is the first sign that activity is continued on LINQ to XSD and that LINQ to Stored XML is underway. But unfortunately no release dates - not even for preview bits - are mentioned.
It would be difficult to summarize all MS plans for post VS 2008 technology under a single headline. This is but one topic of their plans, which goes well in line with the evolution of C# and VB (LINQ). X/O-Mapping is an important topic for MS (and others), which in my opinion goes beyond simple XML parsing.
-Hartmut | http://www.infoq.com/news/2007/12/post-vs2008-linq-to-xml | CC-MAIN-2016-18 | refinedweb | 420 | 56.76 |
Welcome to ELI5 Full Stack: Breakthrough with Django & EmberJS. This is an introduction to full stack development for everyone, especially beginners. We’ll go step-by-step through the development of a basic web application. A library of sorts. Together we’ll build a back-end to store data and a RESTful API to manage it. Then we’ll construct a front-end user interface for users to view, add, edit, and delete the data.
This isn’t meant to be a deep dive into either Django or EmberJS. I don’t want us to get bogged down with too much complexity. Rather its purpose is to show the critical elements of basic full stack development. How to stitch together the back end and front end into a working application. I’ll go into detail about the software, frameworks, and tools used in the process. Every terminal command run and line of code in the final application is present in this tutorial.
I’ve kept each section short and to the point so that no one’s head explodes. There are also indicators to mark points for reflection so you can go back and look at what we’ve done and save state. If you don’t know what something means click through to the linked articles which will explain in detail. Remember, this is an introduction for everyone, including beginners.
If you’re a beginner, I suggest that you write every line of code and run each terminal command yourself. Don’t copy and paste. It won’t sink in. Take your time and think about what you’re doing. This is a critical trait of an effective and self-sufficient programmer. You will develop this over time if you write your own code and think about what you’re writing. If you mess up (look at my commit history, I definitely did) don’t sweat it. Go back. This isn’t a race. You’ll be fine if you take your time.
Note: I developed this tutorial on a MacBook Pro running macOS High Sierra (10.3.6). I’m using iTerm2 for the terminal and Sublime Text 3 as my text editor. All testing uses the Chrome browser and its built-in tools. The actual code shouldn’t have any differences. You can download the final project files from the Github repository.
Table of Contents
Section 1: The Whats, Hows, and Whys
1.1 Why I Wrote This Tutorial
1.2 Back End, Front End. What’s the Difference?
1.3 The Concept: A Basic Library Application
1.4 Project Directory Structure
1.5 Project Directory Setup
1.6 Conclusion
Section 2: Diving into the Back End
2.1 Install Required Software
2.2 Start a Django Project: server
2.3 Start a Django App: books
2.4 Describe the Book model
2.5 Register the Book model with the admin
2.6 Conclusion
Section 3: Build a Server, then REST
3.1 Django REST Framework
3.2 Create the books API folder
3.3 Create a book serializer
3.4 Create a view to GET and POST books data
3.5 Create URLs to access books data
3.6 Conclusion
Section 4: Laying Down Front-end Foundations
4.1 Install Required Software
4.2 Start an Ember Project: client
4.3 Displaying books data
4.4 The books route
4.5 Displaying real data in the books route
4.6 Conclusion
Section 5: Correct data formats, deal with individual records
5.1 Install the Django REST Framework JSON API
5.2 Working with individual book records
5.3 The book route
5.4 Conclusion
Section 6: Functional Front end
6.1 Adding a new book to the database
6.2 Deleting a book from the database
6.3 Editing a book in the database
6.4 Conclusion
Section 7: Moving On
7.1 What’s Next?
7.2 Further Reading
Section 1: The Whats, Hows, and Whys
1.1 Why I Wrote This Tutorial
Imagine that you’ve recently joined a new company. They’ve been in business for some time, and their major products are already out in production. Think of the application you see today as cake. The process of picking the ingredients, recipe, and putting it all together… well that’s long over. You’ll be working on pieces of that finished cake.
The developers at the start of a project have laid down certain configurations. These configurations change, and conventions develop over time as developers come and go. By the time you arrive it may be difficult to comprehend how we’ve gotten to where we are. This was my situation. I felt that dipping into the whole stack would be the only way for me to feel comfortable. It would help me understand where we came from and how to move forward with the software we’re building.
This tutorial is the culmination of my experiences as a junior software developer. I’ve been learning a lot at my time with Closing Folders. It represents a shift in my thinking as I take steps towards more complex full stack development. It also serves as an entry point for developers at the stage where they’re wondering how the cake gets baked. I hope this tutorial is as useful for you as it was instructive for me to create.
Note: In a typical workflow a developer would start on the back end to set up the database, and create a REST API. Then, they would work on the front end and build the user interface. Things aren’t so simple though. We make mistakes and often have to go back and forth to resolve them. The jumping back and forth will help build more connections in your mind and help you better understand how all the pieces fit together. Embrace your mistakes. You’ll be making a lot of them!
Note2: Attention Senior Devs, Junior Devs, and Designers! Closing Folders is hiring now so feel free to get in touch.
1.2 Back End, Front End. What’s the Difference?
Back-end development. Front-end development. Full-stack development. So much development... What’s the difference anyway?
Think of front-end development as the part of the application that you see and interact with. For example, the user interface is part of the front end. That’s where the user views data and interacts with it.
Back-end development is everything that stores and serves data. Think about what happens when you login to Medium. None of your user profile data or stories exists on the front end. It’s stored and served from the back end.
The front end and back end work together to form the application. The back end has the instructions for how to store and serve the data. The front end has the instructions for how to capture the data and how to display it.
Find out more about the differences in this article.
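To make the split concrete for our project: the back end’s whole job will be answering requests with book records serialized as JSON, roughly like the sketch below. The exact shape comes from the JSON API format we adopt in Section 5; the values here are placeholders:

{
  "data": [{
    "type": "books",
    "id": "1",
    "attributes": {
      "title": "Example Title",
      "author": "Example Author",
      "description": "A placeholder description."
    }
  }]
}

The front end’s whole job is turning structures like this into pages and forms, and sending edited versions back.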
1.3 The Concept: A Basic Library Application
Before we start building anything, let’s outline our plans and what we’re trying to achieve. We want to build a web application called my_library that runs in the browser. The application is exactly what it sounds like, a digital library of books. We won’t be dealing with actual book content though. The books will only have title, author, and description information. Keeping it simple.
The application will have the following functionality:
- View all books as a single list on the home page, ordered by title
- View each book in detail, displaying its title, author, and description
- Add a new book with the fields title, author, and description
- Edit an existing book’s title, author, and description fields
- Delete an existing book
1.3.1 my_library’s final design and functionality
Take a look at the screenshots below. They depict the application’s final look and functionality:
1.4 Project Directory Structure
There are innumerable ways to structure a given project. I’ll keep everything under one `my_library` folder for simplicity’s sake like so:
my_library
- server
  - server
  - books
    - api
  - db.sqlite3
  - manage.py
- client
  - app
    - adapters
    - controllers
    - models
    - routes
    - templates
    - styles
    - router.js
These aren’t all the folders and files that the project will contain, though they’re the main ones. You’ll notice quite a few autogenerated files that you can ignore. Though it would be useful for you to read documentation that explains their purpose.
The `my_library` directory contains folders for the back end and front end sub-projects. `server` refers to the Django back end, and `client` refers to the EmberJS front end.
1.4.1 Back End
- `server` contains another folder called `server`. Inside are the top level configurations and settings for the back end.
- The `books` folder will contain all the models, views, and other configuration for the book data.
- Inside the `books/api` folder we’ll create the serializers, URLs, and views that make up our REST API.
1.4.2 Front End
- `client` is our EmberJS front end. It contains routes, templates, models, controllers, adapters, and styles.
- `router.js` describes all the application routes.
Let’s go ahead and set up the main project directory `my_library`.
1.5 Project Directory Setup
1.5.1 Create the main project folder: my_library
Now that we know what we’re going to build, let’s take a few minutes to set up the main project directory
my_library:
# cd into desktop and create the main project folder cd ~/desktop && mkdir my_library
Create a basic
README.md file inside the folder with the following content:
# my_library This is a basic full stack library application built. Check out the tutorial: 'ELI5 Full Stack: Breakthrough with Django & EmberJS'.
Now let’s commit this project to a new Git repository as the project start point.
1.5.2 Install Git for version control
Git is version control software. We’ll use it to keep track of our project and save our state step-by-step so we can always go back if we make breaking errors. I’m sure most of you’re already familiar with it.
For the uninitiated, you can find out more here. If you don’t have Git installed, you can download it here.
Check that it installed with:
$ git --version
1.5.3 Create a new project repository
I have an account with Github. It’s popular and works well so that’s what I’ll be using. Feel free to use other solutions if they suit you better.
Create a new repository and get the remote URL which should look like this:
git@github.com:username/repo_name.git
1.5.4 Commit and push your changes to the project repository
Inside the
my_library folder initialize the empty repository:
git init
Now add the remote URL so Git knows where we’re pushing our files to:
git remote add origin git@github.com:username/repo_name.git # check that it's been set, should display the origin git remote -v
Time to push our code to Github:
# check the status of our repo # should show the new file README.md, no previous commits git status # add all changes git add . # create a commit with a message git commit -m "[BASE] Project Start" # push changes to the repo's master branch git push origin master
The remote Git repository updates with the changes we’ve pushed:
Now that we have a main project directory and a repository we can finally start working on our back end!
NOTE: From this point onward I won’t be going into any more detail about commits. The review and commit indicator below will let you know when it’s a good time to do so:
1.6 Conclusion
We’ve come to the end of Section 1 with the following steps completed:
- Got a feel for what we’re building and how it will work
- Created the `my_library` main project directory
- Installed `git` and created a remote project repository on Github
- Initialized the local repository and set the remote URL
- Created a `README.md` file, then committed and pushed all changes
Section 2: Diving into the Back End
This section is all about back-end development with Django. We’ll begin with the installation of the required software.
Next, we’ll move on to the creation of a new Django project called `server` and create a new app called `books`. In the `books` app we describe the `Book` model and register the model with the admin.

Once we create a `Superuser` account we can log in to the Django Admin site. We’ll use the Django Admin site to administrate the database and start seeding it with book data.
2.1 Install Required Software
Before we begin our back end project we’ll need to install some software:
2.1.1 Python
If your MacOS is up-to-date it likely already has `Python 2.7` installed. Feel free to use either `2.7` or `3.x`. They’re the same for the purposes of this tutorial.
Installation is simple. Download the installer and install as you would a typical MacOS application. Open up the terminal and check that it’s installed:
python --version
2.1.2 pip
In simple terms, pip (Pip Installs Packages) is a package management system. It’s used to install and manage software packages written in Python. In the terminal:
# cd into the desktop
cd ~/desktop

# download the pip Python script
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py

# run the script
python get-pip.py

# once installation completes, verify that it's installed
pip --version
Full installation documentation is available here.
2.1.3 virtualenv
virtualenv is a ‘tool to create isolated Python environments’. These environments have their own installation directories. They don’t share libraries with others. Such silos protect the globally installed libraries from unwanted changes.
With it we can play with Python libraries without messing up the global environment. For example, you install `exampleSoftware 1.0` on your computer. With a virtual environment activated you can upgrade to `exampleSoftware 1.2` and use it. This won’t affect the global install of `exampleSoftware 1.0` at all.

For the development of a particular app you may want to use `1.2` and for other contexts `1.0` will be appropriate. Virtual environments give us the ability to separate these contexts. Full installation documentation is available here.
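Here’s that isolation as a quick terminal sketch, reusing the made-up `exampleSoftware` package name from above (it doesn’t exist on PyPI, so substitute any real package if you want to try it):

# globally installed version stays at 1.0
pip install exampleSoftware==1.0

# create and activate a throwaway environment
virtualenv demo
source demo/bin/activate

# this 1.2 install lives only inside 'demo'
pip install exampleSoftware==1.2

# leave the environment; the global 1.0 install is untouched
deactivate
pip show exampleSoftware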
Now, open up the terminal to install virtualenv:
# use pip to install virtualenv
pip install virtualenv

# verify that it's installed
virtualenv --version
Let’s create a directory to house our virtual environments:
# cd into the root directory
cd ~/

# create a hidden folder called .envs for virtual environments
mkdir .envs

# cd into the virtual environments directory
cd .envs
We can now create a virtual environment for our project:
# create a virtual environment folder: my_library virtualenv my_library # activate the virtual environment from anywhere using source ~/.envs/my_library/bin/activate
Now that we’ve created a virtual environment called
my_library there are a few rules to keep in mind. Make sure the environment is always activated before installing, or updating any packages.
Finally, take a moment to upgrade pip inside this virtual environment:
pip install -U pip
2.1.4 Django 1.11 (LTS)
Django is a web framework that ‘encourages rapid development and clean, pragmatic design…’
It provides us with a set of common components so we don’t have to reinvent everything from scratch.
Examples include:
- a management panel
- a way to handle user authentication
Checkout out this DjangoGirls article to learn more about Django and why it’s used.
In this project we’ll be using Django to handle the back end. Along with its add-ons, Django provides the basic tools to develop a REST API.
# inside my_library with virtualenv activated pip install Django==1.11 # verify that it's installed, open up the Python shell python # access the django library and get the version (should be 1.11) import django print(django.get_version()) # exit using keyboard shortcut ctrl+D or: exit()
Full installation documentation is available here.
2.2 Start a Django Project: server
Let’s use the django-admin to generate a new Django project. This is Django’s ‘command-line utility for administrative tasks’:
# cd into the project folder cd ~/desktop/my_library # initialize the virtual environment source ~/.envs/my_library/bin/activate # use Django to create a project: server django-admin startproject server # cd into the new Django project cd server # synchronize the database python manage.py migrate # run the Django server python manage.py runserver
Now visit in your browser and confirm that the Django project is working:
You can shut down the server with
cmd+ctrl.
2.2.1 Create the Superuser account
We’ll have to create a superuser to login to the admin site and handle database data. Inside
my_library/server we run:
# create superuser python manage.py createsuperuser
Fill in the fields
Username,
Password. You should receive a success message.
Now run the server with
python manage.py runserver and go to
localhost:8000/admin to see the admin login page. Enter your superuser account details to login.
Nice! We have access to the Django admin site. Once we create the
books model and do the appropriate setup we’ll be able to add, edit, delete, and view book data.
Logout and shut down the server with
cmd+ctrl.
2.2.2 Protecting Our Secrets
Before moving on, we’ll want to update the settings.py file. It contains authentication credentials that we don’t want to expose to the public. We’ll want to keep these credentials out of our remote repository. There are many ways of protecting ourselves. This is my approach to it:
# create a config.json file to hold our configuration values my_library/server/server/config.json
Inside we’ll store our
SECRET_KEY value from
settings.py under
API_KEY:
{ "API_KEY" : "abcdefghijklmopqrstuvwxyz123456789" }
In
settings.py import the
json library and load the config variables:
import os import json BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) with open(BASE_DIR + '/server/config.json', 'r') as config: obj = json.load(config) SECRET_KEY = obj["API_KEY"] ...
So that
config.json (with the secret key) isn’t pushed to the repository, create a
.gitignore file in
my_library. This ignores it (along with some other autogenerated files and the database):
### Django ### config.json *.log *.pot *.pyc __pycache__/ local_settings.py db.sqlite3 media
Now when you commit the changes the files and folders listed above aren’t added. Our secrets are safe and our repo won’t contain unnecessary extra files!
2.3 Start a Django App: books
Think of Django apps as modules that plugin into your project. We’ll create an app called
books containing the models, views, and other settings. This is how we interact with the books data in the database.
What are the differences between projects and apps in Django? Check out this thread.
# create new app: books python manage.py startapp books # creates directory: my_library/server/books
Now we’ll install the
books app into the
server project. Open the settings file:
my_library/server/server/settings.py.
Scroll to the
INSTALLED_APPS array. Django has installed it's own core apps by default. Install the
books app at the end of the array:
INSTALLED_APPS = [ ... 'books' ]
2.4 Describe the Book model
Next we describe the
Book model in the books app. Open the models file
my_library/server/books/models.py.
Describe a
Book model which tells Django that every book in the database will have:
- a
titlefield up to 500 characters in length
- an
authorfield up to 100 characters
- a
descriptionfield with an open-ended number of characters
from django.db import models class Book(models.Model): title = models.CharField(max_length=500) author = models.CharField(max_length=100) description = models.TextField()
2.5 Register the Book model with the admin
Now we register the
Book model with the admin for our
books app. This lets us view it in the admin site and manipulate the books data from there. Open the admin file
my_library/server/books/admin.py and add:
from django.contrib import admin from .models import Book @admin.register(Book) class bookAdmin(admin.ModelAdmin): list_display = ['title', 'author', 'description']
With a new model created we’ll have to make and run migrations so that the database synchronizes:
python manage.py makemigrations python manage.py migrate
Run the server and go to
localhost:8000/admin to login. Notice that the Book model registered with the admin displays:
Clicking on ‘Books’ displays an empty list because there are no books in the database. Click ‘Add’ to begin creating a new book to add to the database. Go ahead and create a few books.
Save and go back to the list to view the new data. Now it displays the title, author, and description (
list_display array) fields.
This is great. We can now view our database books in the admin site. Create, edit, and delete functions are also available.
Note: For simplicity’s sake we’ll use the SQLite database. It comes preinstalled with the creation of every Django project. No need to do any extra work with databases for the purposes of this tutorial.
2.6 Conclusion
Congrats, we made it to the end of Section 2! This is what we’ve done so far:
- Installed
python
- Used
pythonto install the
pippackage manager
- Used
pipto install
virtualenvto create virtual environments
- Created a virtual environment in
~/.envscalled
my_library
- Activated the
my_libraryenvironment and upgraded
pip
- Installed
Django 1.11 LTSwithin the
my_libraryenvironment
- Created our project directory
my_library
- Created the Django project
server
- Created a
Superuseraccount to access the Django admin site
- Protected our secrets by moving our
SECRET_KEYinto
config.json
- Ignored autogenerated and/or sensitive files with
.gitignore
- Created a new app called
books
- Described the
Bookmodel
- Registered the
Bookmodel with the admin
- Added books data into the database
Section 3: Build a Server, then REST
In this section we use the Django REST Framework to build our
books API. It has serializers, views, and URLs that query, structure, and deliver the book data. The data and methods are accessible through API endpoints.
These endpoints are one end of a communication channel. Touchpoints of the communication between the API and another system. The other system in this context is our Ember front end client. The Ember client will interact with the database through the API endpoints. We create these endpoints with Django and the Django REST Framework.
We used Django to set up the
book model and the admin site that lets us interact with the database. Django REST Framework will help us build the REST API that the front end will use to interact with the back end.
3.1 Django REST Framework
Django REST Framework (DRF) builds on top of Django. It simplifies the creation of RESTful Web APIs. It comes with tools to make the process straightforward.
The developers of DRF have identified common patterns for serializers and views. Since our data and what users can do with it are simple, we’ll use the built-in serializers and views. Remember, our book data only has three fields
title,
author, and
description. Users are able create new records of books, edit, and delete existing records. This functionality is well within the range of basic common patterns. They’re well supported by the built-in serializers and views. We won’t have to build these from scratch.
For more complex projects you’ll want to overwrite defaults or make your own. Again, for the purposes of simplicity we’ll use what comes out of the box without undue modification.
3.1.1 Install Django REST Framework
Enter the
my_library directory and activate the virtual environment. To start working with DRF, install it with
pip:
# enter my_library cd ~/desktop/my_library # activate the virtual environment source ~/.envs/my_library/bin/activate # install Django REST Framework pip install djangorestframework # install Markdown support for the browsable API pip install markdown
Now open up
my_library/server/server/settings.py. Install DRF right above the
books app in the
INSTALLED_APPS array:
INSTALLED_APPS = [ ... 'rest_framework', 'books' ]
Add the default settings at the bottom of the file as an object called
REST_FRAMEWORK:
REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': [ 'rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly' ] }
The settings object contains a
DEFAULT_PERMISSION_CLASSES key with an array. The only item in the array is a permission class. This ‘allows unauthenticated users to have read-only access to the API’. Find out more about permissions here.
3.2 Create the books API folder
With DRF installed let’s start building the
books API. Create a new folder called
api inside the
books app. Then create an empty
__init__.py file within:
my_library/server/books/api/__init__.py.
The empty file tells Python that this folder is a Python module. The
api folder will contain the serializers, views, and URLs for our books data. I’ll get into the meanings of these terms in their respective sections below.
3.3 Create a book serializer
In simple terms, serializers take database data and restructure it. This structure is a blueprint for the data to alternate between application layers. It gets the front end and backend to speak to each other in a common language.
For example, the front end we’ll create expects the response returned to it from a request to be in the JSON format. Serializing the data to be in JSON ensures the front end will be able to read and write it.
from rest_framework import serializers from books.models import Book class bookSerializer(serializers.ModelSerializer): class Meta: model = Book fields = ( 'id', 'title', 'author', 'description', )
This serializer takes the data and transforms it into the JSON format. This ensures that it’s understandable to the front end.
Imports
We import built-in
serializers from DRF, and the
Book model from our
books app.
from rest_framework import serializers from books.models import Book
The bookSerializer Class
For this project we want a
Serializer class that ‘corresponds to the Model fields’. The serializer should map to the model fields
title,
author, and
description. We can do this with the
ModelSerializer. According to the documentation:
The
ModelSerializer class is the same as a regular
Serializer class, except that:
- It will generate a set of fields for you, based on the model.
- It will generate validators for the serializer, such as unique_together validators.
- It includes simple default implementations of
.create()and
.update().
The built-in tools are more than capable of handling our basic needs.
class bookSerializer(serializers.ModelSerializer): class Meta: model = Book fields = ( 'id', 'title', 'author', 'description', )
3.4 Create a view to GET and POST books data
View functions take in a web request and return web responses. A web request to
localhost:8000/api/books for example elicits a response from the server.
This response can be ‘HTML contents of a Web page, or a redirect, or a 404 error, or an XML document, or an image . . . or anything…’ In our case we expect to get back books data structured in the JSON format.
Create the views file in
my_library/server/books/api/views.py:
from rest_framework import generics, mixins from books.models import Book from .serializers import bookSerializer class bookAPIView(mixins.CreateModelMixin, generics.ListAPIView): resource_name = 'books' serializer_class = bookSerializer def get_queryset(self): return Book.objects.all() def post(self, request, *args, **kwargs): return self.create(request, *args, **kwargs)
Imports
First we import
generics and
mixins from DRF. Then the
Book model from our
books app and the
bookSerializer that we created.
generics refers to API views that ‘map to your database models’. These are ‘pre-built views that provide for common patterns’.
mixins are classes that ‘provide the actions that used to provide the basic view behavior’. Our book model is simplistic. It only has
title,
author, and
description attributes so these provide us with the basics we need.
from rest_framework import generics, mixins from books.models import Book from .serializers import bookSerializer
The bookAPI View
We then create a
bookAPIView which takes in the
CreateModelMixin and
ListAPIView.
CreateModelMixin provides a
.create(request, *args, **kwargs) method. This implements the creation and persistence of a new model instance. When successful it returns a
201 Create response. This comes with a serialized representation of the object that it created.
For example, we would make a POST request to create a new book record for the Steve Jobs book by Walter Isaacson. If successful we get back a response with the code
201. The serialized representation of the book record like so:
{ "data": { "type": "books", "id":"10", "attributes": { "title": "Steve Jobs", "author": "Walter Isaacson", "description": "Based on more than forty interviews with Jobs conducted over two years—as..." } } }
When unsuccessful, we’ll get back a
400 Bad Request response with errors details. For example, if we try to create a new book record but don’t provide any
title information:
{ "errors":[ { "status": "400", "source": { "pointer": "/data/attributes/title" }, "detail": "This field may not be blank." } ] }
ListAPIView serves our read-only endpoints (GET). It represents ‘a collection of model instances’. We use it when we want to get all or many books.
bookAPIView also takes in the recently created
bookSerializer for its
serializer_class.
We set the
resource_name to ‘books’ to ‘specify the type key in the json output’. The front end client data store layer will have a
book model that is case sensitive. We don’t want to
book model in Ember and the
Book model in Django to clash. Setting the
resource_name here nips that issue in the bud.
class bookAPIView(mixins.CreateModelMixin, generics.ListAPIView): resource_name = 'books' serializer_class = bookSerializer
Functions
The function
get_queryset returns all the book objects in the database.
post takes in the request and arguments and creates a new database record of a book if the request is valid.
def get_queryset(self): return Book.objects.all() def post(self, request, *args, **kwargs): return self.create(request, *args, **kwargs)
3.5 Create URLs to access books data
URL patterns map a URL to views. For example, visiting
localhost:8000/api/books should map to a URL pattern. That then returns the results of a query to that view.
Create the URLs file in
my_library/server/books/api/urls.py:
from .views import bookAPIView from django.conf.urls import url urlpatterns = [ url(r'^$', bookAPIView.as_view(), name='book-create'), ]
Imports
We import our view
bookAPIView and
url. We’ll use
url to create a list of url instances.
from .views import bookAPIView from django.conf.urls import url
booksAPI URL patterns
In the
urlpatterns array we create a URL pattern with the following structure:
- the pattern
r'^$'
- the Python path to the view
bookAPIView.as_view()
- the name
name='book-create'
The pattern
r’^$’is a regular expression that ‘matches an empty line/string’. This means it matches to
localhost:8000. It matches to anything that comes after the base URL.
We call
.as_view() on
bookAPIView because to connect the view to the url. It ‘is the function(class method) which will connect [the] class with its url’. Visit a particular URL and the server attempts to match it to the URL pattern. That pattern will then return the
bookAPI view results that we’ve told it to respond with.
The
name=’book-create’ attribute provides us with a
name attribute. We use it to refer to our URL throughout the project. Let’s say you want to change the URL or the view it refers to. Change it here. Without
name we would have to go through the entire project to update every reference. Check out this thread to find out more.
urlpatterns = [ url(r'^$', bookAPIView.as_view(), name='book-create'), ]
server URL patterns
Now let’s open up
server’s URLs file
my_library/server/server/urls.py:
from django.conf.urls import url, include from django.contrib import admin urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^api/books', include('books.api.urls', namespace='api-books')) ]
Here we import
include and create the
r’^api/books’ pattern which takes in any URLs we created in the
api folder. Now the base URL for our
books API URLs becomes
localhost:8000/api/books. Visiting this URL will match to our
r’^/api/books’ pattern. This matches to the
r’^$’ pattern we constructed in the
books API.
We use
namespace=’api-books’ so that the URLs don’t collide with each other. This would happen if they’re named the same in another app we create. Learn more about why we use
namespaces in this thread.
3.5.1 Demonstration: Browsing the books API
Now that we have the base REST framework setup let’s check out the data the back end is returning. With the server running, visit
localhost:8000/api/books. The browsable API should return something like this:
3.6 Conclusion
Awesome, we’re getting going now. By the end of Section 3 we’ve completed the following steps:
- Installed Django REST Framework into our project
- Started building the
booksAPI
- Created a
serializerfor books
- Created a
viewfor books
- Created
URLsfor books
- Browsed the books API that returns book data from the back end
Section 4: Laying Down Front-end Foundations
In this section we shift our attention to the front end and begin working with the Ember framework. We’ll install the required software, set up a basic DOM, styles, create the
book model, and the
books route. We’ll also load up fake book data for demonstration purposes before we go on to access real data from the back end.
4.1 Install Required Software
To begin front-end development we need to install some software:
4.1.1 NodeJS and NPM
NodeJS is an open source server environment. We don’t need to get into the details right now. NPM is a package manager for Node.js packages. We use it to install packages like the Ember CLI.
Install NodeJS and NPM using the installation file from the official site.
Once installation is complete check that everything installed:
node --version npm --version
4.1.2 Ember CLI
Let’s use NPM to install the Ember CLI. That’s the ‘official command line utility used to create, build, serve, and test Ember.js apps and addons’. Ember CLI comes with all the tools we need to build the front end of our application.
# install Ember CLI npm install -g ember-cli # check that it's installed ember --version
4.2 Start an Ember Project: client
Let’s create a front end client called
client using Ember CLI:
# cd into the main project folder cd ~/desktop/my_library # create a new app: client ember new client # cd into the directory cd client # run the server ember s
Head over to and you should see this screen:
The base Ember client project is running as expected. You can shut down the server with
ctrl+C.
4.2.1 Update .gitignore with Ember exclusions
Before we make any new commits, let’s update the
.gitignore file. We want to exclude unwanted files from from the repo. Add on to the file below the Django section:
... ### Ember ### /client/dist /client/tmp # dependencies /client/node_modules /client/bower_components # misc /client/.sass-cache /client/connect.lock /client/coverage/* /client/libpeerconnection.log /client/npm-debug.log /client/testem.log # ember-try /client/.node_modules.ember-try/ /client/bower.json.ember-try /client/package.json.ember-try
4.3 Displaying books data
4.3.1 Setup the DOM
Now that we’ve generated a base project, let’s set up a basic DOM and styles. I’m not doing anything fancy here. It’s the least necessary to have our data displaying in a readable format.
Locate the file
client/app/templates/application.hbs. Get rid of
{{welcome-page}} and the comments .
Next, create a
div with the class
.nav. Use Ember’s built-in
{{#link-to}} helper to create a link to the route
books(we’ll create it later):
<div class="nav"> {{#link-to 'books' class="nav-item"}}Home{{/link-to}} </div>
Wrap everything including the
{{outlet}} in a
div with the
.container class. Each route template will render inside
{{outlet}}:
<div class="container"> <div class="nav"> {{#link-to 'books' class="nav-item"}}Home{{/link-to}} </div> {{outlet}} </div>
This is the template for the parent level
application route. any sub-routes like
books will render inside the
{{outlet}}. This means that the
nav will always be visible on screen.
4.3.2 Create styles
I’m not going to get into the nitty-gritty of the CSS. It’s pretty simple to figure out. Locate the file
client/app/styles/app.css and add the following styles:
Variables and Utilities
:root { --color-white: #fff; --color-black: #000; --color-grey: #d2d2d2; --color-purple: #6e6a85; --color-red: #ff0000; --font-size-st: 16px; --font-size-lg: 24px; --box-shadow: 0 10px 20px -12px rgba(0, 0, 0, 0.42), 0 3px 20px 0px rgba(0, 0, 0, 0.12), 0 8px 10px -5px rgba(0, 0, 0, 0.2); } .u-justify-space-between { justify-content: space-between !important; } .u-text-danger { color: var(--color-red) !important; }
General
body { margin: 0; padding: 0; font-family: Arial; } .container { display: grid; grid-template-rows: 40px calc(100vh - 80px) 40px; height: 100vh; }
Navigation
.nav { display: flex; padding: 0 10px; background-color: var(--color-purple); box-shadow: var(--box-shadow); z-index: 10; } .nav-item { padding: 10px; font-size: var(--font-size-st); color: var(--color-white); text-decoration: none; } .nav-item:hover { background-color: rgba(255, 255, 255, 0.1); }
Headings
.header { padding: 10px 0; font-size: var(--font-size-lg); }
Books List
.book-list { padding: 10px; overflow-y: scroll; } .book { display: flex; justify-content: space-between; padding: 15px 10px; font-size: var(--font-size-st); color: var(--color-black); text-decoration: none; cursor: pointer; } .book:hover { background: var(--color-grey); }
Buttons
button { cursor: pointer; }
Book Detail
.book.book--detail { flex-direction: column; justify-content: flex-start; max-width: 500px; background: var(--color-white); cursor: default; } .book-title { font-size: var(--font-size-lg); } .book-title, .book-author, .book-description { padding: 10px; }
Add/Edit Book Form
.form { display: flex; flex-direction: column; padding: 10px 20px; background: var(--color-white); } input[type='text'], textarea { margin: 10px 0; padding: 10px; max-width: 500px; font-size: var(--font-size-st); border: none; border-bottom: 1px solid var(--color-grey); outline: 0; }
Actions
.actions { display: flex; flex-direction: row; justify-content: flex-end; padding: 10px 20px; background-color: var(--color-white);; box-shadow: var(--box-shadow) }
4.4 The books route
4.4.1 Create the books route
Now we have our styles and container DOM in place. Let’s generate a new route that will display all the books in our database:
ember g route books
The router file
client/app/router.js updates with:
import EmberRouter from '@ember/routing/router'; import config from './config/environment'; const Router = EmberRouter.extend({ location: config.locationType, rootURL: config.rootURL }); Router.map(function() { this.route('books'); }); export default Router;
4.4.2 Load fake data in the model hook
Let’s edit the books route
client/app/routes/books.js to load all books from the database.
import Route from '@ember/routing/route'; export default Route.extend({ model() { return [ {title: 'Monkey Adventure'}, {title: 'Island Strife'}, {title: 'The Ball'}, {title: 'Simple Pleasures of the South'}, {title: 'Big City Monkey'} ] } });
The model hook is returning an array of objects. This is fake data for demonstration purposes. We’ll come back here later and load the actual data from the database using Ember Data when we’re ready.
4.4.3 Update the books route template
Let’s edit the books route template
client/app/templates/books.hbs. We want to display the books returned in the model.
<div class="book-list"> {{#each model as |book|}} <div class="book"> {{book.title}} </div> {{/each}} </div>
Ember uses the Handlebars Template Library. Here we use the
each helper to iterate through our array of books data in
model. We wrap each of the items in the array in a
div with the class
.book. Access and display it’s title information with
{{book.title}}.
4.4.4 Demonstration: books route loading and displaying fake data
Now that we have the DOM,
book model, and
books route setup with some fake data we can see this running in the browser. Take a look at
localhost:4200/books:
4.4.5 Create application route for redirect
It’s kind of annoying to have to put a
/books to visit the
books route. Let’s generate the
application route. We can use the
redirect hook to redirect to the
books route when we enter the base route
/.
ember g route application
If prompted to overwrite the
application.hbs template, say no. We don’t want to overwrite the template we already set up.
In
client/app/routes/application.js create the
redirect hook:
import Route from '@ember/routing/route'; export default Route.extend({ redirect() { this.transitionTo('books'); } });
Now, if you visit
localhost:4200 it will redirect to
localhost:4200/books.
4.5 Displaying real data in the books route
4.5.1 Create an application adapter
We don’t want to use fake data forever. Let’s connect to the back end using an adapter and start pulling the books data into the client. Think of the adapter as an “object that receives requests from a store’. It ‘translates them into the appropriate action to take against your persistence layer…’
Generate a new application adapter:
ember g adapter application
Locate the file
client/app/adapters/application.js and update it:
import DS from 'ember-data'; import { computed } from '@ember/object'; export default DS.JSONAPIAdapter.extend({ host: computed(function(){ return ''; }), namespace: 'api' });
The JSONAPIAdapter is the ‘default adapter used by Ember Data’. It transforms the store’s requests into HTTP requests that follow the JSON API format. It plugs into the data management library called Ember Data. We use Ember Data to interface with the back end in a more efficient way. It can store and manage data in the front end and make requests to the back end when required. This means minor page updates don’t need constant requests to the back end. This helps the user experience feel more fluid with generally faster loading times
We’ll use its
store service to access
server data without writing more complex
ajax requests. These are still necessary for more complex use cases though.
Here the adapter is telling Ember Data that its
host is at
localhost:8000, namespaced to
api. This means that any requests to the server start with.
4.5.2 Create the book model
Ember Data has particular requirements for mapping its data to what comes from the back end. We’ll generate a
book model so it understands what the data coming from the back end should map to:
ember g model book
Locate the file in
client/models/book.js and define the
book model:
import DS from 'ember-data'; export default DS.Model.extend({ title: DS.attr(), author: DS.attr(), description: DS.attr() });
The attributes are the same as those we’ve defined in the back end. We define them again so that Ember Data knows what to expect from the structured data.
4.5.3 Update the
books route
Let’s update the books route by importing the
store service and using it to request data.
import Route from '@ember/routing/route'; import { inject as service } from '@ember/service'; export default Route.extend({ store: service(), model() { const store = this.get('store'); return store.findAll('book'); } });
4.5.4 Demonstration: books has a CORS issue
So far we’ve created an application adapter and updated the
books route to query for all books in the database. Let’s see what we’re getting back.
Run both the Django and Ember servers. Then visit
localhost:4200/books and you should see this in the console:
There seems to be a problem with CORS.
4.5.5 Resolve the Cross-Origin Resource Sharing (CORS) issue
CORS defines a way in which browser and server interact to determine whether it’s safe to allow a request. We’re making a cross-origin request from
localhost:4200 to
localhost:8000/api/books. From the client to the server with the purpose of accessing our books data.
Currently, the front end isn’t an allowed origin to request data from our back-end endpoints. This block is causing our error. We can resolve this issue by allowing requests to pass through.
Begin by installing an app that adds CORS headers to responses:
pip install django-cors-headers
Install it into
server's
settings.py file under the
INSTALLED_APPS array:
INSTALLED_APPS = [ ... 'books', 'corsheaders' ]
Add it to the top of the
MIDDLEWARE array:
MIDDLEWARE = [ 'corsheaders.middleware.CorsMiddleware', ... ]
Finally, allow all requests to get through during development:
CORS_ORIGIN_ALLOW_ALL = DEBUG
4.5.6 Demonstration: CORS issue resolved, incompatible data format
Visit
localhost:4200 and you should see this in the console:
Looks like we solved the CORS issue and we’re receiving a response from
server with the data that we expect:
[ { "id": 1, "title": "Conquistador", "author": "Buddy Levy", "description": "It was a moment unique in ..." }, { "id": 2, "title": "East of Eden", "author": "John Steinbeck", "description": "In his journal, Nobel Prize ..." } ]
Although get an array of objects in JSON format, it’s still not in the format we want it to be. This is what Ember Data expects:
{ data: [ { id: "1", type: "book", attributes: { title: "Conquistador", author: "Buddy Levy", description: "It was a moment unique in ..." } }, { id: "2", type: "book", attributes: { title: "East of Eden", author: "John Steinbeck", description: "In his journal, Nobel Prize ..." } } ] }
Close but not quite there yet.
4.6 Conclusion
We’ve completed the following steps in Section 4:
- Installed NodeJS and NPM
- Installed the Ember CLI and created a new client project
- Basic DOM setup
- Created a
booksroute and template to load and display books
- Demonstrated the app running with fake data
- Created an application adapter to connect to the back end and receive data
- Created a
bookmodel and updated the
booksroute to capture back-end data
- Demonstrated that the back-end data isn’t structured in the way that Ember Data expects it to be
Section 5: Correct data formats, deal with individual records
In this section we’ll use the Django REST Framework JSON API to structure the data in a way that Ember Data can work with. We’ll also update the
books API to return book a single instance of a book record. We’ll also add the functionality to add, edit, and create books. Then we’re done with our application!
5.1 Install the Django REST Framework JSON API
First we use pip to install the Django REST Framework JSON API (DRF). It will transform regular DRF responses into an
identity model in JSON API format.
With the virtual environment enabled:
# install the Django REST Framework JSON API pip install djangorestframework-jsonapi
Next, update DRF settings in
server/server/settings.py:
REST_FRAMEWORK = { 'PAGE_SIZE': 100, 'EXCEPTION_HANDLER': 'rest_framework_json_api.exceptions.exception_handler', 'DEFAULT_PAGINATION_CLASS': 'rest_framework_json_api.pagination.JsonApiP', 'DEFAULT_FILTER_BACKENDS': ( 'rest_framework.filters.OrderingFilter', ), 'ORDERING_PARAM': 'sort', 'TEST_REQUEST_RENDERER_CLASSES': ( 'rest_framework_json_api.renderers.JSONRenderer', ), 'TEST_REQUEST_DEFAULT_FORMAT': 'vnd.api+json' }
These override the default settings for DRF with defaults from the JSON API. I increased the
PAGE_SIZE so we can get up to 100 books back in a response.
5.2 Working with individual book records
5.2.1 Create a view
Let’s also update our
books API so that we can retrieve single instances of a book record.
Create a new view called
bookRudView in
server/books/api/views.py:
class bookRudView(generics.RetrieveUpdateDestroyAPIView): resource_name = 'books' lookup_field = 'id' serializer_class = bookSerializer def get_queryset(self): return Book.objects.all()
This view uses the
id
lookup_field to retrieve an individual book record. The RetrieveUpdateDestroyAPIView provides basic
GET,
PUT,
PATCH and
DELETE method handlers. As you might imagine these let us create, update, and delete individual book data.
5.2.2 Update the book API URLs
We’ll need to create a new URL pattern that delivers data through the
bookRudView.
from .views import bookAPIView, bookRudView from django.conf.urls import url urlpatterns = [ url(r'^$', bookAPIView.as_view(), name='book-create'), url(r'^(?P<id>\d+)', bookRudView.as_view(), name='book-rud') ]
Import
bookRudView, match it to the pattern
r'^(?P<id>;\d+)', and give it the name
book-rud.
5.2.3 Update the server URLs
Finally, update the
books API URL pattern in
server/server/urls.py. We want to match to patterns which begin after
books/:
... urlpatterns = [ ... url(r'^api/books/?', include('books.api.urls', namespace='api-books')), ]
5.2.4 Demonstration: Access a single book record
Now if you visit
localhost:8000/api/books/1 it should display a single book record that matches to a book’s
id:
Notice that we have access to the
DELETE,
PUT,
PATCH and other methods. These come with
RetrieveUpdateDestroyAPIView.
5.2.5 Demonstration: Capturing and displaying data from the back end in the correct format
With the
JSONAPI installed the back end should be sending back responses Ember can work with. Run both servers and visit
localhost:4200/books. We should get back real data from the back end and have the route display it. Success!
Take a look at the response coming through. It’s in the valid
JSONAPI format that Ember Data works with.
5.3 The book Route
We can now view the list of books from our database in the
books route. Next, let’s create a new route in the front-end
client. It will display individual books in detail with
title,
author, and
description data.
5.3.1 Create the
book route
Generate a new route for the individual book page:
ember g route book
In
router.js update the new route with the path
‘books/:book_id’. This overrides the default path and takes in a
book_id parameter.
... Router.map(function() { this.route('books'); this.route('book', { path: 'books/:book_id' }); }); ...
Next update the
book route
client/app/routes/book.js to retrieve a single book record from the database:
import Route from '@ember/routing/route'; import { inject as service } from '@ember/service'; export default Route.extend({ store: service(), model(book) { return this.get('store').peekRecord('book', book.book_id); } });
As outlined in
router.js the
book route takes in the
book_id parameter. The parameter goes into the route’s
model hook and we use it to retrieve the book with the Ember Data
store.
5.3.2 Update the
book template
Our
client/app/templates/book.hbs template should display the book data we get back from the
store. Get rid of
{{outlet}} and update it:
<div class="book book--detail"> <div class="book-title"> {{model.title}} </div> <div class="book-author"> {{model.author}} </div> <div class="book-description"> {{model.description}} </div> </div>
Like in the
books template we access the
model attributes using dot notation.
5.3.3 Update the
books template
Finally, let’s update the
books template. We want to link to each individual book page as displayed in the
book route we created:
<div class="book-list"> {{#each model as |book|}} {{#link-to 'book' book.id class="book"}} {{book.title}} {{/link-to}} {{/each}} </div>
Wrap the
book.title with the
link-to helper. It works like this:
- creates a link to the
bookroute
- takes in the
book.idas a parameter
- takes a
classto style the
<;a> tag generated in the DOM.
5.3.4 Demonstration: Select book to view detailed information
Now check out
localhost:4200/books. We can click on our books to get a detailed view. Sweet!
5.4 Conclusion
We’ve come to the end of Section 5 with the following steps completed:
- Identified the problem with the data coming from Django
- Installed the Django REST Framework JSON API
- Updated the
booksroute template
- Created the
bookroute and template
Section 6: Functional Front end
In this section we’ll add the following functionality to the front-end experience:
- Add a new book with the fields title, author, and description
- Edit an existing book’s title, author, and description fields
- Delete an existing book
That’s all we have to do to complete the rest of our application. We come a long way. Let’s push on to the end!
6.1 Adding a new book to the database
We can now view all the books from the database and view individual book records in detail. It’s time to build the functionality to add a new book to the database. These are the steps we’ll take to make that happen:
- The
create-bookroute handles the process of creating a new book and adding it to the database
- The
create-booktemplate will have a form with two inputs and a text area to take in a
title,
author, and
description
- The
create-bookcontroller handles the data entered into the form
6.1.1 Create the create-book route and controller
Generate the
create-book route to handle new book creation:
ember g route create-book
Create a controller of the same name to hold form data:
ember g controller create-book
6.1.2 Setup the
create-book controller
In
client/app/controllers/create-book.js create a computed property called
form. It will return an object with our book data attributes. This is where we capture the new book data entered in by the user. It’s empty by default.
import Controller from '@ember/controller'; import { computed } from '@ember/object'; export default Controller.extend({ form: computed(function() { return { title: '', author: '', description: '' } }) });
6.1.3 Setup the
create-book route
In
client/app/routes/create-book.js we do the following:
- create actions to confirm creation of a new book
- cancel the creation process
- use a route hook to clear the form data upon entering the route:
import Route from '@ember/routing/route'; import { inject as service } from '@ember/service'; export default Route.extend({ store: service(), setupController(controller, model) { this._super(controller, model); this.controller.set('form.title', ''); this.controller.set('form.author', ''); this.controller.set('form.description', ''); }, actions: { create() { const form = this.controller.get('form'); const store = this.get('store'); const newBook = store.createRecord('book', { title: form.title, author: form.author, description: form.description }); newBook.save() .then(() => { this.transitionTo('books'); }); }, cancel() { this.transitionTo('books'); } } });
The
setupController hook allows us to reset the form’s values. This is so that they don’t persist when we go back and forth through pages. We don’t want to click away to another page without having completed the create book process. If we do, we’ll come back to see the unused data still sitting in our form.
The
create() action will take the form data and create a new record with the Ember Data
store. It then persists it to the Django back end. Once complete it will transition the user back to the
books route.
The
cancel button transitions the user back to the
books route.
6.1.4 Setup the
create-book template
Next, in
client/app/template/create-book.hbs we build the form:
<form class="form"> <div class="header"> Add a new book </div> {{input value=form.title <div> <button {{action 'create'}}> Create </button> <button {{action 'cancel'}}> Cancel </button> </div> </div>
The
form uses the built-in
{{input}} helpers to:
- take in values
- display placeholders
- turn autocomplete off.
The
{{text}} area helper works in a similar way, with the addition of the number of rows.
The actions
div contains the two buttons to create and cancel. Each button ties to its namesake action using the
{{action}} helper.
6.1.5 Update the
books route template
The final piece of the create book puzzle is to add a button in the
books route. It will get us into the
create-book route and begin creating a new book.
Add on to the bottom of
client/app/templates/books.hbs:
... {{#link-to 'create-book' class='btn btn-addBook'}} Add Book {{/link-to}}
6.1.6 Demonstration: Can add a new book
Now if we go back and try to create a new book again we’ll find success. Click into the book to see a more detailed view:
6.2 Deleting a book from the database
Now that we can add books to the database we should be able to delete them too.
6.2.1 Update the
book route template
First update the
book route’s template. Add on under
book book--detail:
... <div class="actions {{if confirmingDelete 'u-justify-space-between'}}"> {{#if confirmingDelete}} <div class="u-text-danger"> Are you sure you want to delete this book? </div> <div> <button {{action 'delete' model}}>Delete</button> <button {{action (mut confirmingDelete)false}}> Cancel </button> </div> {{else}} <div> <button {{action (mut confirmingDelete) true}}>Delete</button> </div> {{/if}} </div>
The
actions
div contains the buttons and text for the book deletion process.
We have a
bool called
confirmingDelete which will be set on the route’s
controller.
confirmingDelete adds the
.u-justify-space-between utility class on
actions when it’s
true.
When it’s true, it also displays a prompt with the utility class
.u-text-danger. This prompts the user to confirm deletion of the book. Two buttons show up. One to run
delete action in our route. The other uses the
mut helper to flip
confirmingDelete to
false.
When
confirmingDelete is
false (the default state) a single
delete button display. Clicking it flips
confirmingDelete to
true. This then displays the prompt and the other two buttons.
6.2.2 Update the
book route
Next update the
book route. Under the
model hook add:
setupController(controller, model) { this._super(controller, model); this.controller.set('confirmingDelete', false); },
In
setupController we call
this._super(). This is so the controller goes through its usual motions before we do our business. Then we set
confirmingDelete to
false.
Why do we do this? Let’s say we start to delete a book, but leave the page without either cancelling the action or deleting the book. When we go to any book page
confirmingDelete would be set to
true as a leftover.
Next let’s create an
actions object that will hold our route actions:
actions: { delete(book) { book.deleteRecord(); book.save().then(() => { this.transitionTo('books'); }); } }
The
delete action as referenced in our template takes in a
book. We run
deleteRecord on the
book and then
save to persist the change. Once that promise completes
transitionTo transitions to the
books route (our list view).
6.2.3 Demonstration: Can delete an existing book
Let’s check this out in action. Run the servers and select a book you want to delete.
When you delete the book it redirects to the
books route.
6.3 Editing a book in the database
Last but not least we’ll add the functionality to edit an existing books information.
6.3.1 Update the
book route template
Open up the
book template and add a form to update book data:
{{#if isEditing}} <form class="form"> <div class="header">Edit</div> {{input value=form.title <div> <button {{action 'update' model}}>Update</button> <button {{action (mut isEditing) false}}>Cancel</button> </div> </div> {{else}} ... <div> <button {{action (mut isEditing) true}}>Edit</button> <button {{action (mut confirmingDelete) true}}>Delete</button> </div> ... {{/if}}
First let’s wrap the entire template in an
if statement. This corresponds to the
isEditing property which by default will be
false.
Notice that the form is very almost identical to our create book form. The only real difference is that the actions
update runs the
update action in the
book route. The
cancel button also flips the
isEditing property to
false.
Everything we had before gets nested inside the
else. We add the
Edit button to flip
isEditing to true and display the form.
6.3.2 Create a
book controller to handle form values
Remember the
create-book controller? We used it to hold the values that’s later sent to the server to create a new book record.
We’ll use a similar method to get and display the book data in our
isEditing form. It will pre-populate the form with the current book’s data.
Generate a book controller:
ember g controller book
Open
client/app/controllers/book.js and create a
form computed property like before. Unlike before we’ll use the
model to pre-populate our form with the current
book data:
import Controller from '@ember/controller'; import { computed } from '@ember/object'; export default Controller.extend({ form: computed(function() { const model = this.get('model'); return { title: model.get('title'), author: model.get('author'), description: model.get('description') } }) });
6.3.3 Update the
book route
We’ll have to update our route again:
setupController(controller, model) { ... this.controller.set('isEditing', false); this.controller.set('form.title', model.get('title')); this.controller.set('form.author', model.get('author')); this.controller.set('form.description', model.get('description')); },
Let’s add on to the
setupController hook. Set
isEditing to
false and reset all the form values to their defaults.
Next let’s create the
update action:
actions: { ... update(book) { const form = this.controller.get('form'); book.set('title', form.title); book.set('author', form.author); book.set('description', form.description); book.save().then(() => { this.controller.set('isEditing', false); }); } }
It’s pretty straightforward. We get the form values, set those values on the
book and persist with
save. Once successful we flip
isEditing back to
false.
6.3.4 Demonstration: Can edit information of an existing book
6.4 Conclusion
We’ve completed the following steps by the end of Section 6:
- Identified the problem with the data coming from Django
- Installed JSON API into Django
- Updated the Books Route Template
- Created the book detail route and template
- Can view, add, edit, and delete database records from the EmberJS client
That’s it. We’ve done it! We built a very basic full stack application using Django and Ember.
Let’s step back and think about what we’ve built for a minute. We have an application called
my_library that:
- lists books from a database
- allows users to view each book in more detail
- add a new book
- edit an existing book
- delete a book
As we built the application we learned about Django and how it’s used to administer the database. We created models, serializers, views, and URL patterns to work with the data. We used Ember to create a user interface to access and change the data through the API endpoints.
Section 7: Moving On
7.1 What’s Next
If you’ve gotten this far, you’ve finished the tutorial! The application is running with all the intended functionality. That’s a lot to be proud of. Software development, complicated? That’s an understatement. It can feel downright inaccessible even with all the resources available to us. I get that feeling all the time.
What works for me is to take frequent breaks. Get up and walk away from what you’re doing. Do something else. Then get back and break down your problems step by step into the smallest units. Fix and refactor until you get to where you want to be. There are no shortcuts to building your knowledge.
Anyways, we’ve might have done a lot here for an introduction but we’re only scratching the surface. There is plenty more for you to learn about full stack development. Here are some examples to think about:
- user accounts with authentication
- testing functionality of the application
- deploying the application to the web
- writing the REST API from scratch
When I have time I’ll look into writing more on these topics myself.
I hope you found this tutorial useful. It’s intended to serve as a jump-off point for you to learn more about Django, Ember and full stack development. It was definitely a learning experience for me. Shoutout to my Closing Folders team for the support and encouragement. We’re hiring now so feel free to get in touch!
If you’d like to reach out you can contact me through the following channels:
7.2 Further Reading
Writing this tutorial forced me confront the edges of my knowledge. Here are the resources that helped with my comprehension of the topics covered:
What is a full stack programmer?
What is a web application?
What is Django?
What is EmberJS?
What is version control?
What is Git?
How do I use Git with Github?
How do I create a Git repository?
How do I add a Git remote?
What is a model?
What is a view?
What is a superuser?
What is making a migration?
What is migrating?
What is SQLite?
JSON Python Parsing: A Simple Guide
How to secure API keys
What is Python?
What is pip?
What is virtualenv?
Best practices for virtualenv and git repo
What is an API?
What are API endpoints?
What is the Django REST Framework?
What is __init__.py?
What is a serializer?
What are views?
What are URLS?
What is JSON?
What are regular expressions?
What does __init__.py do?
What is REST?
What is Node.js?
What is NPM?
What is EmberJS?
What is Ember CLI?
What is Ember-Data?
What is a model?
What is a route?
What is a router?
What is a template?
What is an adapter?
What is the Django REST Framework JSON API?
What is the JSON API format?
What is dot notation? | https://www.freecodecamp.org/news/eli5-full-stack-basics-breakthrough-with-django-emberjs-402fc7af0e3/ | CC-MAIN-2022-05 | refinedweb | 10,904 | 59.3 |
A curated list of the most popular JavaScript libraries to boost your productivity.
JavaScript is the "Lingua Franca" of the Web. It is also the second most popular programming language in the world, just behind Python.
If the current trend continues, JavaScript will soon overtake Python as the most popular programming language. One of the key characteristics of JavaScript is its small standard library. To compensate, JavaScript has millions of libraries (packages), and the NPM package manager ecosystem is one of the most popular and vibrant package-manager ecosystems in the software development industry.
As a JavaScript developer, you should have a good idea of the best and most essential JavaScript libraries and use them instead of reinventing the wheel. So, what are the most popular JavaScript libraries?
Here I am listing the 10 most popular JavaScript libraries based on the following factors:
- The number of downloads.
- The number of dependent projects.
- The number of GitHub stars.
Please note that this list is about libraries, not frameworks. Also, I am listing libraries across the entire stack: front end and back end.
Whether you are developing front-end or back-end applications in JavaScript, you will feel the need for a basic utility library. Lodash is by far the most commonly used general-purpose JavaScript utility library. It provides utility functions for common programming tasks using the functional programming paradigm. It is built upon the popular JavaScript library underscore.js, and it makes JavaScript coding easier and cleaner.
Main Features
- Common functions to iterate over arrays, objects, and strings.
- Manipulate and test values and numbers.
- Create composite functions.
- Advanced functional programming features.
Popularity
With 36 million weekly downloads and around 123.9k dependent packages, Lodash is by far the most influential and pervasive JavaScript library:
With 46.8k stars, it is also one of the most popular JavaScript libraries on GitHub and the third most starred library on this list:
Installation
npm i --save lodash
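To get a feel for the library, here is a minimal sketch of a few common Lodash helpers; the sample data and the composed function are made up for illustration:

```js
// Lodash in action: collection helpers and function composition.
const _ = require('lodash');

const users = [
  { name: 'Ada', active: true },
  { name: 'Linus', active: false },
];

// Iterate and filter collections with a terse, functional style.
console.log(_.filter(users, 'active')); // [{ name: 'Ada', active: true }]

// Compose small functions into a single pipeline.
const shout = _.flow([_.toUpper, (s) => `${s}!`]);
console.log(shout('lodash')); // 'LODASH!'
```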
Link
Console output is one of the most used JavaScript debugging techniques in front-end and back-end development alike. If you want to make your console output stylish and colorful, the JavaScript library Chalk can be very handy. With the slogan "Terminal string styling done right", it helps you style console output simply and quickly. It has an expressive and highly performant API.
Main Features
- Supports 256 colors and Truecolor.
- Auto-detects color support.
- It is clean and focused.
- It can nest styles.
Popularity
With around 60 million weekly downloads and 61k dependent packages, Chalk is one of the most downloaded JavaScript libraries:
With 15.2k stars, it is also one of the most popular JavaScript libraries on GitHub:
Installation
npm i --save chalk
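As a quick illustration, here is what styled and nested console output looks like; this sketch assumes the CommonJS API of chalk 4 (newer major versions are ESM-only):

```js
const chalk = require('chalk');

// Basic colors and modifiers can be chained.
console.log(chalk.green('Success:'), 'build finished');
console.log(chalk.red.bold('Error:'), 'something went wrong');

// Styles can be nested inside each other.
console.log(chalk.blue(`info ${chalk.underline.yellow('details')} end`));
```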
Link
JavaScript is the programming language of the Web. In modern times, JavaScript libraries and frameworks are in pole position for developing front-end web applications. Among all JavaScript libraries and frameworks, one library stands head and shoulders above the others: React. It is a disruptive JavaScript library for building user interfaces using one-way data flow and component-based UI development. If you plan to use a modern JavaScript library for your next user interface, you should consider React.
Main Features
- It is a component-based library for the view layer.
- It supports one-way data binding.
- It brings the functional programming paradigm to front-end development.
- It can be used to build user interfaces for the web, desktop, and mobile.
Popularity
With 8 million weekly downloads and around 63.8k dependent packages, React is by far the most popular and influential client-side JavaScript library:
With 157k stars, it is one of the most starred GitHub projects in the entire software development industry:
Installation
npm i --save react
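The sketch below shows a tiny component with one-way data flow; it assumes a toolchain that compiles JSX (for example, one generated by Create React App):

```jsx
import React, { useState } from 'react';

function Counter() {
  // State lives inside the component; the UI re-renders when it changes.
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```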
Link
Many JavaScript developers use console.log to debug JavaScript applications, especially in the browser. Debug is a much better alternative. It is a tiny utility library for debugging JavaScript applications in both the browser and Node.js. It also allows toggling the debug output for individual parts of an application module, as well as for the module as a whole.
Main Features
- Provides a decorated version of console.error.
- Assigns a distinct color to each namespace.
- Supports many standard formatters.
- The debugger is extendable.
Popularity
With 76 million weekly downloads and 36.5k dependent packages, debug is the most downloaded library on this list:
With 9.1k stars, it is also a popular JavaScript library on GitHub:
Installation
npm i --save debug
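Here is a small sketch of namespaced debug output; the app:* namespaces are made up for the example, and the output only appears when the DEBUG environment variable matches (e.g. DEBUG=app:* node index.js):

```js
const debug = require('debug');

// Each part of the application gets its own namespace.
const serverLog = debug('app:server');
const dbLog = debug('app:db');

serverLog('listening on port %d', 8000); // toggled via DEBUG=app:server
dbLog('connected to %s', 'sqlite');      // toggled via DEBUG=app:db
```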
Link
A command-line interface is a crucial feature in back-end software development. Inspired by Ruby's commander, Commander.js is a library that provides a complete command-line interface solution for server-side JavaScript.
Main Features
- Advanced command-line options.
- Fluent API.
- Automatic and custom help.
- Custom event listeners.
- Asynchronous support.
Popularity
With 46 million weekly downloads and approximately 47k dependent packages, Commander is one of the most popular Node.js libraries:
With 18.9k stars, it is also one of the most popular Node.js libraries on GitHub:
Installation
npm i --save commander
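A minimal command-line tool might look like the sketch below; it assumes a recent Commander version that exports program and supports opts(), and the greet.js file name is illustrative:

```js
const { program } = require('commander');

program
  .version('1.0.0')
  .option('-n, --name <name>', 'name to greet', 'world')
  .option('-u, --uppercase', 'print the greeting in upper case')
  .parse(process.argv);

const opts = program.opts();
const greeting = `hello, ${opts.name}`;
console.log(opts.uppercase ? greeting.toUpperCase() : greeting);
```

Running node greet.js --name JavaScript -u prints HELLO, JAVASCRIPT, and node greet.js --help prints the auto-generated help.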
Hyperlink
Request
HTTP is by far the most used application protocol in business application development and modern Web development. If you are developing a Front-end application, you will need an HTTP client. Request is the most commonly used HTTP client in the JavaScript landscape. It gives you a straightforward way to make HTTP calls, with many advanced and powerful features.
Main Features
- It supports streaming and Async/Await.
- HTTP Authentication.
- Custom HTTP headers.
- OAuth Signing.
- TLS/SSL protocol support.
Popularity
With 21 million weekly downloads and 50.6k dependent packages, request is one of the most depended-upon JavaScript libraries:
It has 24.8k GitHub stars, and it is among the most popular JavaScript projects on GitHub:
Installation
npm i --save request
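A minimal sketch of a classic callback-style call (the URL and header are arbitrary):

const request = require('request');

request('https://api.github.com/', { headers: { 'User-Agent': 'demo' } },
  function (error, response, body) {
    if (error) return console.error(error);
    console.log('status:', response.statusCode);
    console.log('body length:', body.length);
  });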
Link
Async
JavaScript is a programming language built on the Asynchronous programming paradigm. As a JavaScript developer, you will feel the need for a powerful utility library for asynchronous functionality. Although there are many libraries that support asynchronous functionality, I find Async the best of the lot. It's a powerful utility library offering many functions to work with Asynchronous JavaScript.
Main Features
- Asynchronous collection functions.
- Asynchronous control flow.
- Asynchronous utilities.
- Supports both Node.js and the Browser.
Popularity
With 31 million weekly downloads and 30.3k dependent packages, async is the most popular asynchronous utility library in JavaScript:
It also has 26.8k GitHub stars and is one of the most popular JavaScript utility libraries on GitHub:
Installation
npm i --save async
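A minimal sketch using one of its collection functions, mapLimit, which bounds the parallelism (the items and delay are arbitrary):

const async = require('async');

async.mapLimit(['a.txt', 'b.txt', 'c.txt'], 2,
  function (file, callback) {
    // stand-in for real asynchronous work; must call back exactly once
    setTimeout(function () { callback(null, file.toUpperCase()); }, 100);
  },
  function (err, results) {
    if (err) return console.error(err);
    console.log(results); // ['A.TXT', 'B.TXT', 'C.TXT']
  });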
Link
Express
If you are using JavaScript for Server-Side development, you will often need to implement an HTTP Server. Express is the most well-known and most used HTTP Server implementation in JavaScript. It is mainly used to develop Web Applications and REST APIs. It's a minimalistic, fast, and less-opinionated library. Many JavaScript Web Frameworks are built upon Express.
Main Features
- It provides middleware, routing, and templating.
- It supports content negotiation.
- Very fast and highly performant.
- It has HTTP helpers for redirection and caching.
Popularity
With 14 million weekly downloads and 46.6k dependent packages, express is one of the most popular Server-Side JavaScript libraries:
With 50.5k stars, it is the second most popular GitHub library in this list:
Installation
npm i --save express
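The canonical hello-world server looks like this (the port number is arbitrary):

const express = require('express');
const app = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Server listening on port 3000');
});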
Link
Moment
As software developers, we need to handle date and time. In JavaScript in general, and in its earlier versions in particular, the support for date and time was too limited. There are many date/time libraries in JavaScript that improve date and time handling. Moment is by far the most popular date and time library in JavaScript, and it offers great support for handling date and time.
Main Features
- Parse date and time.
- Validate date and time.
- Format date and time.
- Manipulate date and time.
Popularity
With 15 million weekly downloads and 46.7k dependent packages, moment is one of the most popular JavaScript libraries:
With 45k stars, it is the fourth most popular GitHub library in this list:
Installation
npm i --save moment
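A minimal sketch covering the four features above (the dates are arbitrary):

const moment = require('moment');

console.log(moment('2021-01-05', 'YYYY-MM-DD').year());     // parse
console.log(moment('2021-02-30', 'YYYY-MM-DD').isValid());  // validate -> false
console.log(moment().format('YYYY-MM-DD HH:mm'));           // format
console.log(moment().add(7, 'days').fromNow());             // manipulate -> "in 7 days"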
Link
Fs-extra
If you work with Back-end JavaScript, you need to handle the file system. Unfortunately, the file system functionality provided out of the box in Node.js is minimal. Fs-extra is a library that provides extra, more advanced methods to handle the file system. It is meant to be a drop-in replacement for the Node.js native file system library fs.
Main Features
- Advanced and extra file system methods.
- Prevents the EMFILE error.
- Drop-in replacement for fs.
- Makes the mkdirp, rimraf, and ncp packages redundant.
- Supports Sync, Async, and Async/Await.
Popularity
With 33 million weekly downloads and 37.9k dependent packages, fs-extra is one of the most popular Server-Side JavaScript libraries:
It is also a popular JavaScript library on GitHub:
Installation
npm i --save fs-extra
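A minimal sketch of the promise-based API (the paths are arbitrary):

const fse = require('fs-extra');

// ensureDir creates the whole directory tree if it is missing;
// copy works on files and directories alike
fse.ensureDir('/tmp/demo/nested')
  .then(() => fse.copy('/tmp/demo', '/tmp/demo-backup'))
  .then(() => console.log('done'))
  .catch(err => console.error(err));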
Introduction
Freebase (What is FreeBase?) was founded by Metaweb, later acquired by Google, and it powers Google's semantic search… when you search for a specific thing (movie, actor…) or ask something (how tall is the Eiffel Tower), an info box appears on the right: that info comes from Freebase!
Now let's see what Freebase knows… well, type something in the input below… and when I say something, I mean anything from Eggplant to F12berlinetta! A list will appear, and if you hover over an item you will get a short description…
[purehtml]
<input class="fb-suggest" />
<script>(function($) {
var css = jQuery("<link>");
css.attr({rel: "stylesheet", type: "text/css", href: ""});
$("head").append(css);
$.getScript("", function() { $(".fb-suggest").suggest(); });
})( jQuery );</script>
[/purehtml]
What you just used is an autocomplete widget (Freebase suggest is the official name) to explore the fantastic world of Freebase.
Freebase is organized in types, something similar to classes or categories. In turn, domains group together several types belonging to the same macro-category (i.e. the Film domain has the types Actor, Director, etc.). Finally, there are the entities or instances of types, which are called topics.
An example? The music domain contains these types. Here are some topics of the type /music/artist.
A note about the notation /music/artist: it is a short form in which the Freebase namespace prefix (e.g. rdf.freebase.com/ns) is dropped, and the rest is in the form /<domain>/<type>.
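To make this concrete, a minimal MQL query (MQL is introduced later in this post) asking for a few topics of that type would look like this (the limit is arbitrary):

[{
  "type": "/music/artist",
  "name": null,
  "limit": 3
}]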
Topics may have (and generally do have) more than one type: forget about Object Oriented Programming classes, where inheritance holds and an entity is an instance of one and only one class and its superclasses! In OWL/RDF an instance has as many classes as it needs: this gives more freedom in defining entities! All of you know George Clooney as an actor, but this would not be an exhaustive description: see George Clooney's types (this is a JSON response to an MQL query, I'll tell you about MQL later!).
<<Yes… cool… but what can I do with it?>> - you may ask! The first advantage is in search! You cannot ask Wikipedia for all Led Zeppelin's albums, but you can with Freebase and, if you need it, you can ask for all the formats in which all Led Zeppelin's albums have been released, or you may be interested in 32 actors born in the 60s with 2 films of each actor.
You can find a lot of examples, guides and tutorials on Freebase itself about these powerful queries, but this post is about using semantic data in your website, not about creating a smart search engine! For example, what about using the /people/person type for your users' profiles: look how Freebase describes a person. Then you should use Freebase topics instead of raw text for user info. For myself, the property /people/person/places_lived would have these topics as objects:
rdf.freebase.com/ns/m/09bzvz
rdf.freebase.com/ns/m/096g3
rdf.freebase.com/ns/m/09btk
This technique can be used on almost everything, obviously you can define your own property like ‘Cars owned’ or ‘Music I like’ and then assign cars and musical groups.
Please notice the way Freebase represents topics: /m/<id>. It’s called Machine ID, it’s not the only identifier for a topic, but it’s the preferred one for storing Freebase topics on your own graph store.
Behind the scenes
At this point you should be interested in what's behind it all: everything, from domains to types, is based on a schema. A schema describes how data is structured; it states that a type belongs to a domain and defines the properties a type has: in practice it is what the Semantic Web would call an Ontology. You've already seen the Person schema, a type belonging to the People domain. /people/person has several properties like Place of Birth, Country of Nationality, Profession etc., and every property has an expected type (respectively /location/location, /location/country, /people/profession): this means that an instance of /people/person should be connected, through the property /people/person/place_of_birth, to a topic of type /location/location (should is mandatory here because a graph does not check the correctness of the expected type, but don't worry about that for the moment!).
Included types
A type may include other types (included types); for example, all types (except mediators) include the type /common/topic, which is a base type providing description, name, url etc. This somehow behaves like an inheritance system: the type Politician includes the type Person, and the inclusion holds for deeper levels.
Mediator types
Some types are marked as mediator; they are used to better describe a property, and their lonely existence is useless! For example, the property /people/person/education expects a mediator type. How would you describe the education of a person with one and only one Freebase topic? You could use only the institution (i.e. University of Bologna), but it would be an incomplete description! Here comes the mediator type /education/education, which features institution, field and period of study, etc. You create an instance of it, and this is the object of the property /people/person/education! Anyway, a topic containing the information {student: Luca Faggianelli, institution: University of Bologna, field: Electronics} is useless alone, so the mediator type doesn't include the /common/topic type, and therefore it doesn't have a name, description etc!
You can easily browse behind-the-scenes information on dev.freebase.com. Freebase exposes an API and uses the Metaweb Query Language (MQL, a proprietary query language) to access data (in read and write) very easily. Some links you clicked were calls to Freebase API embedding MQL queries. To experiment with MQL, please visit the Query Editor and read the manual.
Freebase also provide a hosted development environment, Acre, where you can build your own app. Although it may be useful to get started, I don’t think it’s good for building a web app.
Extending with semantics
Whatever you are going to create (a recipes web app using Freebase topics instead of classical tags for the ingredients, a music/movies recommendation system, or other exotic stuff), I would like to give you some advice!
First of all, there's no need to throw away an existing website to replace it with graph-driven stuff: you can easily extend it! Also, in my opinion, tools, standards and technologies are not mature enough to found an entire app on the Semantic Web! Furthermore, the majority of web frameworks, libraries and systems rely on relational DBs, so replacing them with graph stores would not be so trivial! This nice speech is to introduce the hybrid semantic web application, which is based on classical web technologies in parallel with semantic web magic.
Let's go back to the example of /people/person as a user profile: you can keep the important user info in the DB (password, email, etc.), then create an instance of Person on the graph (/people/person/123) and finally make a connection. Take a look at the graph (made with the Dracula Graph Library, you can drag nodes!):
[purehtml]<div id="graph1"></div><script>
var g = new Graph();
g.addEdge("/people/person#123", "/m/0f_3j7", {label: "/people/person/place_of_birth", directed: true});
g.addEdge("/user#456", "/people/person#123", {label: "isPerson", directed: true});
var layouter = new Graph.Layout.Spring(g); layouter.layout();
var renderer = new Graph.Renderer.Raphael("graph1", g, 500, 200);
renderer.draw();
</script>[/purehtml]
The connection between the DB and the graph is made with the triple
{ ‘/user#user_table_ID’, ‘isPerson’, ‘/people/person#graph_inst_ID’ }
user_table_ID is the ID of a user in the DB, while graph_inst_ID is the ID of the instance of /people/person.
Useful code
You can use the Freebase REST API from any programming language by simply issuing an HTTP request. I hadn't found any really useful library for either Ruby or JS, so I wrote some useful functions.
JavaScript
The only dependency is jQuery, for the AJAX call.
var Freebase = {
  api: '',
  key: '?key=' + 'your_key_here',

  // Main image (icon) of a topic
  img: function(mid) {
    return this.api + '/image' + mid + this.key;
  },

  // MQL query with callback on success
  mql: function(query, callback) {
    $.get(this.api + '/mqlread', { query: JSON.stringify(query) }, callback, 'json');
  }
}
Most of the time I need the entire schema of a Freebase type (try in editor):
var type = '/music/artist';
var query = [{
  "schema": type,
  "type": "/type/property",
  "name": null,
  "id": null,
  "expected_type": {
    "/freebase/type_hints/mediator": null,
    "name": null,
    "id": null
  },
  "/freebase/property_hints/disambiguator": null,
  "master_property": null,
  "reverse_property": null,
  "unique": null
}];

Freebase.mql(query, function(schema) {});
In your graph you will store topics with their MID (/m/1b123) and a type such as /music/artist, so when you visualize them you must retrieve their real names. The variable unknown accepts both (try in editor):
var unknown = ['/m/0f_3j7', '/music/artist'],
    query = [{
      "mid": null,
      "name": null,
      "id": null,
      "type": {limit: 1, id: null},
      "topics:mid|=": unknown
    }];

Freebase.mql(query, function(names) {});
When you are going to fill Freebase properties, you need the Freebase suggest to find the MIDs: it is a powerful widget, but some tweaks are still needed. First of all, when you find a topic and then click on it, the suggest fills the input element with the name, but you need the MID, so register a fb-select event and store the MID in the input as jQuery data, then retrieve it when you need it.
var NS = 'your://namespace.com';

jquery_text_input_element.suggest()
  .bind("fb-select", function(e, data) {
    $(this).val(data.name).data({mid: NS + data.mid});
  });
In many cases I need to search among topics of a certain type or domain (for place_of_birth, you would like to search for topics of type /location/location). You may also need to reconfigure the filter after creating the widget: here's a hack to do that.
function set_suggest_filter(suggest_elem, filters) {
  // filters = {type: '/music/artist'}
  var suggest = suggest_elem.data('suggest'),
      default_filters = { type: '', domain: '' };

  filters = $.extend(default_filters, filters);

  $.each(filters, function(key, value) {
    if (!value) {
      delete(suggest.options.ac_param[key]);
    } else {
      suggest.options.ac_param[key] = value;
    }
  });
}
Ruby
You can port the above code to any language; for example, I use this in a Ruby on Rails app, in the /lib folder.
module Freebase
  require 'rest-client'

  NS      = ''
  API_URL = ''
  API_KEY = 'your_key_here'

  RestClient.proxy = ENV['http_proxy']
  MQLClient = RestClient::Resource.new(API_URL)

  def Freebase.mql(q)
    response = MQLClient['/mqlread'].get({params: { query: q.to_json }})
    return false if response.code != 200 # If errors
    return JSON.parse response           # Otherwise
  end

  def Freebase.get_type_schema(type)
    _t = type.gsub(NS, '') # Strip the namespace if any
    q = [{
      schema: _t,
      type: "/type/property",
      name: nil,
      id: nil,
      expected_type: {
        :'/freebase/type_hints/mediator' => nil,
        name: nil,
        id: nil
      },
      :"/freebase/property_hints/disambiguator" => nil,
      master_property: nil,
      reverse_property: nil,
      unique: nil
    }]
    return Freebase.mql(q)
  end
end
In Ruby on Rails I generally use hooks to combine DB and graph information for a model.
class Entity < ActiveRecord::Base
  attr_accessible :name, :description
  attr_accessor :details

  # When we retrieve an Entity, only DB information is fetched, so use
  # the 'after_find' hook to search for info in the graph
  after_find :get_details

  def get_details
    # Note that the Entity is identified on the graph with the same DB ID
    # using 'self.id'. Check below how Entity details are structured
    query = "
      select ?type ?prop ?val ?pname
      where {
        v:entity.#{self.id} v:hasDetails ?det.
        ?det ?prop ?val.
        ?det a ?type.
        optional { ?prop rdfs:label ?pname . }
        filter(?prop != rdf:type)
      }"

    response = Sparql.query(query)

    if response
      details = {}
      response['rows'].each do |r|
        # Set type: /food/beer => {}
        _t = r[0]['value']
        details[_t] = {} if !details[_t]
        _t = details[_t]

        # Add properties
        _p = r[1]['value']
        _v = r[2]
        _t[_p] = {name: r[3]['value'], values: []} if !_t[_p]
        _t[_p][:values] << [_v['type'], _v['value']]
      end
    end

    # Here you have Freebase-related info!
    self.details = details
  end
end
The table shows the SQL table where the Entity models are stored. Name and description are the attributes of the model.
This example app allows the user to create anything that is modeled in Freebase, for example a beer! Attributes common to all entities (like those from /common/topic) are stored in the DB, while the entity-specific info is on the graph. The v:entityDetails.451cf43 node contains detailed info for v:entity.123 (note that the ID used in the graph is the same used in the DB, to obtain a bridge between the two!) and the property rdf:type tells us that it is a beer; then we find Freebase properties like /food/beer/beer_style from the /food/beer schema. If you want to add more details (that is, another Freebase type) just add another v:entityDetails.<random_id> to v:entity.123 and you're done!
[purehtml]<div id="graph2"></div><script>
var g2 = new Graph();
g2.addEdge("v:entity.123", "v:entityDetails.451cf43", {label: "v:hasDetails", directed: true});
g2.addEdge("v:entityDetails.451cf43", "fb:food/beer", {label: "rdf:type", directed: true});
g2.addEdge("v:entityDetails.451cf43", "fb:/m/02hv1lh", {label: "fb:food/beer/beer_style", directed: true});
var layouter = new Graph.Layout.Spring(g2); layouter.layout();
var renderer = new Graph.Renderer.Raphael("graph2", g2, 500, 200);
renderer.draw();
</script>[/purehtml]
Please tell me your opinions and suggestions!
One thought on “Freebase: building a Semantic Web app”
Kairat
August 21, 2013 — 09:20
You got an error in your query, you forgot the braces: { query: [ { "*": [ {} ], guid: null, limit: 5, name: null, profession: author, type: /people/person } ] }; now the code returned is code: /api/status/ok.
fs.multifs
A MultiFS is a filesystem composed of a sequence of other filesystems, where the directory structure of each filesystem is overlaid over the previous filesystem. When you attempt to access a file from the MultiFS it will try each 'child' FS in order, until it either finds a path that exists or raises a ResourceNotFoundError. For example, suppose we want a filesystem that looks for files in templates if they don't exist in theme. We can do this with the following code:
from fs.osfs import OSFS from fs.multifs import MultiFS themed_template_fs = MultiFS() themed_template_fs.addfs('templates', OSFS('templates')) themed_template_fs.addfs('theme', OSFS('theme'))
Now we have a themed_template_fs FS object that presents a single view of both directories:
|-- snippets | |-- panel.html | |-- widget.html | `-- extra.html |-- index.html |-- profile.html |-- base.html `-- theme.html
A MultiFS is generally read-only, and any operation that may modify data (including opening files for writing) will fail. However, you can set a writeable fs with the setwritefs method – which does not have to be one of the FS objects set with addfs.
The reason that only one FS object is ever considered for write access is that otherwise it would be ambiguous as to which filesystem you would want to modify. If you need to be able to modify more than one FS in the MultiFS, you can always access them directly.
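For example, here is a sketch of designating a separate writeable layer (the 'overrides' directory name is hypothetical):

from fs.osfs import OSFS
from fs.multifs import MultiFS

themed_template_fs = MultiFS()
themed_template_fs.addfs('templates', OSFS('templates'))
themed_template_fs.addfs('theme', OSFS('theme'))

# Reads still search theme first, then templates; writes now go to 'overrides'
themed_template_fs.setwritefs(OSFS('overrides'))

with themed_template_fs.open('custom.html', 'w') as f:
    f.write(u'<h1>Hello</h1>')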
clearwritefs(*args, **kwargs)
Clears the writeable filesystem (operations that modify the multifs will fail)
setwritefs(*args, **kwargs)
Sets the filesystem to use when write access is required. Without a writeable FS, any operations that could modify data (including opening files for writing / appending) will fail. | http://pyfilesystem.readthedocs.io/en/latest/multifs.html | CC-MAIN-2017-47 | refinedweb | 268 | 53.51 |
There are various ways to pass data from a Controller to a View. I'm going to
discuss how Controllers interact with Views and specifically cover ways you can
pass data from a Controller to a View to render a response back to a client. So,
let's get started. ViewBag
ViewBag is a very well-known way to pass data from the Controller to the View, and
even from View to View. ViewBag uses the dynamic feature that was added in C# 4.0.
We can say ViewBag = ViewData + a dynamic wrapper around the ViewData dictionary.
Let's see how it is used.
Between Controller and View
In the above image, you can see how data flows from the "Controller" to the
"View", and how it looks in the browser.
Between View to View
In the above image, you see how data is initialized on the "View" page itself
using 'ViewBag.Title = "Index"' and then how it is getting rendered using '@ViewBag.Title'.
What is "Title"? It is nothing more than a key, which has very limited
availability and can be used on the same page only. So, the key naming is up to
you, use any name which makes you happy.
Look at one more case, where I will take advantage of the "Model".
In the above case, we have a "Model" named "Friend" that has three
properties: "Id", "Name" and "Address". In the "Controller", we have an object of
the "Friend" class named "frnd"; using the dot (.) operator, its properties
are assigned and then attached to the corresponding properties of the ViewBag.
Look at one more example, in which a list of students is passed using ViewBag.
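Since the screenshot is not reproduced here, a hedged sketch of what such a
controller action and view could look like (the names are illustrative):

public ActionResult Index()
{
    ViewBag.Students = new List<string> { "Ram", "Shyam", "Mohan" };
    return View();
}

And on the "View" (ViewBag is dynamic, so no cast is needed):

<ul>
    @foreach (var std in ViewBag.Students)
    {
        <li>@std</li>
    }
</ul>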
So, in this way we can pass the list of students. I hope this is clear to you.
ViewData
ViewBag and ViewData serve the same purpose in allowing developers to pass data
from controllers to views. When you put objects in either one, those objects
become accessible in the view. Let's look at one example:
In the above image, everything is normal except for the foreach loop.
ViewData is typed as a dictionary containing "objects", so we need to cast
ViewData["Students"] to a List<string> or an IEnumerable<string> in order to use
the foreach statement on it, as in:
@foreach (var std in (List<string>)ViewData["Students"])
OR
@foreach (var std in (IEnumerable<string>)ViewData["Students"])
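For completeness, the controller side for this example might look like
(a hypothetical action):

public ActionResult Index()
{
    ViewData["Students"] = new List<string> { "Ram", "Shyam", "Mohan" };
    return View();
}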
Now look at one more beauty of MVC: you can put data into the ViewBag and
access it from ViewData, or put data in the ViewData and access it from the
ViewBag; here you have all the freedom.
ViewData to ViewBag
ViewBag to ViewData
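A hedged sketch of both directions (remember, it is the same dictionary
underneath):

// ViewData to ViewBag: set in the controller...
ViewData["Message"] = "Hello";
// ...read in the view as: @ViewBag.Message

// ViewBag to ViewData: set in the controller...
ViewBag.Message = "Hello";
// ...read in the view as: @ViewData["Message"]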
So these two (ViewBag and ViewData) seem to work almost exactly the same.
Then, what's the difference? The difference is only in how you access the data.
ViewBag is actually just a wrapper around the ViewData object, and its whole
purpose is to let you access the data dynamically instead of using magic
string keys and casts, as you can see in the above examples. Some people
prefer one style over the other. You can pick whichever makes you happy. In
fact, because they're the same data just with two different ways of accessing
it, you can use them interchangeably, like ViewData to ViewBag or ViewBag to
ViewData. It is not recommended, however, that you actually use them
interchangeably, since it will confuse others.
Now, so far we have looked into ViewBag and ViewData, which are really very
useful, but we can also pass the data using a model, and this will give you
full IntelliSense features.
ViewModel
Using ViewModel we can also pass the data from the Controller to View; let's
look at the image.
In the above image, we have multiple people in a list being passed to a
View, simple. I will add one more thing here: you are not going to get
IntelliSense support or a strongly typed view page by default; to get it, do it
this way. Just add a reference to the model by using the IEnumerable interface
and you are done.
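A hedged sketch using the "Friend" model from earlier (the data values are
made up):

public ActionResult Index()
{
    var friends = new List<Friend>
    {
        new Friend { Id = 1, Name = "Deepak", Address = "Delhi" },
        new Friend { Id = 2, Name = "Rahul", Address = "Pune" }
    };
    return View(friends);
}

And on the strongly typed "View":

@model IEnumerable<Friend>
@foreach (var f in Model)
{
    <p>@f.Id - @f.Name (@f.Address)</p>
}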
Please read this blog also:
TempData
TempData is meant to be a very short-lived instance, and you should use it only
during the current and the subsequent requests. Since TempData works this
way, you need to know for sure what the next request will be, and redirecting to
another view is the only time you can guarantee this. You can use TempData to
pass error messages or something similar.
Example 1: Using TempData like ViewData and ViewBag
public class FriendController : Controller
{
    //
    // GET: /Friend/
    public ActionResult Index()
    {
        ViewData["VDFriend"] = "Deepak K Gupta";
        ViewBag.VBFriend = "Deepak K Gupta";
        TempData["TDFriend"] = "Deepak K Gupta";
        return View();
    }
}
And on "View":
<p>Using ViewData: @ViewData["VDFriend"]</p>
<p>Using ViewBag: @ViewBag.VBFriend</p>
<p>Using TempData: @TempData["TDFriend"]</p>
This is a simple example, but we are not using the real advantage of TempData,
so let's look at one more example.
Example 2: Using TempData to get data after a redirect
public class FriendController : Controller
{
    //
    // GET: /Friend/
    public ActionResult Index()
    {
        ViewData["VDFriend"] = "Deepak K Gupta";
        ViewBag.VBFriend = "Deepak K Gupta";
        TempData["TDFriend"] = "Deepak K Gupta";
        return new RedirectResult(@"~\Friend\AnotherPage\");
    }

    public ActionResult AnotherPage()
    {
        return View();
    }
}
And on "View":
<p>Using ViewData: @ViewData["VDFriend"]</p>
<p>Using ViewBag: @ViewBag.VBFriend</p>
<p>Using TempData: @TempData["TDFriend"]</p>
As in the above "FriendController", I'm redirecting the view; in other words, the
Index() action redirects to the AnotherPage() view instantly. Now, on the other
view, after the redirect, we won't be able to get data from ViewData or ViewBag,
but TempData will still work here.
Please read this blog also:
Convert from
(+ (f 0 (g 1)) 2)
to
(g' (lambda (r0) (f' (lambda (r1) (+ r1 2)) 0 r0)) 1)
where the internal data structure in Haskell is like
data AST = Node [AST] | Leaf Value
data Value = IntVal Int | Plus | Atom String | Lambda [String]
Implementation and description
import qualified Control.Monad.State as S

data AST = Node [AST] | Leaf Value

instance Show AST where
  show (Node xs) = "(" ++ unwords (map show xs) ++ ")"
  show (Leaf v)  = show v

data Value = IntVal Int | Plus | Atom String | Lambda [String]

instance Show Value where
  show (IntVal i)     = show i
  show Plus           = "+"
  show (Atom name)    = name
  show (Lambda names) = "lambda (" ++ unwords names ++ ")"

-- (+ (f 0 (g 1)) 2)
-- (g' (lambda (r0) (f' (lambda (r1) (+ r1 2)) 0 r0)) 1)
program :: AST
program = Node [Leaf Plus, Node [Leaf (Atom "f"), Leaf (IntVal 0), Node [Leaf (Atom "g"), Leaf (IntVal 1)]], Leaf (IntVal 2)]

main = do
  print program
  print $ cps program

cps :: AST -> AST
cps ast =
  let (newAst, modifiers) = S.runState (cps' ast) []
  in foldl (flip ($)) newAst modifiers

cps' :: AST -> S.State [AST -> AST] AST
cps' (Node (Leaf (Atom f) : xs)) = do
  xs' <- mapM cps' xs
  n <- length `fmap` S.get
  let name = 'r' : show n
  append $ \root -> Node $ (Leaf . Atom $ f ++ "'") : Node [Leaf (Lambda [name]), root] : xs'
  return $ Leaf (Atom name)
cps' (Node xs) = Node `fmap` mapM cps' xs
cps' c@(Leaf _) = return c

append x = S.modify (x :)
This converts correctly.
I used the State monad to modify the given tree. The function cps runs the state, and the actual worker cps' traverses the given AST subtrees recursively.
(+ (f 0 (g 1)) 2)
   ^^^^^^^^^^^
When cps' sees this subtree, it notices that the first item of the list is a user-defined function and that this is not a tail call, so cps' wants to replace that part with a new variable (say r), and enclose the whole tree with the new function f' and the arguments.

(f' (lambda (r) (+ r 2)) 0 (g 1))
^^^ ^^^^^^^^^^^^^^^^^^^^
It's easy to change subtree but it's not trivial to change outside the subtree. But fortunately we already know that we only have to enclose something around the whole tree, so you can just save a function in state.
After
cps' process is done, you apply all functions that the state has accumulatively to enclose trees. That will complete the job. | http://ujihisa.blogspot.com/2011/12/continuous-passing-conversion-in.html | CC-MAIN-2017-22 | refinedweb | 380 | 70.16 |
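If you save the listing above as cps.hs, running it should print the original tree followed by its CPS-converted form, matching the example at the top of this post:

$ runghc cps.hs
(+ (f 0 (g 1)) 2)
(g' (lambda (r0) (f' (lambda (r1) (+ r1 2)) 0 r0)) 1)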
MessageQueue Programming Architecture
The MessageQueue component uses these portions of the Microsoft .NET Framework namespaces:
When you add an instance of the MessageQueue component to your Visual Studio project, the system automatically creates the references and import statements you need to access these namespaces and classes. If you are creating your MessageQueue components in code in your Visual Studio project, you need to add a reference to System.Messaging.dll and add a statement to your code importing (in Visual Basic) or using (in C#) System.Messaging. For instructions on adding and removing project references, see How to: Add or Remove References in Visual Studio (Visual Basic).
If you are developing your application using the .NET Framework, you need to add a reference to System.Messaging.dll when you compile. You also need to add a statement to your code importing (in Visual Basic) or using (in C#) System.Messaging. For information on including references during compilation, see /reference (Visual Basic) or /reference (Import Metadata) (C# Compiler Options).
You can use the following methods to interact with an instance of the MessageQueue component:
Use the Create method to create a new message queue using the path you specify, and use the Delete method to delete an existing queue.
Use the Exists method to see whether a particular message queue exists.
Use the GetPublicQueues method to locate message queues in your Message Queuing network.
Use the Peek or BeginPeek method to look at messages in a particular queue without removing the messages from the queue.
Use the Receive and BeginReceive methods to retrieve the message at the front of the specified queue and remove it from the queue.
Use the Send method to send a message to the specified queue.
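Putting a few of these together, a minimal send/receive round trip might look like this in C# (the queue path .\Private$\orders is hypothetical):

using System;
using System.Messaging;

class Demo
{
    static void Main()
    {
        string path = @".\Private$\orders";

        // Create the queue on first use
        if (!MessageQueue.Exists(path))
            MessageQueue.Create(path);

        using (MessageQueue queue = new MessageQueue(path))
        {
            // Send a message with a body and a label
            queue.Send("Hello from Message Queuing", "Greeting");

            // Receive it back; the formatter tells the component how to
            // deserialize the message body
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            Message msg = queue.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine(msg.Body);
        }
    }
}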
You can view details about your Message Queuing installation by using Server Explorer to look at the messaging server. For more information, see How to: Find Queues in Server Explorer. You can also get detailed information about the configuration of your message queue network by using Message Queuing Explorer, which is installed automatically with Message Queuing. | http://msdn.microsoft.com/en-us/library/4c3f1tzb(v=vs.90).aspx | CC-MAIN-2014-10 | refinedweb | 345 | 53.41 |
Java is an object-oriented programming language that also retains all the features of a high-level programming language. An object-oriented programming language is class-based, where a class represents a real-world entity.
The class itself does not do anything, but it is used to create objects that have properties and functions (called methods), and an object behaves like the real-world entity that it represents. The object derives its set of properties and methods from the class.
For example,
A fruit is a class that represents a fruit, a real-world entity. You can create objects from the class Fruit, such as a mango, which has properties like color, taste, and shape, and methods like hang_from_tree( ) and fall_from_tree( ).
A person is a class that represents the real-world entity person. A driver or a teacher is an object of the class Person; a driver has properties like legs, arms, dress_code, and salary, and methods like driving( ), starting_vehicle( ) and stopping_vehicle( ).
All these properties and methods correspond to the real-world entities, such as the mango or the driver, whose properties and functions we already know.
Structure of a Java Program
In the previous section, we learned about classes and objects. Every Java program has at least one class; if the program has only one class, then it is a public class.
The Java language puts a lot of restrictions on classes, such as class visibility. Class visibility decides which code can access a particular Java class from outside the class or from another class.
A public class is visible to other classes, but private classes are strictly restricted from other classes and functions outside of the class. We will discuss more on visibility in future articles.
Java File Name
The source code is saved in a file with a specific extension. In the C/C++ language, the extension is .c or .cpp. Java has certain rules for the file name.
In a Java program, there is only one public class. If the public class name is car, then the file name should be car.java. There is no restriction on what name you choose for your class.
Import command
The Java program starts with import statements located at the top of the program source code. The import statement has the same task as the #include statement in the C language. We import packages that hold some predefined classes.
When a Java program runs, it looks in two places for classes: first, in the current user-defined package, and second, inside the imported packages.
Let us understand what packages do in a Java program. Consider the following example.
String name = "Jack";
String is a class, and its location is the java.lang package.
The above command is the same as
java.lang.String name = "jack";
By using the import statement you can avoid the lengthy prefixes in the program.
import java.lang.*;
The asterisk (*) represents all classes inside the java.lang package, but you can also specify a specific class if you know about it.
The Main Function
Similar to the C/C++ language, Java also has a main function. Since Java is an object-oriented programming language, it is hard to decide which object to create first and which method to invoke at the beginning. To solve this problem, the main function in Java is a static function that does not belong to any particular object and runs even before any objects are created.
main resides in the public class of the program, which happens to be the only public class in the Java program. It is time to write your first program.
Your First Program in Java – Hello World!
It is a convention to write a "Hello World!" program when you start learning a programming language. The source code for the "Hello World" program is given below; it contains all the elements we discussed above.
/*
 * File name   : helloworld.java
 * Date Created: July 12, 2018
 * Author      : Notesformsc.org
 * File Size   : NA
 * Licence     : Free
 */
import java.lang.*;

public class helloworld {
    public static void main(String[] args) {
        // prints the output
        System.out.println("Hello World!");
    }
}
Output
Hello World!
Comments in Java
Comments are very useful in a Java program to bring clarity to the code, especially if the program has a huge amount of code.
There are two types of comments available in Java.
- Single line comments ( // ) – It is used for writing short comments that span only a single line.
- Multiline comments ( /* … */) – It is usually for writing long-form comments at the beginning of the program providing all sorts of information about the author and the program.
There are no restrictions on commenting as far as position or placement is concerned, but common practice is to put a comment above or below the code block.
Also, you cannot nest or mix two types of comments. | https://notesformsc.org/java-program-structure/ | CC-MAIN-2021-04 | refinedweb | 784 | 65.12 |
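A tiny illustration of both comment types:

// This is a single line comment

/*
 * This is a multiline comment:
 * it can span as many lines as needed.
 */
public class CommentDemo {
    public static void main(String[] args) {
        System.out.println("Comments are ignored by the compiler"); // inline comment
    }
}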
tmpfs man page
tmpfs — a virtual memory filesystem
Description
Mount options
The tmpfs filesystem supports the following mount options:
- size=bytes
Specify an upper limit on the size of the filesystem. The size is given in bytes, and rounded up to entire pages.
The size may have a k, m, or g suffix for Ki, Mi, Gi (binary kilo (kibi), binary mega (mebi) and binary giga (gibi)).
The size may also have a % suffix to limit this instance to a percentage of physical RAM.
The default, when neither size nor nr_blocks is specified, is size=50%.
- nr_blocks=blocks
The same as size, but in blocks of PAGE_CACHE_SIZE.
Blocks may be specified with k, m, or g suffixes like size, but not a % suffix.
- nr_inodes=inodes
The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is smaller.
Inodes may be specified with k, m, or g suffixes like size, but not a % suffix.
- mode=mode
Set initial permissions of the root directory.
- gid=gid (since Linux 2.5.7)
Set the initial group ID of the root directory.
- uid=uid (since Linux 2.5.7)
Set the initial user ID of the root directory.
- huge=huge_option (since Linux 4.7.0)
Set the huge table memory allocation policy for all files in this instance (if CONFIG_TRANSPARENT_HUGE_PAGECACHE is enabled).
The huge_option value is one of the following:
- never
Do not allocate huge pages. This is the default.
- always
Attempt to allocate huge pages every time a new page is needed.
- within_size
Only allocate huge page if it will be fully within i_size. Also respect fadvise(2)/madvise(2) hints
- advise
Only allocate huge pages if requested with fadvise(2)/madvise(2).
- deny
For use in emergencies, to force the huge option off from all mounts.
- force
Force the huge option on for all mounts; useful for testing.
- mpol=mpol_option (since Linux 2.6.15)
Set the NUMA memory allocation policy for all files in this instance (if CONFIG_NUMA is enabled).
The mpol_option value is one of the following:
- default
Use the process allocation policy (see set_mempolicy(2)).
- prefer:node
Preferably allocate memory from the given node.
- bind:nodelist
Allocate memory only from nodes in nodelist.
- interleave
Allocate from each node in turn.
- interleave:nodelist
Allocate from each node of nodelist in turn.
- local
Preferably allocate memory from the local node.
In the above, nodelist is a comma-separated list of decimal numbers and ranges that specify NUMA nodes. A range is a pair of hyphen-separated decimal numbers, the smallest and largest node numbers in the range. For example, mpol=bind:0-3,5,7,9-15.
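For illustration, a typical invocation combining several of these options might look like the following (the mount point and values are arbitrary):

# mount -t tmpfs -o size=512m,nr_inodes=10k,mode=1777 tmpfs /mnt/mytmpfs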
Versions
The tmpfs facility was added in Linux 2.4, as a successor to the older ramfs facility, which did not provide limit checking or allow for the use of swap space.
Notes
See Also
df(1), du(1), memfd_create(2), mmap(2), set_mempolicy(2), shm_open(3), mount(8)
The kernel source files Documentation/filesystems/tmpfs.txt and Documentation/admin-guide/mm/transhuge.rst.
Colophon
This page is part of release 5.01 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
cgroups(7), fallocate(2), filesystems(5), ioctl_userfaultfd(2), keyrings(7), lseek(2), madvise(2), memfd_create(2), mmap(2), proc(5), remap_file_pages(2), shm_open(3), shm_overview(7), swapon(2), sysfs(5), user_namespaces(7). | https://www.mankier.com/5/tmpfs | CC-MAIN-2019-30 | refinedweb | 591 | 59.5 |