Execution of Multiple Threads in Java
Execution of Multiple Threads in Java: Can anyone tell me how multiple threads get executed in Java?? I mean to say that after having called the start method, the run is also invoked, right?? Now in my main method if I want
Threads
Threads vs Processes
Multiple processes / tasks run as separate programs...
Threads - Basic Idea
Execute more than one piece of code at the "same... time slicing. Rotates the CPU among threads / processes. Gives...
creating multiple threads - Java Beginners
creating multiple threads demonstrate a java program using multiple thread to create stack and perform both push and pop operation synchronously. Hi friend,
Use the following code:
import java.util.*;
class
Creating multiple Threads
In this section you will learn how to create multiple threads in Java. Thread... multiple threads to run concurrently. Each and every thread has the priority... to override the run() method. Example: code for creating multiple threads.
public
Synchronized Threads
being corrupted by multiple
threads by a keyword synchronized to synchronize them... methods, multiple
threads can still access the class's non-synchronized methods...
Synchronized Threads
Explain about threads: how to start a program in threads?
Explain about threads: how to start a program in threads? import...; Learn Threads
Thread is a path of execution of a program... more than one thread. Every program has at least one thread. Threads are used
Synchronized Threads
being corrupted by multiple
threads by a keyword synchronized to synchronize them...-synchronized methods, multiple
threads can still access the class's non...
Synchronized Threads
threads
threads what are threads? what is the use in programming
interfaces,exceptions,threads
with multiple threads is referred to as a multi-threaded process.
In Java Programming... THE COMPLETE CONCEPTS OF INTERFACES, EXCEPTIONS, THREADS
Interface... class.
In java, multiple inheritance is achieved by using the interface
threads
Threads
Java - Threads in Java
or
multiprogramming is delivered through the running of multiple threads
concurrently...
Java - Threads in Java
Thread is a feature of most languages, including Java. Threads
Running threads in servlet only once - JSP-Servlet
Running threads in servlet only once Hi All,
I am developing a project with multiple threads which will run to check database continuously. With those two separate threads I can check with database and do some other
Creation of Multiple Threads
Creation of Multiple Threads
Like the creation of a single thread, you can also create more...
In this program, two threads are created along with the
"main" thread
Reading and writing multiple files
Reading and writing multiple files: how can I read and write, say, two different files at the same time using threads?
Java threads
Java threads What are the two basic ways in which classes that can be run as threads may be defined
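The two standard answers are extending `java.lang.Thread` and implementing `java.lang.Runnable`. A minimal sketch of both (the class names `Greeter`, `GreeterTask`, and `ThreadDemo` are made up for illustration):

```java
// Two basic ways to define a class that can be run as a thread.

// 1) Subclass Thread and override run().
class Greeter extends Thread {
    public void run() {
        System.out.println("hello from a Thread subclass");
    }
}

// 2) Implement Runnable and hand an instance to a Thread.
class GreeterTask implements Runnable {
    public void run() {
        System.out.println("hello from a Runnable");
    }
}

public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new Greeter();
        Thread b = new Thread(new GreeterTask());
        a.start();   // start() schedules run() on a new thread
        b.start();
        a.join();    // wait for both threads to finish
        b.join();
    }
}
```

Implementing `Runnable` is usually preferred, since the class stays free to extend something else.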
Synchronization on threads
Threads in realtime projects
Threads in realtime projects Explain where we use threads in realtime projects with example
Coding for life cycle in threads
Coding for life cycle in threads program for life cycle in threads
Life Cycle of Threads
states implementing Multiple-Threads are:
As we have seen different...;
When you are programming with threads, understanding...;This method returns the number of active threads in a particular thread group and all
java,j2ee (ranja g, April 4, 2011 at 1:06 PM)
This site is very useful for all those who really want to know the basic and advanced features of Java. I recommend this site to everyone who wants to learn Java.
creating multythreads (bhaskar, July 7, 2011 at 2:21 PM)
Sir, I have observed here that the outputs of both (using Thread or Runnable) are not the same in line 5 of stdout.
Post your Comment
Source: http://roseindia.net/discussion/22972-Creation-of-Multiple-Threads.html
Hi !
I've got a problem with SQLObject in WW4Py (webkit 0.8.1).
I have a simple script working with SQLObject as a standalone.
As soon as I try to plug it into Webware I get an ImportError in SQLObject.
Here it is :
...
File "c:\www\site\context\Dossier.py", line 1, in ?
from sqlobject import *
File "c:\python23\lib\ihooks.py", line 398, in import_module
q, tail = self.find_head_package(parent, str(name))
File "c:\python23\lib\ihooks.py", line 439, in find_head_package
q = self.import_it(head, qname, parent)
File "c:\python23\lib\ihooks.py", line 489, in import_it
m = self.loader.load_module(fqname, stuff)
File "WebKit\ImportSpy.py", line 30, in load_module
File "c:\python23\lib\ihooks.py", line 274, in load_module
m = self.hooks.load_package(name, filename, file)
File "c:\python23\lib\ihooks.py", line 174, in load_package
return imp.load_module(name, file, filename, ("", "", PKG_DIRECTORY))
File "C:\Python23\lib\site-packages\sqlobject\__init__.py", line 7, in ?
from dbconnection import ConnectionURIOpener
ImportError: cannot import name ConnectionURIOpener
I can make it work with version 0.5.2 of SQLObject (with a few changes in the code), but I'd like to use version 0.6.x.
I don't understand the problem. I've tried with the svn version and with the downloadable one at SF.net
I use Python 2.3.3, Apache 2.xx, and the last non-CVS version of WW4PY (0.8.1 I think) on Windows 2000.
Thanx in advance for help
Phil
Source: http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200412&viewday=15
Hi all,

A summary of what this is about can be found here:

There are still a lot of things to handle. Especially about what is done by scheduler_tick(), but we also need to:

- completely handle cputime accounting (need to find every "reader" of cputime and flush cputimes for all of them)
- handle perf
- handle irqtime finegrained accounting
- handle ilb load balancing
- etc...

Nonetheless this is time to post a new iteration of the patchset because the design has changed a bit, some bugs have been fixed, more simplification, more unification with dynticks-idle code, namespace fixes, various improvements here and there...

The git branch can be fetched from:

git://github.com/fweisbec/linux-dynticks.git nohz/cpuset-v2

Changelog since v1:

- Rebase against 3.3-rc7 + tip:timers/core branch targeted for 3.4-rc1
- Refine some changelogs
- Restart the tick anytime more than one task is on the runqueue. We were previously only covering wake ups, now we also handle migration and any other source of task enqueuing
- Handle use of RCU in schedule() when called right before resuming userspace (new schedule_user() API)
- Take the decision to stop the tick from irq exit instead of the middle of the timer interrupt. This gives more opportunity to stop it and is one step more to unify idle and adaptive tickless.
- Unify tickless idle and tickless user/system CPU time accounting infrastructures.
- If the tick is stopped adaptively and we are going to schedule the idle task, don't restart the tick.
- Remove task_nohz_mode per cpu var and use ts->tick_stopped instead. This leads to more unification between idle tickless and adaptive tickless.

Frederic Weisbecker (32):
  nohz: Separate idle sleeping time accounting from nohz logic
  nohz: Make nohz API agnostic against idle ticks cputime accounting
  nohz: Rename ts->idle_tick to ts->last_tick
  nohz: Move nohz load balancer selection into idle logic
  nohz: Move ts->idle_calls incrementation into strict idle logic
  nohz: Move next idle expiry time record into idle logic area
  x86: Add adaptive tickless hooks on do_notify_resume()
  nohz: Don't restart the tick before scheduling to idle
  rcu: New rcu_user_enter() and rcu_user_exit() APIs
  rcu: New rcu_user_enter_irq() and rcu_user_exit_irq() APIs
  rcu: Switch to extended quiescent state in userspace from nohz cpuset
  nohz: Exit RCU idle mode when we schedule before resuming userspace
  nohz/cpuset: Disable under some configs

 arch/Kconfig                       |    3 +
 arch/x86/Kconfig                   |    1 +
 arch/x86/include/asm/entry_arch.h  |    3 +
 arch/x86/include/asm/hw_irq.h      |    7 +
 arch/x86/include/asm/irq_vectors.h |    2 +
 arch/x86/include/asm/smp.h         |   11 +
 arch/x86/include/asm/thread_info.h |   10 +-
 arch/x86/kernel/entry_64.S         |   12 +-
 arch/x86/kernel/irqinit.c          |    4 +
 arch/x86/kernel/ptrace.c           |   10 +
 arch/x86/kernel/signal.c           |    3 +
 arch/x86/kernel/smp.c              |   26 ++
 arch/x86/kernel/traps.c            |   20 +-
 arch/x86/mm/fault.c                |   13 +-
 fs/proc/array.c                    |    2 +
 include/linux/cpuset.h             |   29 ++
 include/linux/kernel_stat.h        |    2 +
 include/linux/posix-timers.h       |    1 +
 include/linux/rcupdate.h           |    8 +
 include/linux/sched.h              |   10 +-
 include/linux/tick.h               |   75 ++++--
 init/Kconfig                       |    8 +
 kernel/cpuset.c                    |  107 +++++++
 kernel/exit.c                      |    8 +
 kernel/posix-cpu-timers.c          |   12 +
 kernel/printk.c                    |   15 +-
 kernel/rcutree.c                   |  150 ++++++++--
 kernel/sched/core.c                |   83 ++++++-
 kernel/sched/sched.h               |   23 ++
 kernel/softirq.c                   |    6 +-
 kernel/sys.c                       |    6 +
 kernel/time/tick-sched.c           |  540 +++++++++++++++++++++++++++++-------
 kernel/time/timer_list.c           |    7 +-
 kernel/timer.c                     |    2 +-
 34 files changed, 1042 insertions(+), 177 deletions(-)

--
1.7.5.4
Source: http://lkml.org/lkml/2012/3/21/214
#include <iostream>
#include <cmath>
using namespace std;

int main ()
{
    int selection = 0;       // currently unused
    double input1(0.0);
    double input2(0.0);
    char indicator('n');     // currently unused

    cout << "Welcome to Andrew's C++ Calculator" << endl;
    cout << " Please enter two numbers to find the sum, difference, product, quotient, square root, and power " << endl;
    for (;;)
    {
        cin >> input1;
        cin >> input2;
        cout << " The difference of " << input1 << " minus " << input2 << " is " << input1 - input2 << endl;
        cout << " The sum of " << input1 << " plus " << input2 << " is " << input1 + input2 << endl;
        cout << " The product of " << input1 << " times " << input2 << " is " << input1 * input2 << endl;
        cout << " The quotient of " << input1 << " divided by " << input2 << " is " << input1 / input2 << endl;
        cout << " The square root of " << input1 << " is " << sqrt( input1 ) << endl;
        cout << input2 << " to the 4th power is " << pow( input2, 4 ) << endl;
    }
    return 0;
}
#include <iostream>
#include <string>
using namespace std;

int main (){
string first_name;
string last_name;
int idnum;
int jobclass;
int totalhours;
cout << " State employee's name. ( First, Last ) " << endl;
cin >> first_name;
cin >> last_name;
cout << " State your Id Number. " << endl;
cin >> idnum;
cout << " State job classification. " << endl;
cin >> jobclass;
cout << " Hours worked this week: " << endl;
cin >> totalhours;
double hourlyrate;
switch (jobclass){
case 1:
hourlyrate = 5.50;
break;
case 2:
hourlyrate = 6.00;
break;
case 3:
hourlyrate = 7.00;
break;
case 4:
hourlyrate = 9.00;
break;
case 5:
hourlyrate = 12.00;
break;
default:
hourlyrate = 5.50;
break;
}
int overtimehours, regularhours;
if (totalhours > 40){ //employee worked more than regular hours
overtimehours = totalhours - 40;
regularhours =40;
}
else { //employee didn't work more than regular hours
overtimehours = 0;
regularhours = totalhours;
}
double reg_pay;
double overtime_pay;
reg_pay = regularhours * hourlyrate;
overtime_pay = (overtimehours) * 1.5 * hourlyrate;
cout << " Employee Name: " << first_name << " " << last_name << " ID. Number: " << idnum << endl;
cout << " Job Classification: " << jobclass << " Hourly Rate " << hourlyrate << endl;
cout << " Total Hours Worked: " << totalhours << " Overtime Hours: " << overtimehours << endl;
cout << " Regular Pay: " << reg_pay << " Overtime Pay: " << overtime_pay << endl;
cout << " Total Earnings ......... " << reg_pay + overtime_pay << endl;
if (totalhours < 40){
cout << " Inadequate number of hours worked. " << endl;
}
if (totalhours > 60){
cout << " Excessive number of hours worked! " << endl;
}
if (jobclass < 1 || jobclass > 5) {
cout << " **** The Employee's Job Classification is in error **** " << endl;
}
} | http://www.cplusplus.com/forum/beginner/117163/ | CC-MAIN-2015-18 | refinedweb | 323 | 53.41 |
In this Python tutorial, we will learn about Fractal Python Turtle and we will also cover different examples related to fractal turtles. And, we will cover these topics.
- Fractal python turtle
- Fractal tree python turtle
- Fractal recursion python turtle
- Fractal drawing turtle
Fractal python turtle
In this section, we will learn about the fractal turtle in Python Turtle.
A fractal is made of repeating geometric shapes drawn at different scales and sizes: the same form is repeated again and again, each time at a different scale, so the copies vary in size but not in shape.
Code:
In the following code, we use the turtle to draw a fractal star built from repeated, scaled-down shapes. To create it, we import the turtle library.
We use the speed(), penup(), pendown(), forward(), left(), goto(), getscreen(), and bgcolor() functions to draw this shape.
- speed() sets how fast the turtle draws the shape.
- penup() lifts the pen, so moving the turtle does not draw.
- pendown() lowers the pen, so moving the turtle draws.
- goto() moves the turtle to the given coordinates.
- forward() moves the turtle forward.
- left() turns the turtle to the left.
from turtle import *
import turtle

tur = turtle.Turtle()
tur.speed(6)
tur.getscreen().bgcolor("black")
tur.color("cyan")
tur.penup()
tur.goto((-200, 50))
tur.pendown()

def star(turtle, size):
    if size <= 10:
        return
    else:
        for i in range(5):
            turtle.forward(size)
            star(turtle, size/3)
            turtle.left(216)

star(tur, 360)
turtle.done()
Output:
In the following output, we can see the star shape drawn at different scales, as shown in the gif; it was drawn using the speed(), penup(), pendown(), forward(), left(), and goto() functions.
Fractal tree python turtle
In this section, we will learn how to create a fractal tree with Python Turtle.
We build the tree by recursively adding left and right sub-branches, shortening each new sub-branch until a minimum length is reached.
Code:
In the following code, we import the turtle module (from turtle import *, import turtle) to create the fractal tree.
Each branch of the tree splits into two sub-branches that form a Y; the variable angle controls the acute angle between them.
We call the speed() function so the tree is drawn at the speed the user has assigned.
- speed() defines how fast the pen draws.
- yaxis() is the recursive function that draws one branch and its two sub-branches (the Y shape).
- pencolor() sets the pen color; here the green level varies with the recursion depth.
from turtle import *
import turtle

speed('fastest')
right(-90)
angle = 30

def yaxis(size, lvl):
    if lvl > 0:
        colormode(255)
        pencolor(0, 255//lvl, 0)
        forward(size)
        right(angle)
        yaxis(0.8 * size, lvl-1)
        pencolor(0, 255//lvl, 0)
        lt(2 * angle)
        yaxis(0.8 * size, lvl-1)
        pencolor(0, 255//lvl, 0)
        right(angle)
        forward(-size)

yaxis(80, 7)
turtle.done()
Output:
After running the above code, we get the following output in which we can see the fractal tree is created with size 80 and level 7.
Fractal recursion python turtle
In this section, we will learn about fractal recursion in python turtle.
Recursion is the process of a function repeating itself in a similar way; a fractal uses recursion to generate many copies of the same picture, which together form the fractal pattern.
Code:
In the following code, we imported the turtle library, set the window title to "Python Guides", assigned the bg_color, and gave the screen height and width.
We define drawline(), which draws from pos1 to pos2 ("pos" is short for position), and then recursivedraw(), which is used to generate multiple copies of the same picture.
from turtle import *
import turtle

speed = 5
bg_color = "black"
pen_color = "red"
screen_width = 800
screen_height = 800
drawing_width = 700
drawing_height = 700
pen_width = 5
title = "Python Guides"
fractal_depth = 3

def drawline(tur, pos1, pos2):
    # Draw a line from pos1 to pos2 (useful for tracing the algorithm).
    tur.penup()
    tur.goto(pos1[0], pos1[1])
    tur.pendown()
    tur.goto(pos2[0], pos2[1])

def recursivedraw(tur, x, y, width, height, count):
    drawline(
        tur,
        [x + width * 0.25, height // 2 + y],
        [x + width * 0.75, height // 2 + y],
    )
    drawline(
        tur,
        [x + width * 0.25, (height * 0.5) // 2 + y],
        [x + width * 0.25, (height * 1.5) // 2 + y],
    )
    drawline(
        tur,
        [x + width * 0.75, (height * 0.5) // 2 + y],
        [x + width * 0.75, (height * 1.5) // 2 + y],
    )

    if count <= 0:  # The base case
        return
    else:           # The recursive step
        count -= 1
        recursivedraw(tur, x, y, width // 2, height // 2, count)
        recursivedraw(tur, x + width // 2, y, width // 2, height // 2, count)
        recursivedraw(tur, x, y + width // 2, width // 2, height // 2, count)
        recursivedraw(tur, x + width // 2, y + width // 2, width // 2, height // 2, count)

if __name__ == "__main__":
    screenset = turtle.Screen()
    screenset.setup(screen_width, screen_height)
    screenset.title(title)
    screenset.bgcolor(bg_color)

    artistpen = turtle.Turtle()
    artistpen.hideturtle()
    artistpen.pensize(pen_width)
    artistpen.color(pen_color)
    artistpen.speed(speed)

    recursivedraw(artistpen, -drawing_width / 2, -drawing_height / 2,
                  drawing_width, drawing_height, fractal_depth)

    turtle.done()
Output:
In the following output, we can see how the recursion works, making multiple copies of the same picture.
Fractal drawing turtle
In this section, we will learn about how to draw fractal drawings in python turtle.
A fractal generates many copies of the same picture at different scales, which together form a fractal pattern. This fractal drawing is made with the help of the turtle.
Code:
In the following code, we have imported the turtle library and defined fractdraw(); we then use left(), right(), and forward() to steer the pattern.
from turtle import *
import turtle

def fractdraw(stp, rule, ang, dept, t):
    if dept > 0:
        x = lambda: fractdraw(stp, "a", ang, dept - 1, t)
        y = lambda: fractdraw(stp, "b", ang, dept - 1, t)
        left = lambda: t.left(ang)
        right = lambda: t.right(ang)
        forward = lambda: t.forward(stp)
        if rule == "a":
            left(); y(); forward(); right(); x(); forward(); x(); right(); forward(); y(); left()
        if rule == "b":
            right(); x(); forward(); left(); y(); forward(); y(); left(); forward(); x(); right()

turtle = turtle.Turtle()
turtle.speed(0)
fractdraw(5, "a", 90, 5, turtle)
Output:
In the following output, we can see how the fractal drawing works: the same path is repeated many times to create the fractal pattern.
In this tutorial we discussed Fractal Python Turtle and covered different examples related to its implementation. Here is the list of examples that we have covered.
- Fractal python turtle
- Fractal tree python turtle
- Fractal recursion python turtle
- Fractal drawing turtle
Python Tutorial - Statement and Comment
In this section, Python statements, indentation and comments will be discussed.
Docstring as a special type of Python comment is also introduced in the last session.
Python Statement
A Python statement is an instruction given to the interpreter to execute. A statement can contain an expression like below
result = x + y
A Python statement can be regarded as an instruction to the interpreter to evaluate the expression and store its result in a variable. Statements such as
for and
while are compound statements, which contain other statements.
Python Multi-Line Statements
When you press the Enter key after a statement, that statement is terminated and is a one-line statement. A multi-line statement can be created in Python by using the line continuation character
\, which extends a statement across several lines.
Consider the code below:
x = 100 + 101 + 102 \
    + 103 + 104 \
    + 105
This is called explicit line continuation.
You can also do implicit line continuation by using parenthesis
(), square brackets
[] or curly braces
{}.
For example, you can write the above multi-line statement using parentheses as:
x = (100 + 101 + 102
     + 103 + 104
     + 105)
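Implicit continuation works the same way inside square brackets and curly braces; for example:

```python
# Implicit line continuation inside [] and {} -- no backslash needed.
colors = [
    "red",
    "green",
    "blue",
]
point = {
    "x": 10,
    "y": 20,
}
print(len(colors))              # 3
print(point["x"] + point["y"])  # 30
```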
Python Indentation
A block of statements, for example the body of a function, a loop, or a class, starts with indentation. The indentation should be the same for each statement inside the block. You will get an
IndentationError when the indentation is incorrect.
Usually 4 spaces are used for indentation, as advised in the Style Guide for Python Code (PEP 8). Consider the example below:
x = {1, 2, 3, 4}
for i in x:
    print(i)
Indentation is basically used to create more readable programs.
In the example below, the same code is written in two different ways:
x = {1, 2, 3, 4}
for i in x:
    print(i)

for i in x:
 print(i)
You can see here that the first example has better readability than the second one.
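To see the IndentationError mentioned above without breaking the script itself, you can compile a badly indented snippet from a string (a small sketch; the exact error message text varies between Python versions):

```python
# The badly indented code lives in a string, so this script still runs.
bad_code = "for i in range(3):\nprint(i)"  # loop body is not indented

try:
    compile(bad_code, "<example>", "exec")
except IndentationError as err:
    print("IndentationError:", err.msg)
```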
Python Comments
Comments are used to describe the purpose or working of a program. Comments are the lines ignored by Python during interpreting and they do not disturb the flow of a program.
If you are writing code that is hundreds of lines long, you should add comments, since other users will not have time to read every line to understand how the code works. In this way, comments increase readability and explain the working of the code.
A Python comment starts from hash
# symbol.
#Single Line comment
#Program to print a string
print("Hello Python Programmer")
Python Multi-Line Comments
Using the hash symbol on each line defines a multi-line comment. But there is another way to add a multi-line comment in Python, that is, using triple quotation marks. You can use either
''' or
""".
"""Multi-line comments in
Python Programming language"""
Triple quotation marks are actually used to define a multi-line documentation string, but you can also use them as multi-line comments.
Docstring in Python
A docstring (documentation string) is the first statement in a Python function, class, module, etc. The description of a function, method, or class goes inside its docstring.
Consider the example below:
def sum(a, b):
    """This function adds two values"""
    return a+b
You can see here that the docstring tells what the function does.
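Unlike a comment, a docstring is kept at run time: it is stored in the function's `__doc__` attribute and is shown by `help()`. A small example (using the name `total` rather than `sum` to avoid shadowing the built-in):

```python
def total(a, b):
    """This function adds two values"""
    return a + b

print(total.__doc__)  # This function adds two values
print(total(2, 3))    # 5
```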
greyjusticar (Registered Member)
Posted in: Mods Discussion (Dec 5, 2011): nope that did not work
Posted in: Mods Discussion: anyone else have any ideas while this is reobfuscating
Posted in: Mods Discussion: well im testing it in mcp. Ill test in an unmodded minecraft right now
Posted in: Mods Discussion: Ok ill see if anyone can help me on this, I have been trying to fix this for the past 3 days, don't know why its not working.
I have updated the WildCaves mod and added a few more stalactites and stalagmites, but the generation in the world will not work at all. I made a temp crafting recipe and it works great: I can place them and break them, but it will not generate anything in the world. Here's the code:
package net.minecraft.src;
import java.util.Random;
public class WorldGenStalactite extends WorldGenerator
{
public WorldGenStalactite(int i)
{
dogEatFat = i;
}
public boolean generate(World world, Random random, int i, int j, int k)
{
for(int l = 0; l < 7; l++)
{
int i1 = (i + random.nextInt(8)) - random.nextInt(8);
int j1 = (j + random.nextInt(4)) - random.nextInt(4);
int k1 = (k + random.nextInt(8)) - random.nextInt(8);
if(world.isAirBlock(i1, j1, k1) && ((BlockStalactite)Block.blocksList[dogEatFat]).canBlockStay(world, i1, j1, k1))
{
world.setBlock(i1, j1, k1, mod_wildcaves.Stalactite.blockID);
world.setBlock(i1, j1 - 1, k1, mod_wildcaves.Stalactite.blockID);
}
}
return true;
}
private int dogEatFat;
}
Please tell me: what is wrong in this generation code that makes it not generate my stalactite block?
The dogEatFat field is from Psientist's mods, which are now mine to update, so I have full permission; you can check out his old thread and see for yourselves at the top.
No, I'm not a coder; I'm just starting to learn Java and am trying modding early, so I'm still new.
So if you want to help me out with making a new dimension that I thought of, or are willing to help me with this and a few other coding things, please send a PM.
Thanks to anyone who can help.
greyjusticar posted a message on Mini Job (Posted in: Discussion): lol, well i figured more would respond if they were paid to make some mods, but looks like there are few coders around.
greyjusticar posted a message on Mini Job (Posted in: Discussion): Howdy, I was wondering how many Minecraft coders/modders would be willing to do basically a mini job. Was wondering if that is even allowed; anyways, respond if you would be willing to take on a mini job.
No, I am NOT doing this right now; I'm just asking around.
But I would like to know what kind of payment fits what kind of mod, ratio-wise.
Thanks
Greyjusticar
Posted in: Minecraft Mods: bump
Posted in: Minecraft Mods: Wow, adding all these images at once was a chore. I'll have the rest hopefully by tomorrow, between work and trying to get images. I can't mod, lol.
Posted in: Minecraft Mods: ok, well I have asked him to post on his original mod post that he gave me permission. Ill have the permission cleared up soon, hopefully.
I apologize, but I can't download this with a clear conscience without seeing some proof that you got permission from him, only because they are on adfly links. No offense or anything, but if you get proof, I'll dl and check them out, if not, consider this thread reported for attempting to gain money off someone else's content. Even if you did update it, it shouldn't be on adfly links without some visible proof of permission granted, especially since there is someone else updating them and doing so without adfly links.
Again, not calling you a thief or anything, but with how much content theft seems to be happening, I feel the need to give you warning it will be reported if proof of permission is not posted. Wouldn't have even been an issue if you weren't using adfly and someone else wasn't updating it already.
I understand completely did not think about it until I had already posted anyways he has updated his link vist his mod post and at the top of it is proof.
Posted in: Minecraft Mods: Ok, well I got permission to use AdFly from him, and ill have pics for all his mods on my post later tonight.
Posted in: Minecraft Mods: Guys, Im doing the pictures now, but I posted Psientists original post so that you do have pictures.
Edit: Can't get my computer to take a bloody picture, so it's going to be just a little bit longer until I figure out why it won't take.
Posted in: Minecraft Mods
Howdy everyone
Permission!
Ok I have been granted permission to do whatever I want with these mods, So I have updated them to (minecraft 1.0.0).
I will begin to add addons for them and hopefully get some block id's down.
For permissions, please visit Psientist's main mod post and read the top. Thank you!
I have full Permission to do what I like.
I just can't let these mods die; they are too good, lol.
Or this is from his post.
Update:
I've abandoned my mods and texture pack (obviously), but the user Greyjusticar asked me for permission to update/modify my mods and to use AdFly in combination with them. And I'm totally fine with him taking the responsibility for that!
I guess that's all
I need a name for all these and my addons so for now its Enhanced wildlife. (Please post what you think a good name would be).
All these mods are mainly for use with 256x texture packs. I will, however, be making (hopefully I can come up with something) new textures for 16x and 32x, because downscaling to those looks horrid, lol.
I am trying to put together a texture pack so be on the look out for that.
And if you want me to try and update Psientist's texture pack, please leave a comment; I'll see if I can.
ALL MODS NEED MODLOADER!!
All these mods are Psientist's; I have only updated them. (I have gained permission to edit whatever I like, though.)
Glow Vines
Version: 1.0.0
Block ID's: 170-171
Crafting: 1x = 1x Paper. 3x = 1x Ink Sack (Black Dye)
Uses: Decoration, weak light source. Can be placed under stone, dirt and grass.
Images:
Download: Glow Vines
More Plants and Seeds
Version: 1.0.0
Block ID's: 173-178
Crafting: Can be crafted into dyes specific to each flower. 9x Dirt = 1x Flower seed.
Uses: Flower seeds turns into a random flower after a while if planted and given enough light. Flowers can also be used for decoration.
Images:
Additional info:
Download: More Plants and Seed
The Flowers: (They actually don't do any cool stuff, I just wanted a little story behind their names)
Bluebulb - This plant was named after its buds, which bloom into flowers for just one day before the flower wilts and dies.
Creeperlure - This flower was first discovered by a hunter who was stalking the woods near his camp for a creeper that had been seen sneaking around the other day. When he finally tracked down the creeper he found not just one but three creepers, gently grazing on a field of these plants. This is how the flower earned its name and a reputation for attracting creepers; whether or not the latter is actually true remains to be proven.
Dankweed - This plant thrives in warm climates and seems to constantly "sweat" an oily and sticky fluid. Having seemingly no use as either food or medicine, it was eventually discovered that the fluid it produces makes for an excellent dye.
Flamesprout - The plant itself doesn't really give any impression of being fire-like, but every night the plant opens up its flowers and releases pheromones which attract a glowing insect that feeds on its nectar. The way these insects seem to dance around the flowers like little sparks eventually gave it the name Flamesprout among the natives.
Nightcrawler - There are legends around these oddly colored and shaped flowers that they used to originate from the ocean, and so every night they crawl in unison under the moonlight back to the waters that birthed them.
This is however, as any scholar will tell you, untrue and probably originates from people with very bad orientation skills.
Whisperstalk - The stem of this flower is full of small holes, and on a silent day you can hear how the wind produces faint sounds as it blows through the stem; this has produced many tales regarding the flower. Some say that if you sleep in a field of Whisperstalks they'll tell you many secrets; others revere the plants and believe that they hold the souls of their village's departed.
Steel Box
Version: 1.0.0
Block ID's: 169
Crafting: A wooden chest surrounded by 8 Iron Ingots turns into 1 Steel Box.
Uses: Can store as much as a large chest but only using 1 block, and can be placed more densely.
Images:
Download: Steel Box
Flametongue
Version: 1.0.0
Block ID's: 183
Crafting: No crafting recipes as of now.
Uses: Decorative, can also be put in a furnace where it produces as much heat as a bucket of lava. Can be placed on sand or stone.
Images:
Additional info:
Downloads: Flametongue
This rare flower is sometimes found in deserts or on sandy beaches, but despite its rarity it's very hard to miss, due to the fact that it's emitting flames!
The biology of this flower causes it to produce chemical vapors that catch fire on contact with air; fortunately for the flower, it is quite fire resistant.
The same is however not true for humans, so great caution should be exercised when attempting to pick this flower as it's most likely to set anyone but an experienced botanist on fire in the process.
Concrete and Marble
Version: 1.0.0
Block ID's: 150-161
Crafting: Sand + Gravel makes Concrete. Concrete can be turned into marble by combining it with: dirt, sand, sandstone, netherrack, gold ingot, obsidian, gravel, stone, cobblestone, Lapis block or Clay.
Uses: Purely decorative, plus I wanted to make gravel and sand more useful.
Images:
Downloads: Concrete and Marble
Not done yet please wait a bit longer
Climbable Vines
Version: 1.0.0
Block ID's: 132
Warning: In order to climb the vines you need to add the nq.class into your jar (otherwise you just wont be ablet to climb them
Crafting: No crafting uses as of now.
Uses: Can be attached to Leaves, Planks or Wooden blocks and used to climb, instead of using ladders. You can place a vine onto another vine and it'll extend further.
Images:
Additional info:
Downloads: Climable Vines
Notice:
I based this on Djoslins Rope Mod because I thought his mod is something of a must have for minecraft, but I also wanted to add my own vines in the game so I decided on a compromise and simply modified his code to fit onto my vines.
I'm not ashamed to admit I'm just a hack, I would never have been able to write such nice code, so go check out his mods
Notice #2: This mod modifies the lo.class (EntityLiving.java), if you have a mod which also modifies it then it won't be compatible.
Advanced users may simply find and replace these lines in the EntityLiving.java to work-around it:
public boolean isOnLadder() { int i = MathHelper.floor_double(posX); int j = MathHelper.floor_double(boundingBox.minY); int k = MathHelper.floor_double(posZ); return worldObj.getBlockId(i, j, k) == Block.ladder.blockID || worldObj.getBlockId(i, j, k) == mod_Lians.liantop.blockID; }
The Green Hell
Version: 1.0.0
Block ID's: 162-168
Crafting: None.
Uses: Decoration, natural hazards.
Images:
Additional info:
Downloads: Green Hell
This mod adds the following:
1 standard run of the mill bush.
1 grass as tall as yourself. Spreads into smaller grass which eventually grows into this grass.
1 thorny bush. it works like a cactus but only 1 block high. spreads slowly.
1 poisonous bush. (the purple/red plant in the pictures) which slows you down and hurts you as you walk through it. Spreads slowly.
1 winter bush. Does nothing, just adds a little life to snowy areas.
Angler Mushroom
Version: 1.0.0
Block ID's: 179-182
Crafting: No crafting uses as of now.
Uses: Can be used in defense against other players or for griefing, but it's real purpose is to act as an environmental hazard. It grows in caves and acts as a strong light source, it occasionally creates a puff of poisonous gas which slowly dissipates. It also spreads slowly, potentially creating entire cave systems full of poison given enough time.
Images:
Additional info:
Downloads: Angler Mushroom
Fluff: The Angler Mushrooms is a predatory fungus, they lure creatures to themselves by emitting a bright light and by their glowing colorful patterns. Once a prey comes close enough the angler mushroom will emit poisonous gas which kills a man-sized mammal within seconds. The fungi then creeps closer using it's arm-like feeding appendage and starts devouring it's prey by dissolving it with it's stomach acid.
Sadly for us, most of the native lifeforms that actually presents a threat to us have proven to be far more resilient towards Angler Mushroom poison than us.
Ok now me I have changed the way the poison is generated hopefully this works out better.
Forgotten Ruins (Without treasures)
Version: 1.0.0
Block ID's: 185-188
Crafting: None.
Uses: Decoration, exploration, slim chance to drop treasure boxes*.
Images:
Additional info:
Please visit his main post and see them ill have pics later for this one
Download: Forgotten ruins
Please visit Psientists original post
I will be making addons to these mods
Bugs: None reported yet
Ok guys there you are hopefully I can work on the treasures and get that version going to.
And what Psientist idea for more mods was this.
Fog & Marshes - The idea is that I'll make a marshland block which will create fog blocks in empty spaces around itself. The fog block itself will simply be intangible so that you'll be able to walk through it. Your vision will be impaired within the fog.
Will probably require looking into liquids coding so I'll save it for later.
I am going to attempt to create what he wanted and this will be generated in the swamp biomes make it a bit more.
I am not a coder tho im trying to learn and updating these was some usefull experience but I want to make a more advanced mod I have a couple good ideas that I think everyone will love. So if you are a coder good with Gui and buttons in game and would like to help send me a pm. I also need to know a couple things about coding if someone would be willing to help me out on that send me a pm.
And again im going to try and make all these mods use less id's just need to learn how:) lol
Ok guys please leave comments and suggestions Because I am going to start addons and more recipes to these mods.
Oh and can someone make a video showing all these mods working with an HD texture pack I still need to make the smaller sprites.
And please for those who like to make textures make some textures for these mods to match other texture packs.
Ill post your textures for alternates to download.
please report bugs
0greyjusticar posted a message on Better Than Wolves Total Conversion!Oh ya don't get me wrong Love the mod and elevator just epic pure epic lolPosted in: Minecraft Mods
This mod can compete with any mod out there.
But taking forge away makes less possible creations designs for my automated machines.
I like the cook as much as you like above the fire.
As for my logic thats the way I look at most things don't know if thats good or bad.
Did anyone get my Joke? lolol 2 cents? yes? O COME ON!!
This segment is a copy: Im looking for someone to help me understand where multiplayer inputs are suppose to go in the code. (Im still a newb at coding period plus newber for minecraft code lol)
0greyjusticar posted a message on LB Photo Realism, 1.6 convert 7/16/2013. RPG Realism 1.3.1 updated 10/12/2012Howdy howdyPosted in: Resource Packs
ok im making a texture pack i was wondering if i could use your texture pack as a base.
Just about all the terrain I will change to mine and some of the items I will change to mine.
But some of the textures like water lava armor pieces things like that I really like yours.
Anywho I sent a pm a while back but no responce so hopefully I can get a responce.
Thanks
Greyjusticar
0greyjusticar posted a message on Better Than Wolves Total Conversion!Ok so let me get this straight because you have issues with someone in forge your not going to use it anymore?Posted in: Minecraft Mods
And you may not of made the mod for popularity but you still made it to be downloaded, you still setup an entire forum page about your mod tons of info, why? for the community to enjoy correct?
Even if it takes awhile to get your hooks in they are still going in correct? even railcraft said something of the sort they its taking a long time to get new hooks in.
Patients is all thats needed in this case
Question||were you helping code forge? if not then you think maybe its a bit difficult to add hooks to forge?
so it just taking time?
anyways my 2 cents lol get it 2 cents? MUAHAHAHA
i know all you reading this joke are laughing your heads off because of how good it is!!!!
- | https://www.minecraftforum.net/members/greyjusticar/posts?page=27 | CC-MAIN-2020-34 | refinedweb | 3,077 | 70.02 |
#include <hallo.h> Raphael Hertzog wrote on Fri May 17, 2002 um 10:42:25AM: > > late for woody ... and we'll release as is. But please update everything > > so that 3.0r1 will be ok. > > In fact, it's too important to "release as is". I suggest that we do yet > another bf upload but i386 only ... > > Eduard, can you update kernel-image-2.4.18-bf2.4 shortly ? Sense? It does fit on the floppy as-is, so where is the _good_ reason to make it module? The only one I can imagine is to make space for a forgotten driver for a weird Adaptec controller, which has been requested recently. Gruss/Regards, Eduard. -- DOS-Airlines Alle schieben das Flugzeug an, bis es abhebt, dann springen alle auf und lassen das Flugzeug trudeln, bis es wieder auf den Boden schlägt. Dann schieben wieder alle an, springen auf... -- To UNSUBSCRIBE, email to debian-boot-request@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org | http://lists.debian.org/debian-boot/2002/05/msg00453.html | CC-MAIN-2013-20 | refinedweb | 169 | 77.03 |
============= UST Global Interview Samples ===============• Technical Interview : C++, C (Basics, Codes and definations), Java (Recent technologies used by companies), Data Structures, Operating Systems
• HR Interview
++++++++Technical Interview Questions ++++++++++
Questions on C
1. What is C language ?
2. Explain different storage classes in C ?
3. What is Pointer ?
4. What is dangling Pointer ?
5. What is Void Pointer and when it is used?
6. Is it better to use malloc() or calloc()?
7. Should a function contain a return statement if it does not return a value ?
8. Is it possible to execute code even after the program exits the main() function ?
9. What is meant by “bit masking” ?
10. What are multibyte characters ?
Questions on C++ & Java :-
1. Can you mention some Application of C/C++ ?
2. What is a class ?
3. What is a object ?
4. What is a modifier ?
5. Define namespace.
6. Can you list out the areas in which data structures are applied extensively?
7. What is the advantage of OOP ?
8. List advantages and disadvantages of dynamic memory allocation vs. static memory allocation.?
9. What is Java?
10.Can you have virtual functions in Java?
11.What is thread?
12.What is multi-threading?
13. What is Externalizable?
14. What is an abstract method?
15. What do you understand by private, protected and public?
16. How are this() and super() used with constructors?
17. What are the steps in the JDBC connection?
18. Difference between HashMap and HashTable?
19. What are the different ways to handle exceptions?
20. What is synchronization and why is it important?
Questions on Data Structures :-
1. What is data structure?
2. If you are using C language to implement the heterogeneous linked list, what pointer type will you use?
3. In an AVL tree, at what condition the balancing is to be done?
4. What are the advantages and disadvantages of B-star trees over Binary trees.?
5. In RDBMS, what is the efficient data structure used in the internal storage representation?
6. What is a spanning Tree?
7. Does the minimum spanning tree of a graph give the shortest distance between any 2 specified nodes?
8. Explain different type of sorting techniques ?
9. Write down the algorithm of fastest and the slowest sorting techniques in all conditions ?
10. Explain Binary search ?
UST Global HR Interview Questions :-
1. Tell me about yourself ?
2. What are your strengths ?
3. What are your weaknesses ?
4. Why you want to join UST Global ?
5. What do you know about UST Global ?
6. Where do you see yourself 3-5 years from now?
7. Will you relocate ?
8. Why should i hire you ? | https://www.crackmnc.com/2018/02/ust-global-interview-questions.html | CC-MAIN-2020-40 | refinedweb | 438 | 71.51 |
A very good collection of stories, loosely connected by Philippe Petit’s high-wire walk across the Twin Towers in 1974.
Monthly Archives: April 2010
Book: Bambi vs. Godzilla
Book: A Reporter’s Life
Book: The Meaning of the Constitution
I?
Book: Erlang Programming
Book: The Hakawati
An Introduction to Asynchronous Programming and Twisted
Part 14: When a Deferred Isn’t
This continues the introduction started here. You can find an index to the entire series here.
Introduction
In this part we’re going to learn another aspect of the
Deferred class. To motivate the discussion, we’ll add one more server to our stable of poetry-related services. Suppose we have a large number of internal clients who want to get poetry from the same external server. But this external server is slow and already over-burdened by the insatiable demand for poetry across the Internet. We don’t want to contribute to that poor server’s problems by sending all our clients there too.
So instead we’ll make a caching proxy server. When a client connects to the proxy, the proxy will either fetch the poem from the external server or return a cached copy of a previously retrieved poem. Then we can point all our clients at the proxy and our contribution to the external server’s load will be negligible. We illustrate this setup in Figure 30:
Consider what happens when a client connects to the proxy to get a poem. If the proxy’s cache is empty, the proxy must wait (asynchronously) for the external server to respond before sending a poem back. So far so good, we already know how to handle that situation with an asynchronous function that returns a deferred. On the other hand, if there’s already a poem in the cache, the proxy can send it back immediately, no need to wait at all. So the proxy’s internal mechanism for getting a poem will sometimes be asynchronous and sometimes synchronous.
So what do we do if we have a function that is only asynchronous some of the time? Twisted provides a couple of options, and they both depend on a feature of the
Deferred class we haven’t used yet: you can fire a deferred before you return it to the caller.
This works because, although you cannot fire a deferred twice, you can add callbacks and errbacks to a deferred after it has fired. And when you do so, the deferred simply continues firing the chain from where it last left off. One important thing to note is an already-fired deferred may fire the new callback (or errback, depending on the state of the deferred) immediately, i.e., right when you add it.
Consider Figure 31, showing a deferred that has been fired:
If we were to add another callback/errback pair at this point, then the deferred would immediately fire the new callback, as in Figure 32:
The callback (not the errback) is fired because the previous callback succeeded. If it had failed (raised an Exception or returned a Failure) then the new errback would have been called instead.
We can test out this new feature with the example code in twisted-deferred/defer-11.py. Read and run that script to see how a deferred behaves when you fire it and then add callbacks. Note how in the first example each new callback is invoked immediately (you can tell from the order of the print output).
The second example in that script shows how we can
pause() a deferred so it doesn’t fire the callbacks right away. When we are ready for the callbacks to fire, we call
unpause(). That’s actually the same mechanism the deferred uses to pause itself when one of its callbacks returns another deferred. Nifty!
Proxy 1.0
Now let’s look at the first version of the poetry proxy in twisted-server-1/poetry-proxy.py. Since the proxy acts as both a client and a server, it has two pairs of Protocol/Factory classes, one for serving up poetry, and one for getting a poem from the external server. We won’t bother looking at the code for the client pair, it’s the same as in previous poetry clients.
But before we look at the server pair, we’ll look at the
ProxyService, which the server-side protocol uses to get a poem:
class ProxyService(object): poem = None # the cached poem def __init__(self, host, port): self.host = host self.port = port def get_poem(self): if self.poem is not None: print 'Using cached poem.' return self.poem print 'Fetching poem from server.' factory = PoetryClientFactory() factory.deferred.addCallback(self.set_poem) from twisted.internet import reactor reactor.connectTCP(self.host, self.port, factory) return factory.deferred def set_poem(self, poem): self.poem = poem return poem
The key method there is
get_poem. If there’s already a poem in the cache, that method just returns the poem itself. On the other hand, if we haven’t got a poem yet, we initiate a connection to the external server and return a deferred that will fire when the poem comes back. So
get_poem is a function that is only asynchronous some of the time.
How do you handle a function like that? Let’s look at the server-side protocol/factory pair:
class PoetryProxyProtocol(Protocol): def connectionMade(self): d = maybeDeferred(self.factory.service.get_poem) d.addCallback(self.transport.write) d.addBoth(lambda r: self.transport.loseConnection()) class PoetryProxyFactory(ServerFactory): protocol = PoetryProxyProtocol def __init__(self, service): self.service = service
The factory is straightforward — it’s just saving a reference to the proxy service so that protocol instances can call the
get_poem method. The protocol is where the action is. Instead of calling
get_poem directly, the protocol uses a wrapper function from the
twisted.internet.defer module named
maybeDeferred.
The
maybeDeferred function takes a reference to another function, plus some optional arguments to call that function with (we aren’t using any here). Then
maybeDeferred will actually call that function and:
- If the function returns a deferred,
maybeDeferredreturns that same deferred, or
- If the function returns a Failure,
maybeDeferredreturns a new deferred that has been fired (via
.errback) with that Failure, or
- If the function returns a regular value,
maybeDeferredreturns a deferred that has already been fired with that value as the result, or
- If the function raises an exception,
maybeDeferredreturns a deferred that has already been fired (via
.errback()) with that exception wrapped in a Failure.
In other words, the return value from
maybeDeferred is guaranteed to be a deferred, even if the function you pass in never returns a deferred at all. This allows us to safely call a synchronous function (even one that fails with an exception) and treat it like an asynchronous function returning a deferred.
Note 1: There will still be a subtle difference, though. A deferred returned by a synchronous function has already been fired, so any callbacks or errbacks you add will run immediately, rather than in some future iteration of the reactor loop.
Note 2: In hindsight, perhaps naming a function that always returns a deferred “maybeDeferred” was not the best choice, but there you go.
Once the protocol has a real deferred in hand, it can just add some callbacks that send the poem to the client and then close the connection. And that’s it for our first poetry proxy!
Running the Proxy
To try out the proxy, start up a poetry server, like this:
python twisted-server-1/fastpoetry.py --port 10001 poetry/fascination.txt
And now start a proxy server like this:
python twisted-server-1/poetry-proxy.py --port 10000 10001
It should tell you that it’s proxying poetry on port 10000 for the server on port 10001.
Now you can point a client at the proxy:
python twisted-client-4/get-poetry.py 10000
We’ll use an earlier version of the client that isn’t concerned with poetry transformations. You should see the poem appear in the client window and some text in the proxy window saying it’s fetching the poem from the server. Now run the client again and the proxy should confirm it is using the cached version of the poem, while the client should show the same poem as before.
Proxy 2.0
As we mentioned earlier, there’s an alternative way to implement this scheme. This is illustrated in Poetry Proxy 2.0, located in twisted-server-2/poetry-proxy.py. Since we can fire deferreds before we return them, we can make the proxy service return an already-fired deferred when there’s already a poem in the cache. Here’s the new version of the
get_poem method on the proxy service:
def get_poem(self): if self.poem is not None: print 'Using cached poem.' # return an already-fired deferred return succeed(self.poem) print 'Fetching poem from server.' factory = PoetryClientFactory() factory.deferred.addCallback(self.set_poem) from twisted.internet import reactor reactor.connectTCP(self.host, self.port, factory) return factory.deferred
The
defer.succeed function is just a handy way to make an already-fired deferred given a result. Read the implementation for that function and you’ll see it’s simply a matter of making a new deferred and then firing it with
.callback(). If we wanted to return an already-failed deferred we could use
defer.fail instead.
In this version, since
get_poem always returns a deferred, the protocol class no longer needs to use
maybeDeferred (though it would still work if it did, as we learned above):
class PoetryProxyProtocol(Protocol): def connectionMade(self): d = self.factory.service.get_poem() d.addCallback(self.transport.write) d.addBoth(lambda r: self.transport.loseConnection())
Other than these two changes, the second version of the proxy is just like the first, and you can run it in the same way we ran the original version.
Summary
In this Part we learned how deferreds can be fired before they are returned, and thus we can use them in synchronous (or sometimes synchronous) code. And we have two ways to do that:
- We can use
maybeDeferredto handle a function that sometimes returns a deferred and other times returns a regular value (or throws an exception), or
- We can pre-fire our own deferreds, using
defer.succeedand
defer.fail, so our “semi-synchronous” functions always return a deferred no matter what.
Which technique we choose is really up to us. The former emphasizes the fact that our functions aren’t always asynchronous while the latter makes the client code simpler. Perhaps there’s not a definitive argument for choosing one over the other.
Both techniques are made possible because we can add callbacks and errbacks to a deferred after it has fired. And that explains the curious fact we discovered in Part 9 and the twisted-deferred/defer-unhandled.py example. We learned that an “unhandled error” in a deferred, in which either the last callback or errback fails, isn’t reported until the deferred is garbage collected (i.e., there are no more references to it in user code). Now we know why — since we could always add another callback pair to a deferred which does handle that error, it’s not until the last reference to a deferred is dropped that Twisted can say the error was not handled.
Now that you’ve spent so much time exploring the
Deferred class, which is located in the
twisted.internet package, you may have noticed it doesn’t actually have anything to do with the Internet. It’s just an abstraction for managing callbacks. So what’s it doing there? That is an artifact of Twisted’s history. In the best of all possible worlds (where I am paid millions of dollars to play in the World Ultimate Frisbee League), the
defer module would probably be in
twisted.python. Of course, in that world you would probably be too busy fighting crime with your super-powers to read this introduction. I suppose that’s life.
So is that it for deferreds? Do we finally know all their features? For the most part, we do. But Twisted includes alternate ways of using deferreds that we haven’t explored yet (we’ll get there!). And in the meantime, the Twisted developers have been beavering away adding new stuff. In an upcoming release, the
Deferred class will acquire a brand new capability. We’ll introduce it in a future Part, but first we’ll take a break from deferreds and look at some other aspects of Twisted, including testing in Part 15.
Suggested Exercises
- Modify the twisted-deferred/defer-11.py example to illustrate pre-failing deferreds using
.errback(). Read the documentation and implementation of the
defer.failfunction.
- Modify the proxy so that a cached poem older than 2 hours is discarded, causing the next poetry request to re-request it from the server
- The proxy is supposed to avoid contacting the server more than once, but if several client requests come in at the same time when there is no poem in the cache, the proxy will make multiple poetry requests. It’s easier to see if you use a slow server to test it out.
Modify the proxy service so that only one request is generated. Right now the service only has two states: either the poem is in the cache or it isn’t. You will need to recognize a third state indicating a request has been made but not completed. When the
get_poemmethod is called in the third state, add a new deferred to a list of ‘waiters’. That new deferred will be the result of the
get_poemmethod. When the poem finally comes back, fire all the waiting deferreds with the poem and transition to the cached state. On the other hand, if the poem fails, fire the
.errback()method of all the waiters and transition to the non-cached state.
- Add a transformation proxy to the proxy service. This service should work like the original transformation service, but use an external server to do the transformations.
- Consider this hypothetical piece of code:
d = some_async_function() # d is a Deferred d.addCallback(my_callback) d.addCallback(my_other_callback) d.addErrback(my_errback)
Suppose that when the deferred
dis returned on line 1, it has not been fired. Is it possible for that deferred
to fire while we are adding our callbacks and errback on lines 2-4? Why or why not? | http://krondo.com/?m=201004 | CC-MAIN-2014-52 | refinedweb | 2,415 | 63.8 |
This project allows images to be automatically grouped into like clusters using a combination of machine learning techniques.
This project allows numerical features to be reduced down to fewer dimensions for plotting using unsupervised machine learning. Features can be taken simply as face value numbers from a spreadsheet (csv) file, or they can be extracted from images using a pre-trained model.
All functions in this package can be imported for use in your own python scripts, or run as stand-alone commands in a CLI.
In order to deal with all inputs in a standardardised fashion, csv files are parsed using
parse_datain
parse_data.py. While this is done automatically for CLI commands, if you're writing your own scripts you should parse your csv data in through this first. It essentially puts your data in a pd.DataFrame, where the first column is always a unique ID key column.
Current functionality: -
python cli.py features-
python cli.py tsne-
python cli.py umap
Running 'features' will extract the numerical features of a directory of images, and save them (with the unique IDs) to the output path.
Running 'tsne' or 'umap' will reduce such features (or features from a regular csv) into fewer dimensions, and save these (with the unique IDs) to the output path. These reduction functions will accept a
--modelargument, allowing you to specify one of several common pre-trained models to be used. I will probably add a command to specify your own custom model soon.
It is worth reiterating:
As stated in the above, it is imperetive you parse any data you want to reduce using
parse_datafirst if you're accessing these functions in your own scripts, and not using the CLI.
This project uses keras and tensorflow 2. I run these commands using tensorflow-gpu, so for the smoothest experience I recommend setting CUDA up from NVIDIA's website, I have not tested regular tensorflow (cpu).
I have some folder full of images at path
./imagesthat I want extract features from, and reduce into 2 dimensions using t-SNE.
python cli.py features "./images" "features.csv" python cli.py tsne "features.csv" "tsne-results.csv" --feature-cols all --unique-col A
from features import extract_features from tsne_reducer import tsne
features = extract_features('./images') reduced = tsne(features, write_to='./tsne_features.csv')
I have a csv file called
data.csvcontaining a unique ID column (called "name") at column D, and the important columns containing numbers are at A, C, H, and AB.
python cli.py umap "./data.csv" "./tsne.csv" --feature-cols A,C,H,AB --unique-col D
from parse_data import parse_data from umap_reducer import umap
data = parse_data('./data.csv', feature_cols=['A', 'C', 'H', 'AB'], unique_col='D') reduced = umap(data, write_to='./umap_features.csv') | https://xscode.com/zegami/image-similarity-clustering | CC-MAIN-2021-43 | refinedweb | 455 | 56.45 |
#include <lib/rcstring.h>
Go to the source code of this file.
Definition at line 24 of file net_utils.h.
The canonical_network_address class method is used to convert an address into its canonical form. Invalid addresses, or addresses already in canonical form, will be passed through unchanged.
Definition at line 26 of file canonical_network_address.cc.
Definition at line 23 of file network_class_from_number.cc.
Definition at line 23 of file network_mask_from_number.cc.
Definition at line 26 of file network_number_from_string.cc.
The string_from_network_number function is used to convert and IPv4 network number into a string in canonical form.
Definition at line 27 of file string_from_network_number.cc. | http://nis-util.sourceforge.net/doxdoc/net__utils_8h.html | CC-MAIN-2018-05 | refinedweb | 104 | 53.27 |
Monday 27 February 2006
My boss and I are looking to hire another engineer to join our six-person team.
You:!
This.
Sunday 26 February 2006
Hi.
Saturday 25 February 2006
An enterprising coder has posted a visual side-by-side comparison of monospaced fonts: 18 Monospace fonts comparison screenshot.
I'm still using Lucida Console for console windows and plain-old Courier for code. There are a lot of programmer's fonts out there, but they seem to be designed by programmers, so they feel awkward to me.
Friday 24 February 2006
The Generator Blog is simply a collection of all of those
generators out there on the web. For example,
and so on. Plenty of time wasters, though some are actually trying to be useful.
The:
Monday 20 February 2006
S.
Friday 17 February 2006
Through..
My second-grader showed me his spelling words book. He has to write sentences using the words, and underline them.
From the first entry back in September, a classic:
Where's their head?
There's their head!
They're headless.
Monday 13 February 2006
I'm doing a lot of coding these days involving XY coordinates, and there's a handful of little annoyances. They're no one's fault, I just want to vent.
First, it's natural to say "x and y", and it's natural to say "height and width", but x corresponds to width, and y to height, so I often make mistakes that switch the two:
ht, wd = foox, fooy # This is wrong.
The same goes for loops over x and y. The natural order to visit the points in a grid is the raster
order: finish a row, then go on to the next row. But that means having the first loop be over y rather than x:
for y in range(lowy, hiy):
    for x in range(lowx, hix):
        do_something(x, y)
For this last, there's a solution: create a generator that makes x,y pairs in a single loop:
def xyrange(startx, endx, starty, endy):
    """ Generate the pairs (x, y) in a rectangle. """
    for y in range(starty, endy):
        for x in range(startx, endx):
            yield x, y
Then this function is the only place that needs the inside-out y x ugliness, and you can use a single loop everywhere else:
for x, y in xyrange(lowx, hix, lowy, hiy):
    do_something(x, y)
This has the advantage that you can break out of the loop cleanly when you find a point you are looking for. It has the disadvantage that you can't do an action at the end of each row.
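To make that "clean break" advantage concrete, here's a small sketch — the `find_first` helper and its `grid` argument are illustrative names, not part of the original post. Searching a grid for a value needs only a single `return` with `xyrange`, where the nested-loop version would need a flag variable or an exception to escape both loops:

```python
def xyrange(startx, endx, starty, endy):
    """Generate the pairs (x, y) in a rectangle, in raster order."""
    for y in range(starty, endy):
        for x in range(startx, endx):
            yield x, y

def find_first(grid, target, wd, ht):
    """Return the (x, y) of the first cell equal to target, or None."""
    for x, y in xyrange(0, wd, 0, ht):
        if grid[y][x] == target:   # the inside-out indexing again: row y, column x
            return x, y
    return None
```

The same pairs can also be produced with the standard library, e.g. `itertools.product(range(starty, endy), range(startx, endx))`, though that yields (y, x) tuples, so the hand-rolled generator keeps the friendlier (x, y) order.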
Update: Richard Schwartz noticed that I originally had said,
First, it's natural to say "x and y", and it's natural to say "height and width", but x corresponds to height, and y to width, so I often make mistakes that switch the two.
which makes that sentence itself an error of the sort it describes, making it an unintentionally
self-referential sentence!
Zillow is a real-estate information site. In particular, it will
give you an estimate of how much a house is worth, given its address. It's very impressive. Not only
do they have tons of accurate information about houses, but they also have historical information,
so you can see what a house sold for, and when, with a graph of its estimated value over time.
It says my house is stucco when it is actually shingle, but I can only assume that this is an inaccuracy
in the data from the town.
It's from the same guys that started Expedia, so they have some experience disintermediating
professionals. "Disintermediating," I haven't said that in a while, how late 20th century of me!
Coolest feature: an aerial map of a neighborhood, with prices overlaid on all the houses.
Most annoying feature: they've overdone it with the whole "z" thing. Estimates are called "zestimates" (TM!),
and the URLs end in a ".z" extension.
Typographically most annoying feature: To label themselves as being in beta, they've put the word "Beta"
in their logo, and the designer tried the cute trick of using the greek letter Beta for the B.
Except they've used a German double-s instead:
The German double-s is actually a lowercase letter,
a ligature of a long s and a normal s.
The long s in turn is the letter that everyone thinks is an f in old-fashioned texts.
You know, like at the top of the Bill of Rights, where it says, "In Congrefs".
So we have confusion piled on top of confusion, and Zillow seems to be in sseta (with a German accent).
These days it seems like every new site has an API, and the possibility of connecting up with dozens
of existing sites with APIs. You're not just imagining it.
Programmable Web is tracking
152 APIs and
397 mashups organized in an overwhelming matrix.
Just by coincidence, the web site features a header with a puzzle piece, similar to
Susan's site.
Sunday 5 February 2006
Jeremy Shute wrote me to say that he liked the idea of my Cog code generator
enough to reimplement it in Perl:
PCG :: The Perl Code Generator.
Jeremy also sent along an elisp snippet to get Emacs to run Cog interactively, handy during development
of the code generators:
;; COG stuff.
(defun cog-buffer ()
  (interactive)
  (save-buffer)
  (call-process "cog.py" nil "*Messages*" nil "-r" (buffer-file-name))
  (revert-buffer nil t))
(global-set-key [f5] 'cog-buffer)
Saturday 4 February 2006
Another great trailer remix:
Brokeback to the Future.
Of course, they had three films worth of footage to draw from, including number 3 which
took place in the old west, so it really works.
Extra bonus parody:
Broke Mac Mountain.
2006, Ned Batchelder | http://nedbatchelder.com/blog/200602.html
So, I am attempting to use ADO.NET to stream binary file data stored in an image column in a SQL Compact database.
To do this, I wrote a DataReaderStream class that takes a data reader, opened for sequential access, and presents it as a stream, redirecting calls to Read(...) on the stream to IDataReader.GetBytes(...).
One "strange" aspect of IDataReader.GetBytes(...), compared to the Stream class, is that GetBytes requires the caller to increment an offset and pass it in each time it's called. It does this even though access is sequential, and you can't read "backwards" in the data reader stream.
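The offset-tracking pattern is easy to sketch outside .NET. Here's a minimal Python analogue (my illustration, with a made-up fake_get_bytes standing in for IDataReader.GetBytes): the wrapper keeps the running offset itself, the way a Stream caller would expect:

```python
import io

class ReaderStream(io.RawIOBase):
    """Expose a GetBytes-style sequential API as a read-only stream.

    get_bytes(field_offset, buf, count) is a stand-in for
    IDataReader.GetBytes: it copies up to count bytes starting at
    field_offset into buf and returns the number copied.
    """
    def __init__(self, get_bytes):
        self._get_bytes = get_bytes
        self._field_offset = 0   # the running offset the caller must supply

    def readable(self):
        return True

    def readinto(self, buf):
        n = self._get_bytes(self._field_offset, buf, len(buf))
        self._field_offset += n  # keep our copy of the offset in sync
        return n

# A fake "blob column" to demonstrate the wrapper.
blob = b"hello world"

def fake_get_bytes(field_offset, buf, count):
    chunk = blob[field_offset:field_offset + count]
    buf[:len(chunk)] = chunk
    return len(chunk)

stream = io.BufferedReader(ReaderStream(fake_get_bytes))
print(stream.read())   # b'hello world'
```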
The SqlCeDataReader implementation of IDataReader enforces this by incrementing an internal counter that tracks the total number of bytes it has returned. If you pass in a number either less than or greater than that count, the method will throw an InvalidOperationException.
The problem, however, is that there's a bug in the SqlCeDataReader implementation that causes it to set the internal counter to the wrong value. This results in subsequent calls to Read on my stream throwing exceptions when they shouldn't.
I found some information about the bug on this MSDN thread.
I was able to come up with a disgusting, horribly hacky workaround that basically uses reflection to update the field in the class to the correct value.
The code looks like this:
public override int Read(byte[] buffer, int offset, int count)
{
    m_length = m_length ?? m_dr.GetBytes(0, 0, null, offset, count);
    if (m_fieldOffSet < m_length)
    {
        var bytesRead = m_dr.GetBytes(0, m_fieldOffSet, buffer, offset, count);
        m_fieldOffSet += bytesRead;
        if (m_dr is SqlCeDataReader)
        {
            // BEGIN HACK
            // This is a horrible HACK: use reflection to reset the reader's
            // internal byte counter to the correct value.
            m_field = m_field ?? typeof(SqlCeDataReader).GetField(
                "sequentialUnitsRead",
                BindingFlags.NonPublic | BindingFlags.Instance);
            var length = (long)(m_field.GetValue(m_dr));
            if (length != m_fieldOffSet)
            {
                m_field.SetValue(m_dr, m_fieldOffSet);
            }
            // END HACK
        }
        return (int)bytesRead;
    }
    else
    {
        return 0;
    }
}
For obvious reasons, I would prefer not to use this.
However, I don't want to buffer the entire contents of the blob in memory either.
Does anyone know of a way I can get streaming data out of a SQL Compact database without having to resort to such horrible code?
I contacted Microsoft (through the SQL Compact Blog) and they confirmed the bug, and suggested I use OLEDB as a workaround. So, I'll try that and see if it works for me.
In the end, I decided to fix the problem by simply not storing blobs in the database to begin with.
This removes the issue (I can stream data from a file), and also avoids some issues I would have run into with SQL Compact's 4 GB size limit.
remctl man page
remctl, remctl_result_free — Simple remctl call to a remote server
Synopsis
#include <remctl.h>
struct remctl_result *
remctl(const char *host, unsigned short port,
const char *principal, const char **command);
void remctl_result_free(struct remctl_result *result);
Description

If the client needs to control which ticket cache is used without changing the environment, use the full client API along with remctl_set_ccache(3).
Return Value
Compatibility
This interface has been provided by the remctl client library since its initial release in version 2.0.
The default port was changed to the IANA-registered port of 4373 in version 2.11.
Support for IPv6 was added in version 2.4.
Caveats
Notes
The remctl port number, 4373, was derived by tracing the diagonals of a QWERTY keyboard up from the letters
"remc" to the number row.
Author
Russ Allbery <eagle@eyrie.org>
Copyright 2007, 2008,
See Also

remctl_new(3), remctl_open(3), remctl_command(3), remctl_commandv(3), remctl_output(3), remctl_close(3)
The current version of the remctl library and complete details of the remctl protocol are available from its web page at <>. | https://www.mankier.com/3/remctl | CC-MAIN-2017-39 | refinedweb | 175 | 53.51 |
the signup form.
We will discuss the same scenario here and will guide you through the step by step process of using AJAX with Django.
So as per the scenario discussed above, first we need to create a form with username field along with other fields.
Use this code inside your form for login/username.
<label for="login" class="col-md-2 control-label">Login</label>
<div class="col-md-4">
  <input type="text" class="form-control input-sm" name="login" id="login"
         placeholder="Login or Username" required="True"
         onkeyup="check_login(this);return false;" data-
  {% csrf_token %}
</div>
<div class="col-md-6 col-sm-6" style="color:red;display:none;margin-top: 4px;" id="login_not">
  <span class="glyphicon glyphicon-remove"></span> Username already taken.
</div>
<div class="col-md-6 col-sm-6" style="color: green;display: none;margin-top: 4px;" id="login_ok">
  <span class="glyphicon glyphicon-ok"></span> Username available.
</div>
We have created an input field of type text, which is a required field in the form. On the keyup event on this input field, we will call the check_login function.
We are passing the input field as a parameter to this JavaScript function. So whenever the text inside this input field changes, the check_login function is triggered.
You can see an additional attribute, data-url, on the input field; it holds the URL the AJAX request will be sent to. A cleaner way of passing this URL to the JavaScript is beyond the scope of this article, and hence we are going with this easy method.
function check_login(element) {
    $("#login_ok").hide();
    $("#login_not").hide();
    login = $(element).val();
    if (login == "") {
        return;
    }
    $.ajax({
        url : $(element).attr("data-url"),
        data : {
            "csrfmiddlewaretoken" : $(element).siblings("input[name='csrfmiddlewaretoken']").val(),
            "login" : login
        },
        method : "POST",
        dataType : "json",
        success : function (returned_data) {
            if (returned_data.is_success) {
                $("#login_ok").show();
            } else {
                $("#login_not").show();
            }
        }
    });
}
On any text change in the input field, the check_login method is triggered. Let's see, line by line, what is happening inside this function.
First, we hide both of the divs where success/failure messages are displayed. By default, these divs are hidden when the page first loads. But as soon as you enter something in the input field, one or the other div is displayed with a message, based on the result received from the AJAX request.
Then we get the value of the input field, and if it is null/blank we do nothing and return from here. Then we make the AJAX call with parameters. The URL is picked from the data-url attribute of the input field.
In data, we are sending the csrf_token and the login/username. We define the method as POST and then define the actions to be taken on a success or error response. If the is_success variable in returned_data is set, we show the div with the success message; otherwise we show the div with the error message.
In the view function, we check for the availability of the username and return an appropriate JsonResponse.
def check_login(request):
    if request.method == "GET":
        raise Http404("URL doesn't exist")
    else:
        response_data = {}
        login = request.POST["login"]
        user = None
        try:
            try:
                user = UserModel.objects.get(login=login)
            except ObjectDoesNotExist:
                pass
            except Exception as e:
                raise e
            if not user:
                response_data["is_success"] = True
            else:
                response_data["is_success"] = False
        except Exception:
            response_data["is_success"] = False
            response_data["msg"] = "Some error occurred. Please let Admin know."
        return JsonResponse(response_data)
Import the required modules and models. The view above needs at least:

from django.http import JsonResponse, Http404
from django.core.exceptions import ObjectDoesNotExist

(UserModel here is the project's own user model.)
This note illustrates the effects on posterior inference of pooling data (aka sharing strength) across items for repeated binary trial data. It provides Stan models and R code to fit and check predictive models for three situations: (a) complete pooling, which assumes each item is the same, (b) no pooling, which assumes the items are unrelated, and (c) partial pooling, where the similarity among the items is estimated. We consider two hierarchical models to estimate the partial pooling, one with a beta prior on chance of success and another with a normal prior on the log odds of success. The note explains with working examples how to (i) fit models in RStan and plot the results in R using ggplot2, (ii) estimate event probabilities, (iii) evaluate posterior predictive densities to evaluate model predictions on held-out data, (iv) rank items by chance of success, (v) perform multiple comparisons in several settings, (vi) replicate new data for posterior p-values, and (vii) perform graphical posterior predictive checks.
Suppose that for each of \(N\) items \(n \in 1{:}N\), we observe \(y_n\) successes out of \(K_n\) trials.
We use the small baseball data set of Efron and Morris (1975) as a running example, and in the same format provide the rat control data of Tarone (1982), the surgical mortality data of Spiegelhalter et al. (1996) and the extended baseball data set of Carpenter (2009).
As a running example, we include the data from Table 1 of (Efron and Morris 1975) as
efron-morris-75-data.tsv (it was downloaded 24 Dec 2015 from here). It is drawn from the 1970 Major League Baseball season from both leagues.
df <- read.csv("efron-morris-75-data.tsv", sep="\t"); df <- with(df, data.frame(FirstName, LastName, Hits, At.Bats, RemainingAt.Bats, RemainingHits = SeasonHits - Hits)); print(df);
FirstName LastName Hits At.Bats RemainingAt.Bats RemainingHits 1 Roberto Clemente 18 45 367 127 2 Frank Robinson 17 45 426 127 3 Frank Howard 16 45 521 144 4 Jay Johnstone 15 45 275 61 5 Ken Berry 14 45 418 114 6 Jim Spencer 14 45 466 126 7 Don Kessinger 13 45 586 155 8 Luis Alvarado 12 45 138 29 9 Ron Santo 11 45 510 137 10 Ron Swaboda 11 45 200 46 11 Rico Petrocelli 10 45 538 142 12 Ellie Rodriguez 10 45 186 42 13 George Scott 10 45 435 132 14 Del Unser 10 45 277 73 15 Billy Williams 10 45 591 195 16 Bert Campaneris 9 45 558 159 17 Thurman Munson 8 45 408 129 18 Max Alvis 7 45 70 14
We will only need a few columns of the data; we will be using the remaining hits and at bats to evaluate the predictive inferences for the various models.
N <- dim(df)[1] K <- df$At.Bats y <- df$Hits K_new <- df$RemainingAt.Bats; y_new <- df$RemainingHits;
The data separates the outcomes of the initial 45 at-bats from those of the rest of the season. After running this code,
N is the number of items (players). Then for each item n, K[n] is the number of initial trials (at-bats) and y[n] is the number of initial successes (hits); K_new[n] and y_new[n] are the trials and successes for the remainder of the season.
Although we consider many models, the data is coded as follows for all of them.
data { int<lower=0> N; // items int<lower=0> K[N]; // initial trials int<lower=0> y[N]; // initial successes int<lower=0> K_new[N]; // new trials int<lower=0> y_new[N]; // new successes }
As usual, we follow the convention of naming our program variables after the variables we use when we write the model out mathematically in a paper. We also choose capital letters for integer constants and y for the main observed variable(s).
With complete pooling, each item is assumed to have the same chance of success. With no pooling, each item is assumed to have a completely unrelated chance of success. With partial pooling, each item is assumed to have a different chance of success, but the data for all of the observed items informs the estimates for each item.
Partial pooling is typically accomplished through hierarchical models. Hierarchical models directly model the population of items. The population mean and variance is important, but the two hierarchical models we consider (chance of success vs. log odds of success) provide rather differently shaped posteriors.
From a population model perspective, no pooling corresponds to infinite population variance, whereas complete pooling corresponds to zero population variance.
In the following sections, all three types of pooling models will be fit for the baseball data.
The complete pooling model assumes a single parameter \(\phi\) representing the chance of success for all items. It is necessary in Stan to declare parameters with constraints corresponding to their support in the model. Because \(\phi\) will be used as a binomial parameter, we must have \(\phi \in [0,1]\). The variable
phi must therefore be declared in Stan with the following lower- and upper-bound constraints.
parameters { real<lower=0, upper=1> phi; // chance of success (pooled) }
The consequences for leaving the constraint off is that the program may fail during random initialization or during an iteration because Stan will generate initial values for
phi outside of \([0,1]\). Such a specification may appear to work if there are only a small number of such variables because Stan tries multiple random initial values by default for MCMC; but even so, results may be biased due to numerical arithmetic issues.
Assuming each player’s at-bats are independent Bernoulli trials, the sampling distribution for each player’s number of hits \(y_n\) is modeled as
\[ p(y_n \, | \, \phi) \ = \ \mathsf{Binomial}(y_n \, | \, K_n, \phi). \]
When viewed as a function of \(\phi\) for fixed \(y_n\), this is called the likelihood function.
Assuming each player is independent leads to the complete data likelihood
\[ p(y \, | \, \phi) = \prod_{n=1}^N \mathsf{Binomial}(y_n \, | \, K_n, \phi). \]
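As a quick sanity check (a sketch added here, not part of the original analysis), maximizing this pooled likelihood numerically recovers the closed-form MLE, total hits over total at-bats, \(215/810 \approx 0.265\), which is close to the posterior mean of 0.27 reported for this model:

```python
import math

hits = [18, 17, 16, 15, 14, 14, 13, 12, 11, 11, 10, 10, 10, 10, 10, 9, 8, 7]
K = 45  # initial at-bats per player
const = sum(math.log(math.comb(K, y)) for y in hits)  # phi-independent term

def log_lik(phi):
    """Complete-pooling binomial log likelihood."""
    return const + sum(y * math.log(phi) + (K - y) * math.log(1 - phi)
                       for y in hits)

# Grid search for the maximizing phi.
grid = [i / 10000 for i in range(1, 10000)]
mle = max(grid, key=log_lik)
print(mle)                          # 0.2654
print(sum(hits) / (len(hits) * K))  # 0.2654...
```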
We will assume a uniform prior on \(\phi\),
\[ p(\phi) \ = \ \mathsf{Uniform}(\phi \, | \, 0, 1) \ = \ 1. \]
Whether a prior is uniform or not depends on the scale with which the parameter is expressed. Here, the variable \(\phi\) is a chance of success in \([0, 1]\). If we were to consider the log-odds of success, \(\log \frac{\phi}{1 - \phi}\), a uniform prior on log-odds is not the same as a uniform prior on chance of success (they are off by the Jacobian of the transform). A uniform prior on chance of success translates to a unit logistic prior on the log odds (the definition of the unit logistic density can be derived by calculating the Jacobian of the transform).
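A short simulation, added here for illustration, of that change-of-variables point: if \(\phi\) is uniform on \((0,1)\), then \(\mathrm{logit}(\phi)\) follows a standard logistic distribution, so the empirical CDF of the log odds should match \(1/(1+e^{-x})\):

```python
import math
import random

random.seed(7)
N = 200_000
# Draws strictly inside (0, 1), so the logit is always defined.
unif = [(random.getrandbits(53) + 0.5) / 2.0**53 for _ in range(N)]
logits = [math.log(p / (1 - p)) for p in unif]

def logistic_cdf(x):
    """CDF of the standard logistic distribution."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, 0.0, 1.5):
    empirical = sum(l <= x for l in logits) / N
    print(x, round(empirical, 3), round(logistic_cdf(x), 3))
```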
By default, Stan places a uniform prior over the values meeting the constraints on a parameter. Because
phi is constrained to fall in \([0,1]\), there is no need to explicitly specify the uniform prior on \(\phi\).
The likelihood is expressed as a vectorized sampling statement in Stan as
model { ... y ~ binomial(K, phi); }
Sampling statements in Stan are syntactic shorthand for incrementing the underlying log density accumulator. Thus the above would produce the same draws as
increment_log_prob(binomial_log(y, K, phi));
The only difference is that the sampling statement drops any constants that don’t depend on parameters or functions of parameters.
The vectorized sampling statement above is equivalent to but more efficient than the following explicit loop.
for (n in 1:N) y[n] ~ binomial(K[n], phi);
In general, Stan will match dimensions, repeating scalars as necessary; any vector or array arguments must be the same size. When used as a function, the result is the sum of the log densities. The vectorized form can be up to an order of magnitude or more faster in some cases, depending on how many repeated calculations can be avoided.
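Both points can be mimicked in plain Python (a sketch added for illustration; it ignores the efficiency details of Stan's implementation): the vectorized target is just the sum of per-observation log densities, and dropping the \(\phi\)-independent binomial coefficients shifts the target by a constant, leaving the posterior unchanged:

```python
import math

y = [18, 17, 16]
K = [45, 45, 45]

def full_lp(phi):
    """Sum of full binomial log densities (what binomial_log returns)."""
    return sum(math.log(math.comb(k, yn)) + yn * math.log(phi)
               + (k - yn) * math.log(1 - phi) for yn, k in zip(y, K))

def dropped_lp(phi):
    """Same target with the phi-independent log binomial coefficients
    dropped, as a sampling statement effectively does."""
    return sum(yn * math.log(phi) + (k - yn) * math.log(1 - phi)
               for yn, k in zip(y, K))

# The two targets differ only by a constant, so they define the same posterior.
diffs = {round(full_lp(p) - dropped_lp(p), 10) for p in (0.1, 0.27, 0.5)}
print(len(diffs))   # 1
```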
The actual Stan program in
pool.stan has many more derived quantities that will be used in the rest of this note; see the appendix for the full code of all of the models discussed.
We start by loading the RStan package.
library(rstan);
Loading required package: ggplot2 rstan (Version 2.9.0, packaged: 2016-01-05 16:17:47 UTC, GitRev: 05c3d0058b6a) For execution on a local, multicore CPU with excess RAM we recommend calling rstan_options(auto_write = TRUE) options(mc.cores = parallel::detectCores())
The model can be fit as follows, with
M being the total number of draws in the complete posterior sample (each chain is by default split into half warmup and half sampling iterations and 4 chains are being run).
M <- 10000; fit_pool <- stan("pool.stan", data=c("N", "K", "y", "K_new", "y_new"), iter=(M / 2), chains=4);
SAMPLING FOR MODEL 'pool' NOW (CHAIN 1). Chain 1, Iteration: 1 / 5000 [ 0%] (Warmup) Chain 1, Iteration: 500 / 5000 [ 10%] (Warmup) Chain 1, Iteration: 1000 / 5000 [ 20%] (Warmup) Chain 1, Iteration: 1500 / 5000 [ 30%] (Warmup) Chain 1, Iteration: 2000 / 5000 [ 40%] (Warmup) Chain 1, Iteration: 2500 / 5000 [ 50%] (Warmup) Chain 1, Iteration: 2501 / 5000 [ 50%] (Sampling) Chain 1, Iteration: 3000 / 5000 [ 60%] (Sampling) Chain 1, Iteration: 3500 / 5000 [ 70%] (Sampling) Chain 1, Iteration: 4000 / 5000 [ 80%] (Sampling) Chain 1, Iteration: 4500 / 5000 [ 90%] (Sampling) Chain 1, Iteration: 5000 / 5000 [100%] (Sampling)# # Elapsed Time: 0.070874 seconds (Warm-up) # 0.071481 seconds (Sampling) # 0.142355 seconds (Total) # SAMPLING FOR MODEL 'pool' NOW (CHAIN 2). Chain 2, Iteration: 1 / 5000 [ 0%] (Warmup) Chain 2, Iteration: 500 / 5000 [ 10%] (Warmup) Chain 2, Iteration: 1000 / 5000 [ 20%] (Warmup) Chain 2, Iteration: 1500 / 5000 [ 30%] (Warmup) Chain 2, Iteration: 2000 / 5000 [ 40%] (Warmup) Chain 2, Iteration: 2500 / 5000 [ 50%] (Warmup) Chain 2, Iteration: 2501 / 5000 [ 50%] (Sampling) Chain 2, Iteration: 3000 / 5000 [ 60%] (Sampling) Chain 2, Iteration: 3500 / 5000 [ 70%] (Sampling) Chain 2, Iteration: 4000 / 5000 [ 80%] (Sampling) Chain 2, Iteration: 4500 / 5000 [ 90%] (Sampling) Chain 2, Iteration: 5000 / 5000 [100%] (Sampling)# # Elapsed Time: 0.074807 seconds (Warm-up) # 0.078645 seconds (Sampling) # 0.153452 seconds (Total) # SAMPLING FOR MODEL 'pool' NOW (CHAIN 3). 
Chain 3, Iteration: 1 / 5000 [ 0%] (Warmup) Chain 3, Iteration: 500 / 5000 [ 10%] (Warmup) Chain 3, Iteration: 1000 / 5000 [ 20%] (Warmup) Chain 3, Iteration: 1500 / 5000 [ 30%] (Warmup) Chain 3, Iteration: 2000 / 5000 [ 40%] (Warmup) Chain 3, Iteration: 2500 / 5000 [ 50%] (Warmup) Chain 3, Iteration: 2501 / 5000 [ 50%] (Sampling) Chain 3, Iteration: 3000 / 5000 [ 60%] (Sampling) Chain 3, Iteration: 3500 / 5000 [ 70%] (Sampling) Chain 3, Iteration: 4000 / 5000 [ 80%] (Sampling) Chain 3, Iteration: 4500 / 5000 [ 90%] (Sampling) Chain 3, Iteration: 5000 / 5000 [100%] (Sampling)# # Elapsed Time: 0.073841 seconds (Warm-up) # 0.070549 seconds (Sampling) # 0.14439 seconds (Total) # SAMPLING FOR MODEL 'pool' NOW (CHAIN 4). Chain 4, Iteration: 1 / 5000 [ 0%] (Warmup) Chain 4, Iteration: 500 / 5000 [ 10%] (Warmup) Chain 4, Iteration: 1000 / 5000 [ 20%] (Warmup) Chain 4, Iteration: 1500 / 5000 [ 30%] (Warmup) Chain 4, Iteration: 2000 / 5000 [ 40%] (Warmup))# # Elapsed Time: 0.073971 seconds (Warm-up) # 0.075928 seconds (Sampling) # 0.149899 seconds (Total) #
ss_pool <- extract(fit_pool);
Here, we read the data out of the environment by name; normally we would prefer to encapsulate the data in a list to avoid naming conflicts in the top-level namespace. We showed the default output for the
stan() function call here, but will suppress it in subsequent calls.
The posterior sample for
phi can be summarized as follows.
print(fit_pool, c("phi"), probs=c(0.1, 0.5, 0.9));
Inference for Stan model: pool. 4 chains, each with iter=5000; warmup=2500; thin=1; post-warmup draws per chain=2500, total post-warmup draws=10000. mean se_mean sd 10% 50% 90% n_eff Rhat phi 0.27 0 0.02 0.25 0.27 0.29 3237 1 Samples were drawn using NUTS(diag_e) at Sat Jan 30 20:56:04 2016. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1).
The summary statistics begin with the posterior mean, the MCMC standard error on the posterior mean, and the posterior standard deviation. Then there are 0.1, 0.5, and 0.9 quantiles, which provide the posterior median and boundaries of the central 80% interval. The last two columns are for the effective sample size (MCMC standard error is the posterior standard deviation divided by the square root of the effective sample size) and the \(\hat{R}\) convergence diagnostic (its value will be 1 if the chains have all converged to the same posterior mean and variance; see the Stan Manual (Stan Development Team 2015) or (Gelman et al. 2013). The \(\hat{R}\) value here is consistent with convergence (i.e., near 1) and the effective sample size is good (roughly half the number of posterior draws; by default Stan uses as many iterations to warmup as it does for drawing the sample).
The result is a posterior mean for \(\phi\) of \(0.27\) with an 80% central posterior interval of \((0.25, 0.29)\). This estimate reflects the complete pooling assumption that all players have the same chance of success.
A model with no pooling involves a separate chance-of-success parameter \(\theta_n \in [0,1]\) for each item \(n\).
The prior on each \(\theta_n\) is uniform,
\[ p(\theta_n) = \mathsf{Uniform}(\theta_n \, | \, 0,1), \]
and the \(\theta_n\) are assumed to be independent,
\[ p(\theta) = \prod_{n=1}^N \mathsf{Uniform}(\theta_n \, | \, 0,1). \]
The likelihood then uses the chance of success \(\theta_n\) for item \(n\) in modeling the number of successes \(y_n\) as
\[ p(y_n \, | \, \theta_n) = \mathsf{Binomial}(y_n \, | \, K_n, \theta_n). \]
Assuming the \(y_n\) are independent (conditional on \(\theta\)), this leads to the total data likelihood
\[ p(y \, | \, \theta) = \prod_{n=1}^N \mathsf{Binomial}(y_n \, | \, K_n, \theta_n). \]
The Stan program for no pooling only differs in declaring the ability parameters as an \(N\)-vector rather than a scalar.
parameters { vector<lower=0, upper=1>[N] theta; // chance of success }
The constraint applies to each
theta[n] and implies an independent uniform prior on each.
The model block defines the likelihood as binomial, using the efficient vectorized form
model { y ~ binomial(K, theta); // likelihood }
This is equivalent to the less efficient looped form
for (n in 1:N) y[n] ~ binomial(K[n], theta[n]);
The full Stan program with all of the extra generated quantities, is in
no-pool.stan, which is shown in the appendix.
This model can be fit the same way as the last model.
fit_no_pool <- stan("no-pool.stan", data=c("N", "K", "y", "K_new", "y_new"), iter=(M / 2), chains=4); ss_no_pool <- extract(fit_no_pool);
Results are displayed the same way.
print(fit_no_pool, c("theta"), probs=c(0.1, 0.5, 0.9));
Inference for Stan model: no-pool. 4 chains, each with iter=5000; warmup=2500; thin=1; post-warmup draws per chain=2500, total post-warmup draws=10000. mean se_mean sd 10% 50% 90% n_eff Rhat theta[1] 0.40 0 0.07 0.31 0.40 0.50 10000 1 theta[2] 0.38 0 0.07 0.29 0.38 0.47 10000 1 theta[3] 0.36 0 0.07 0.27 0.36 0.45 10000 1 theta[4] 0.34 0 0.07 0.26 0.34 0.43 10000 1 theta[5] 0.32 0 0.07 0.23 0.32 0.41 10000 1 theta[6] 0.32 0 0.07 0.23 0.32 0.41 10000 1 theta[7] 0.30 0 0.07 0.21 0.30 0.39 10000 1 theta[8] 0.28 0 0.07 0.19 0.27 0.36 10000 1 theta[9] 0.26 0 0.06 0.18 0.25 0.34 10000 1 theta[10] 0.26 0 0.06 0.18 0.25 0.34 10000 1 theta[11] 0.23 0 0.06 0.16 0.23 0.31 10000 1 theta[12] 0.23 0 0.06 0.16 0.23 0.31 10000 1 theta[13] 0.23 0 0.06 0.16 0.23 0.31 10000 1 theta[14] 0.23 0 0.06 0.16 0.23 0.32 10000 1 theta[15] 0.23 0 0.06 0.16 0.23 0.32 10000 1 theta[16] 0.21 0 0.06 0.14 0.21 0.29 10000 1 theta[17] 0.19 0 0.06 0.12 0.19 0.27 10000 1 theta[18] 0.17 0 0.05 0.10 0.17 0.24 10000 1 Samples were drawn using NUTS(diag_e) at Sat Jan 30 20:56:34 2016. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1).
Now there is a separate line for each item’s estimated \(\theta_n\). The posterior mode is the maximum likelihood estimate, but that requires running Stan’s optimizer to find; the posterior mean and median will be reasonably close to the posterior mode despite the skewness (the posterior can be shown analytically to be a Beta distribution).
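To make the analytic claim concrete (a check added here, not part of the original note): with the uniform prior \(\mathsf{Beta}(1,1)\) and \(y\) successes in \(K\) trials, the posterior is \(\mathsf{Beta}(y+1, K-y+1)\), whose means reproduce the first rows of the table above:

```python
hits = [18, 17, 16, 15, 14, 14, 13, 12, 11, 11, 10, 10, 10, 10, 10, 9, 8, 7]
K = 45

for y in hits[:3]:
    # Conjugate update: uniform prior Beta(1, 1) plus binomial data.
    alpha_post, beta_post = y + 1, K - y + 1
    post_mean = alpha_post / (alpha_post + beta_post)
    print(y, round(post_mean, 2))
# 18 0.4
# 17 0.38
# 16 0.36
```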
Each 80% interval is much wider than the estimated interval for the population in the complete pooling model; this is to be expected—there are only 45 data items for each parameter here as opposed to 810 in the complete pooling case. If the items each had different numbers of trials, the intervals would also vary based on size.
As the estimated chance of success goes up toward 0.5, the 80% intervals gets wider. This is to be expected for chance of success parameters, because the standard deviation of a random variable distributed as \(\mathsf{Binomial}(K, \theta)\) is \(\sqrt{\frac{\theta \, (1 - \theta)}{K}}\).
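Plugging numbers into that formula (an added illustration) for \(K = 45\) shows the standard deviation growing as \(\theta\) approaches 0.5:

```python
import math

K = 45
for theta in (0.17, 0.27, 0.40, 0.50):
    # Standard deviation of the proportion of successes in K trials.
    sd = math.sqrt(theta * (1 - theta) / K)
    print(theta, round(sd, 3))
# 0.17 0.056
# 0.27 0.066
# 0.4 0.073
# 0.5 0.075
```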
The no pooling model provides better MCMC mixing than the complete pooling model, as indicated by the effective sample size and convergence diagnostics \(\hat{R}\); although not in and of itself meaningful, it is often the case that badly misspecified models prove difficult computationally (a result Andrew Gelman has dubbed "The Folk Theorem"). On the other hand, the no pooling estimates themselves are implausibly extreme (a batting average below 20% is too low for all but a few rare defensive specialists).
Complete pooling provides estimated abilities that are too narrowly distributed for the items and removes any chance of modeling population variation. Estimating each chance of success separately without any pooling provides estimated abilities that are too broadly distributed for the items. Partial pooling lies in between: each item gets its own chance-of-success parameter \(\theta_n\), but the \(\theta_n\) are drawn from a population distribution estimated from the data. In this case, we will assume a beta distribution as the prior, as it is defined over values in \([0, 1]\),
\[ p(\theta_n \, | \, \alpha, \beta) \ = \ \mathsf{Beta}(\theta_n \, | \, \alpha, \beta), \]
where \(\alpha, \beta > 0\) are the parameters of the prior. The beta distribution is the conjugate prior for the binomial, meaning that the posterior is known to be a beta distribution. This also allows us to interpret the prior’s parameters as prior data, with \(\alpha - 1\) being the prior number of successes and \(\beta - 1\) being the prior number of failures, and \(\alpha = \beta = 1\) corresponding to no prior observations and thus a uniform distribution. Each \(\theta_n\) will be modeled as conditionally independent given the prior parameters, so that the complete prior is
\[ p(\theta \, | \, \alpha, \beta) = \prod_{n=1}^N \mathsf{Beta}(\theta_n \, | \, \alpha, \beta). \]
The parameters \(\alpha\) and \(\beta\) are themselves given priors (sometimes called hyperpriors). Rather than parameterize \(\alpha\) and \(\beta\) directly, we will instead put priors on \(\phi \in [0, 1]\) and \(\kappa > 0\), and then define
\[ \alpha = \kappa \, \phi \]
and
\[ \beta = \kappa \, (1 - \phi). \]
This reparameterization is convenient, because
\(\phi = \frac{\alpha}{\alpha + \beta}\) is the mean of a variable distributed as \(\mathsf{Beta}(\alpha, \beta)\), and
\(\kappa = \alpha + \beta\) is the prior count plus two (roughly inversely related to the variance).
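A quick simulated check of this parameterization (my addition, in Python rather than the note's R): draws from \(\mathsf{Beta}(\kappa\phi, \kappa(1-\phi))\) have mean near \(\phi\), and larger \(\kappa\) concentrates them more tightly:

```python
import random
import statistics

random.seed(11)

def beta_draws(phi, kappa, n=50_000):
    """Sample from Beta(alpha, beta) with alpha = kappa*phi, beta = kappa*(1-phi)."""
    a, b = kappa * phi, kappa * (1 - phi)
    return [random.betavariate(a, b) for _ in range(n)]

low = beta_draws(0.27, kappa=5)     # diffuse population distribution
high = beta_draws(0.27, kappa=100)  # concentrated population distribution
print(round(statistics.mean(low), 2), round(statistics.mean(high), 2))  # both ~0.27
print(statistics.stdev(high) < statistics.stdev(low))                   # True
```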
We will follow Gelman et al. (2013, Chapter 5) in providing a prior that factors into a uniform prior on \(\phi\),
\[ p(\phi) = \mathsf{Uniform}(\phi \, | \, 0, 1), \]
and a Pareto prior on \(\kappa\),
\[ p(\kappa) = \mathsf{Pareto}(\kappa \, | \, 1, 1.5) \propto \kappa^{-2.5}. \]
with the restriction \(\kappa > 1\). In general, for functions \(f\) and \(g\), we write \(f(x) \propto g(x)\) if there is some constant \(c\) such that \(f(x) = c \, g(x)\). The first argument to the Pareto distribution is a bound \(\epsilon > 0\), which in turn requires the outcome \(\kappa > \epsilon\); this is required so that the distribution can be normalized to integrate to 1 over its support. The value \(\epsilon = 1\) is a conservative choice for this problem as we expect in the posterior, \(\kappa\) will be much greater than \(1\). The constraint \(\kappa > 1\) must therefore be included in the Stan parameter declaration, because Stan programs require support on the parameter values that satisfy their declared constraints.
The Stan code follows the definitions, with parameters declared with appropriate constraints as follows.
parameters { real<lower=0, upper=1> phi; // population chance of success real<lower=1> kappa; // population concentration vector<lower=0, upper=1>[N] theta; // chance of success }
The lower-bound on \(\kappa\) matches the first argument to the Pareto distribution in the model block.
model { kappa ~ pareto(1, 1.5); // hyperprior theta ~ beta(phi * kappa, (1 - phi) * kappa); // prior y ~ binomial(K, theta); // likelihood }
The values of \(\alpha = \phi \, \kappa\) and \(\beta = (1 - \phi) \, \kappa\) are computed in the arguments to the vectorized Beta sampling statement. The prior on \(\phi\) is implicitly uniform because it is explicitly constrained to lie in \([0, 1]\). The prior on \(\kappa\) is coded following the model definition.
The full model with all generated quantities can be coded in Stan as in the file
hier.stan; it is displayed in the appendix. It is run as usual.
fit_hier <- stan("hier.stan", data=c("N", "K", "y", "K_new", "y_new"), iter=(M / 2), chains=4, seed=1234);
Warning: There were 3 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help.
Warning: Examine the pairs() plot to diagnose sampling problems
ss_hier <- extract(fit_hier);
Even though we set
results=‘hide’ in the knitr R call here, the error messages are still printed. We set the pseudorandom number generator seed here to 1234 so that we could discuss what the output looks like.
In this case, there’s an error report for a number of what Stan calls “divergent transitions” after warmup. A divergent transition arises when there is an arithmetic issue in evaluating a numerical expression; this is almost always an overflow or underflow in well-specified models, but may simply be arguments out of bounds if the proper constraints have not been enforced (for instance, taking an unconstrained parameter as the chance of success for a Bernoulli distribution, which requires its chance of success to be in \([0,1]\)).
Whenever divergent transitions show up, it introduces a bias in the posterior draws away from the region where the divergence happens. Therefore, we should try to remove divergent transitions before trusting our posterior sample.
The underlying root cause of most divergent transitions is a step size that is too large. The underlying Hamiltonian Monte Carlo algorithm is attempting to follow the Hamiltonian using discrete time steps corresponding to the step size of the algorithm. When that step size is too large relative to posterior curvature, iteratively taking steps in the gradient times the step size provides a poor approximation to the posterior curvature. The problem with a hierarchical model is that the curvature in the posterior varies based on position; when the hierarchical variance is low, there is high curvature in the lower-level parameters around the mean. During warmup, Stan globally adapts its step size to a target acceptance rate, which can lead to step sizes that are too large for highly curved regions of the posterior. To mitigate this problem, we need to either reduce the step size or reparameterize. We first consider reducing step size, and then in the next section consider the superior alternative of reparameterization.
To reduce the step size of the algorithm, we want to lower the initial step size and increase the target acceptance rate. The former keeps the step size low to start; the latter makes sure the adapted step size is lower. So we’ll run this again with the same seed, this time lowering the step size (stepsize) and increasing the target acceptance rate (adapt_delta).
fit_hier <- stan("hier.stan", data=c("N", "K", "y", "K_new", "y_new"),
                 iter=(M / 2), chains=4, seed=1234,
                 control=list(stepsize=0.01, adapt_delta=0.99));
ss_hier <- extract(fit_hier);
Now it runs without divergent transitions.
Summary statistics for the posterior are printed as before.
print(fit_hier, c("theta", "kappa", "phi"), probs=c(0.1, 0.5, 0.9));
Inference for Stan model: hier.
4 chains, each with iter=5000; warmup=2500; thin=1;
post-warmup draws per chain=2500, total post-warmup draws=10000.

            mean se_mean     sd    10%   50%    90% n_eff Rhat
theta[1]    0.32    0.00   0.05   0.26  0.32   0.39  3809 1.00
theta[2]    0.31    0.00   0.05   0.25  0.31   0.38  3865 1.00
theta[3]    0.30    0.00   0.05   0.25  0.30   0.37  5130 1.00
theta[4]    0.29    0.00   0.05   0.24  0.29   0.36 10000 1.00
theta[5]    0.29    0.00   0.05   0.23  0.28   0.35  7610 1.00
theta[6]    0.29    0.00   0.05   0.23  0.28   0.34 10000 1.00
theta[7]    0.28    0.00   0.04   0.22  0.27   0.33 10000 1.00
theta[8]    0.27    0.00   0.04   0.21  0.27   0.32 10000 1.00
theta[9]    0.26    0.00   0.04   0.20  0.26   0.31 10000 1.00
theta[10]   0.26    0.00   0.04   0.21  0.26   0.31 10000 1.00
theta[11]   0.25    0.00   0.04   0.19  0.25   0.30 10000 1.00
theta[12]   0.25    0.00   0.04   0.19  0.25   0.30 10000 1.00
theta[13]   0.25    0.00   0.04   0.20  0.25   0.30 10000 1.00
theta[14]   0.25    0.00   0.04   0.19  0.25   0.30 10000 1.00
theta[15]   0.25    0.00   0.04   0.20  0.25   0.30 10000 1.00
theta[16]   0.24    0.00   0.04   0.18  0.24   0.29  6275 1.00
theta[17]   0.23    0.00   0.04   0.17  0.23   0.28  5286 1.00
theta[18]   0.22    0.00   0.04   0.17  0.22   0.28  4323 1.00
kappa     110.35    7.72 182.42  25.39 64.27 210.76   558 1.01
phi         0.27    0.00   0.02   0.24  0.27   0.29  4543 1.00

Samples were drawn using NUTS(diag_e) at Sat Jan 30 20:57:10 2016.
For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1).
Because the Beta prior is conjugate to the binomial likelihood, the amount of interpolation between the data and the prior in this particular case is easy to quantify. The data consists of \(K\) observations, whereas the prior will be weighted as if it were \(\kappa - 2\) observations (specifically \(\phi \, \kappa - 1\) prior successes and \((1 - \phi) \, \kappa - 1\) prior failures).
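To make that interpolation concrete, here is a small numeric sketch (our addition, not part of the case study) of the conjugate update for a single item. The inputs phi = 0.27, kappa = 64, and y = 15 of K = 45 are assumed values chosen near the posterior medians reported above.

```javascript
// Illustrative sketch (our addition, not from the case study): the
// conjugate Beta-Binomial update in the phi/kappa parameterization.
function betaBinomialUpdate(phi, kappa, y, K) {
  const alpha = phi * kappa;        // Beta(alpha, beta) acts like
  const beta = (1 - phi) * kappa;   // alpha - 1 successes, beta - 1 failures
  const postAlpha = alpha + y;      // posterior is Beta(alpha + y,
  const postBeta = beta + K - y;    //                   beta + K - y)
  return {
    priorWeight: kappa - 2,                       // prior pseudo-observations
    dataWeight: K,                                // actual observations
    postMean: postAlpha / (postAlpha + postBeta)  // interpolated estimate
  };
}

const r = betaBinomialUpdate(0.27, 64, 15, 45);
console.log(r.priorWeight, r.dataWeight, r.postMean.toFixed(3)); // 62 45 '0.296'
```

With these assumed numbers the prior carries 62 pseudo-observations against 45 real ones, so the posterior mean for theta lands between the prior mean 0.27 and the raw success rate 15/45.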
The parameter \(\kappa\) is not well determined by the combination of data and Pareto prior, with a posterior 80% interval of roughly \((25, 225)\). By the informal discussion above, \(\kappa \in (25, 225)\) ranges from weighting the data 2:1 relative to the prior to weighting it 1:5. The wide posterior interval for \(\kappa\) arises because the exact variance in the population is not well constrained by only 18 trials of size 45. If there were more items (higher \(N\)) or even more trials per item (higher \(K\)), the posterior for \(\kappa\) would be more tightly constrained (see the exercises for an example).
It is also evident from the posterior summary that the lower effective sample size for \(\kappa\) indicates it is not mixing as well as the other components of the model. Again, this is to be expected with a centered hierarchical prior and low data counts. This is an example where a poorly constrained parameter leads to reduced computational efficiency (as reflected in the effective sample size). Such poor mixing is typical of centered parameterizations in hierarchical models (Betancourt and Girolami 2015). It is not immediately clear how to provide a non-centered analysis of the beta prior, because it isn’t supplied with a location/scale parameterization on the unconstrained scale. Instead, we consider an alternative parameterization in the next section.
Figure 5.3 from (Gelman et al. 2014) plots the fitted values for \(\phi\) and \(\kappa\) on the unconstrained scale, which is the space over which Stan is sampling. The variable \(\phi \in [0,1]\) is transformed to \(\mathrm{logit}(\phi) = \log(\phi / (1 - \phi))\) and \(\kappa \in (0, \infty)\) is transformed to \(\log \kappa\). We reproduce that figure here for our running example.
phi_sim <- ss_hier$phi;
kappa_sim <- ss_hier$kappa;
df_bda3_fig_5_3 <- data.frame(x = log(phi_sim / (1 - phi_sim)),
                              y = log(kappa_sim));
library(ggplot2);
plot_bda3_fig_5_3 <-
  ggplot(df_bda3_fig_5_3, aes(x=x, y=y)) +
  geom_point(shape=19, alpha=0.15) +
  xlab("logit(phi) = log(alpha / beta)") +
  ylab("log(kappa) = log(alpha + beta)");
plot_bda3_fig_5_3;
08 January 2010 05:42 [Source: ICIS news]
By Chow Bee Lin and Peh Soo Hwee
SINGAPORE (ICIS news)--Polyethylene (PE) and polypropylene (PP) prices in Asia may rise as supply will be constrained with polymer giant Saudi Basic Industries Corp (SABIC) cutting its product allocations to the region, possibly pulling ethylene (C2) prices along, market sources said on Friday.
So far, PE and PP values in Asia have risen: benchmark film grade HDPE and injection and yarn grade PP were sold $80-90/tonne (€56-63/tonne) higher than two weeks ago, but the gains were mainly driven by bullish sentiment triggered by high crude values in the past two weeks, local sources said.
Film grade HDPE was sold at up to $1,360/tonne CFR (cost and freight) China this week, market sources said. The benchmark was offered at up to $1,370/tonne CFR China for January shipment, but January deals were mostly cited below $1,300/tonne CFR China, the sources said.
Market sources estimated that Asia would likely get less than half its normal monthly volumes of PE and PP from SABIC in January and February due to some production issues at the company’s petrochemical facilities in Saudi Arabia.
A power outage in late December disrupted the operations of the Yansab, Yanpet and Ibn Rushd petrochemical facilities, SABIC had said in a statement issued early this week, adding that it was working to restore normal operations.
The two crackers at Yanpet with combined ethylene capacity of more than 1.7m tonnes/year had restarted in early January, but the 1.3m tonne/year Yansab cracker was still off line and may only resume operations by the end of the month, according to traders.
A SABIC spokeswoman was not able to provide an immediate update on the facilities when contacted by ICIS news due to weekly holiday in Saudi Arabia.
If production issues at SABIC proved to be extensive, PE and PP supply in Asia could tighten further, and Asian ethylene spot prices could be dragged along.
“SABIC doesn’t export ethylene from the Yanbu area so there’s no direct impact but if PE and MEG (mono ethylene glycol) prices go up, this could indirectly boost prices of the monomer,” an olefins trader said.
Tight supply had prompted some traders to raise their selling targets for spot parcels by $50/tonne to $1,300-1,350/tonne CFR (cost and freight)
End-users, wary of having to pay more for ethylene, were not keen on buying cargoes on a fixed price basis.
MEG daily spot prices were assessed at $970-980/tonne CFR China on Thursday while offers for high density polyethylene (HDPE) were heard above $1,300/tonne
($1 = €0.70)
If you are an experienced Android application developer, you're probably used to the verbosity of Java 7. As a result, you might be finding Kotlin's concise syntax, which is geared towards functional programmers, slightly unsettling.
One common problem beginners encounter while learning Kotlin is understanding how it expects you to work with Java interfaces that contain a single method. Such interfaces are ubiquitous in the Android world and are often referred to as SAM interfaces, where SAM is short for Single Abstract Method.
In this short tutorial, you'll learn everything you need to know to aptly use Java's SAM interfaces in Kotlin code.
1. What Is a SAM Conversion?
When you want to make use of a Java interface containing a single method in your Kotlin code, you don't have to manually create an anonymous class that implements it. Instead, you can use a lambda expression. Thanks to a process called SAM conversion, Kotlin can transparently convert any lambda expression whose signature matches that of the interface's single method into an instance of an anonymous class that implements the interface.
For example, consider the following one-method Java interface:
public interface Adder {
    public int add(int a, int b);
}
A naive and Java 7-like approach to using the above interface would involve working with an object expression and would look like this:
// Creating an instance of an anonymous class
// using the object keyword
val adder = object : Adder {
    override fun add(a: Int, b: Int): Int {
        return a + b
    }
}
That's a lot of unnecessary code, which is also not very readable. By leveraging Kotlin's SAM conversion facility, however, you can write the following equivalent code instead:
// Creating an instance using a lambda
val adder = Adder { a, b -> a + b }
As you can see, we've now replaced the anonymous class with a short lambda expression, which is prefixed with the name of the interface. Note that the number of arguments the lambda expression takes is equal to the number of parameters in the signature of the interface's method.
2. SAM Conversions in Function Calls
While working with Java classes having methods that take SAM types as their arguments, you can further simplify the above syntax. For example, consider the following Java class, which contains a method that expects an object implementing the Adder interface:
public class Calculator {

    private Adder adder;

    public void setAdder(Adder adder) {
        this.adder = adder;
    }

    public void add(int a, int b) {
        Log.d("CALCULATOR", "Sum is " + adder.add(a, b));
    }
}
In your Kotlin code, you can now directly pass a lambda expression to the setAdder() method, without prefixing it with the name of the Adder interface.
val calculator = Calculator()
calculator.setAdder({ a, b -> a + b })
It is worth noting that while calling a method that takes a SAM type as its only argument, you are free to skip the parenthesis to make your code even more concise.
calculator.setAdder { a, b -> a + b }
3. SAM Conversions Without Lambdas
If you think lambda expressions are confusing, I've got good news for you: SAM conversions work just fine with ordinary functions too. For example, consider the following function whose signature matches that of the Adder interface's method:
fun myCustomAdd(a: Int, b: Int): Int =
    if (a + b < 100) -1
    else if (a + b < 200) 0
    else a + b
Kotlin allows you to directly pass the myCustomAdd() function as an argument to the setAdder() method of the Calculator class. Don't forget to reference the method using the :: operator. Here's how:
calculator.setAdder(this::myCustomAdd)
4. The it Variable
Many times, SAM interfaces contain one-parameter methods. A one-parameter method, as its name suggests, has only one parameter in its signature. While working with such interfaces, Kotlin allows you to omit the parameter in your lambda expression's signature and use an implicit variable called it in the expression's body. To make things clearer, consider the following Java interface:
public interface Doubler {
    public int doubleIt(int number);
}
While using the Doubler interface in your Kotlin code, you don't have to explicitly mention the number parameter in your lambda expression's signature. Instead, you can simply refer to it as it.
// This lambda expression using the it variable
val doubler1 = Doubler { 2 * it }

// is equivalent to this ordinary lambda expression
val doubler2 = Doubler { number -> 2 * number }
5. SAM Interfaces in Kotlin
As a Java developer, you might be inclined to create SAM interfaces in Kotlin. Doing so, however, is usually not a good idea. If you create a SAM interface in Kotlin, or create a Kotlin method that expects an object implementing a SAM interface as an argument, the SAM conversion facility will not be available to you—SAM conversion is a Java-interoperability feature and is limited to Java classes and interfaces only.
Because Kotlin supports higher-order functions—functions that can take other functions as arguments—you'll never need to create SAM interfaces in it. For example, if the Calculator class is rewritten in Kotlin, its setAdder() method can be written such that it directly takes a function as its argument, instead of an object that implements the Adder interface.
class Calculator {

    // Default implementation; a setter is available by default
    var adder: (a: Int, b: Int) -> Int = { a, b -> 0 }

    fun add(a: Int, b: Int) {
        Log.d("CALCULATOR", "Sum is " + adder(a, b))
    }
}
While using the above class, you can set adder to a function or a lambda expression using the = operator. The following code shows you how:
val calculator = Calculator()
calculator.adder = this::myCustomAdd
// OR
calculator.adder = { a, b -> a + b }
Conclusion
Android's APIs are largely written in Java, and many use SAM interfaces extensively. The same can be said of most third-party libraries too. By using the techniques you learned in this tutorial, you can work with them in your Kotlin code in a concise and easy-to-read way.
To learn more about Kotlin's Java-interoperability features, do refer to the official documentation. And do check out some of our other tutorials on Android app development!
| https://code.tutsplus.com/tutorials/quick-tip-write-cleaner-code-with-kotlin-sam-conversions--cms-29304 | CC-MAIN-2018-26 | refinedweb | 1,038 | 50.77 |
Hi,
thanks for your contribution. I inserted the code in our repository.
Merry Christmas to all.
Type: Posts; User: bobbicat71
I developed a solution for vertical panels from the plugin in question.
I commented lines that relate to the icon because I don't use it.
This is the code:
Ext.namespace("Ext.ux");
...
The latest version that you can download from svn repository contains this new feature.
Regards.
I tried the patch, but the same problem that occurs on the click event (the handler only fires when clicking the label) also occurs on blur and focus.
When I click on a radio or a checkbox the events 'blur' and 'focus' are not called.
The same problem occurs in firefox 2/3, IE 6 and safari. I use Windows XP.
I attach a simple example to reproduce...
I have already a dropdown list but I would use a RadioGroup because in my opinion it's more user friendly...
I have a problem using an Ext.form.RadioGroup as editor in a grid.
What I want is to have two radio buttons in a grid that I use to select the value (such as YES / NO or TRUE / FALSE).
I can see...
If you modify the file "DynamicFilterModelView.js" you need to include this and not the "FieldManager.js".
Hope this helps.
There is no way to do it by configuring. But a way to do it there anyway.
Rewrite the class "DynamicFilterModelView" and delete the following line:
this.fieldStore.sort('label','ASC');
If it...
Filter for "null" is possible but it depends by your back end. For example in our implementation, if I filter for an empty string means that I search on db with the value null.
private...
I'm not sure if I have understood what you want to know.
Each grid must have a static filter that is connected to the ColumnModel.
You can see an example in the file staticFilter.htm under the demo...
Hello, could you tell me what browser are you using?
The code on SVN repository () has been updated. Now the bug at contextMenu is resolved. Furthermore, the example "allTogheter" has been changed; was added...
add support to opera (tested with latest version)
fix clearWhenInvalid behaviour: now when the second parameter of constructor is false when the field loses focus and the text is invalid the field...
Thank you for reporting but the bug that you found is no more present in the code since 7 January 2008. Maybe your code was not updated to the latest version that you can find in the first post.
Thanks for the question that has revealed a bug in the code. Take the latest version in the first post and for a control over the number of characters you can do this:
Example 1 - 3 numeric...
Currently this feature is not implemented.
I am looking for a solution to manage the copy paste.
Thanks for the feedback. I have fixed my code to manage the numeric keypad.
Hi all,
here is a plugin for textfield that adds a mask input to the field.
This is the latest version:
// $Id: InputTextMask.js 293638 2008-02-04 14:33:36Z UE014015 $
... | https://www.sencha.com/forum/search.php?s=58bb28b34a4a327da367cd4f731dedc0&searchid=23318822 | CC-MAIN-2020-16 | refinedweb | 548 | 77.53 |
12 October 2012 04:30 [Source: ICIS news]
SINGAPORE (ICIS)--Nippon Shokubai’s current inventory of maleic anhydride (MA) will be able to fulfil only a portion of its domestic October contract for briquette-type material, while exports had to be cancelled, a company source said on Friday.
The company’s 35,000 tonne/year MA plant in Japan is currently shut down.
No definite restart date is set for the MA plant, the source said.
The company’s previously concluded October-loading MA export orders were cancelled after negotiations with customers, the source said.
Spot MA prices were assessed at $1,740-1,810/tonne (€1,340-1,394/tonne) CFR (cost & freight) southeast (SE) Asia in the week ended 5 October, while the October contract prices of MA were assessed at yen (Y) 160-180/kg ($2,042-2,297/tonne) DEL (delivered) Japan, according to ICIS.
($1 = €0.77 / $1 = Y78)
Enhancing Client-Side Storage with HTML5
Today's guest post is from Christopher Haupt of Webvanta, an Engine Yard partner. Christopher is co-founder of Webvanta, co-host of the Learning Rails podcast, and frequent contributor at Ruby, Web Development and Design conferences, meet-ups, and publications.
Webvanta is co-sponsoring the North Bay Web Design Conference on April 12, 2011 in Rohnert Park, CA. Check out the speaker lineup and to register to attend by visiting the North Bay Web Design Conference event site.
Our team finds that we get the greatest leverage out of our existing collection of code snippets by organizing them into well structured, easy to maintain libraries of pluggable modules. In the previous article of this two part series, we focused on how to reuse our large collection of JavaScript snippets by making them into jQuery plugins. In this second article we briefly examine the Web Storage technology that has come out of the HTML5 specification process. We then show how simple it is to wrap it within a jQuery plugin.
As your interactive front-end code becomes more sophisticated, it is common to have data supplied from the back-end that is relatively expensive to deliver either because of its size or cost to generate. Until now, the options for holding onto such data locally have been limited. You could store small pieces of data in cookies, leverage optional third-party browser plugins, or use browser specific extensions.
If you hoped to have a cross-browser, standards-based solution that gave you the fewest headaches, your hopes were dashed, especially if you needed to deal with desktop AND mobile browsers.
Now there is a bit of hope. A set of client-side data technologies originally part of the HTML5 specification (now spun off) have been implemented in many recent browser engines. We're going to look briefly at Web Storage as the most stable of the work to date. Two other candidates exist: Web Database and Indexed Database, but they have bogged down in specification politics among the browser builders, so caveat emptor.
Web Storage API
Web Storage introduces two key/value storage containers: localStorage and sessionStorage. Both store data that are tied to a specific domain. The former persists data across browser sessions while the latter erases it when the browser session ends.
This also means that data stored in localStorage is accessible between multiple windows open at the same time, while sessionStorage is confined to an individual window.
Both are implemented today in Chrome (5+), FireFox (3.5+), Safari (4+), Internet Explorer (8+), Opera (10.5+), as well as IOS and Android devices. Per the current draft specification, a mostly arbitrary size limit of 5MB exists for the amount of data that can be stored per domain. When this quota is reached, browsers may optionally prompt the user for permission to increase the limit.
The API is very simple. You access either storage system in JavaScript by making calls on its object that hangs off of the global window context: window.localStorage or window.sessionStorage.
The setItem(key, value) method is used to store data. The key is always a string. Retrieval is just as easy: getItem(key).
You can retrieve the number of keys currently stored within the container with the "length" attribute. It is possible to enumerate the keys by numeric index position with key(index).
Items can be removed from storage with the removeItem(key) method. You can atomically cause the entire storage system to be emptied using the clear() method.
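To make the semantics just described concrete, here is a minimal in-memory stand-in for the Web Storage API (a hedged sketch of our own, not code from the specification; the class name MemoryStorage is our invention). It can be handy for exercising storage code outside a browser.

```javascript
// Hypothetical in-memory stand-in mirroring the Web Storage API
// described above: setItem/getItem/removeItem/clear/key and length.
class MemoryStorage {
  constructor() { this.map = new Map(); }
  get length() { return this.map.size; }
  setItem(key, value) { this.map.set(String(key), String(value)); }
  getItem(key) {
    // Like the real API, a missing key yields null, not undefined
    return this.map.has(String(key)) ? this.map.get(String(key)) : null;
  }
  removeItem(key) { this.map.delete(String(key)); }
  key(index) { return [...this.map.keys()][index] ?? null; }
  clear() { this.map.clear(); }
}

const store = new MemoryStorage();
store.setItem("name", "Chris");
store.setItem("count", 3);                    // values are coerced to strings
console.log(store.getItem("name"));           // "Chris"
console.log(store.getItem("count") === "3");  // true: stored as a string
console.log(store.length);                    // 2
store.clear();
console.log(store.length);                    // 0
```

The real browser objects behave the same way for these calls, with the addition of persistence and the per-domain quota discussed above.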
Dealing With Older Browsers
While it is exciting that Web Storage is implemented so broadly, it seems likely that you will still have to deal with older browsers who lack this functionality. You have several options:
- Continue to use server-side storage and deliver data as needed. This is probably a fall-back to existing functionality you have today, using Ajax and various JavaScript functions to negotiate for the data needed at any given moment. If your data is larger than cookie limits of 4KB, you have to do this in either case.
- If your data needs are smaller, you might be using cookies already. There are many scripts and plugins out there that you can easily use.
- If you are developing for a specific browser or platform, browser specific functions or 3rd party plugins such as Adobe Flash may be suitable.
If you decide to use Web Storage, you can easily check for its existence by looking for it attached to the global "window" object:
if (typeof window.localStorage!=='undefined') { }
Checking for sessionStorage is similar.
Plugging in to Web Storage
Let's create a simple jQuery Utility Plugin for using localStorage and bring together what we've learned to date. We might want to use this plugin to implement a larger basic caching strategy for larger client data and accelerate some of our front-end UI.
(function($){
  $.webvantaStorage = {
    Local: {
      set: function(k, v) { localStorage.setItem(k, v); },
      get: function(k) { return localStorage.getItem(k); },
      remove: function(k) { localStorage.removeItem(k); },
      clear: function() { localStorage.clear(); }
    }
  };
})(jQuery);
In this example, we've created a minimal "webvantaStorage" namespace and placed our "Local" object literal within it. Local itself implements a really basic version of the Web Storage API.
Once defined, we can use this easily enough:
$.webvantaStorage.Local.set("name", "Chris");
and
var myname = $.webvantaStorage.Local.get("name");
This is all pretty simplistic. What would be more interesting is if we were to check to see if Web Storage is available, and if not, fallback, perhaps to cookies.
Not Re-inventing the Wheel
As it turns out, there are existing jQuery plugins that do just that. One simple and very tiny option is the jQuery Storage plugin by Dave Schindler. Currently, it only handles localStorage and implements fallback to cookies, but it would be trivial to extend it to support sessionStorage.
Another interesting plugin is Andris Reinman's jStorage which implements alternate storage strategies for older browsers beyond just cookie use.
Even more alternatives will pop-up if you do a search, and before long you'll notice that the plugin community is alive and very active within the jQuery world.
Putting It All Together
It doesn't matter whether you need to wrap an external set of functionality to provide an in-house API, or you just want to clean up a set of wildly different JavaScript utilities that developed over a period of several years. jQuery provides a clean, relatively simple to use framework for modularizing your JavaScript code. Indeed, it has helped us to develop a nice toolbox that saves valuable time when tackling new features or projects, an approach we have applied across 200+ sites in the past two years.
"
Jennifer Rexford !
1
Goals of this Lecture

• Help you learn how to:
  • Manipulate data of various sizes
  • Leverage more sophisticated addressing modes
  • Use condition codes and jumps to change control flow

• So you can:
  • Write more efficient assembly-language programs
  • Understand the relationship to data types and common programming constructs in high-level languages

• Focus is on the assembly-language code
  • Rather than the layout of memory for storing data
Variable Sizes in High-Level Language

• C data types vary in size
  • Character: 1 byte
  • Short, int, and long: varies, depending on the computer
  • Float and double: varies, depending on the computer
  • Pointers: typically 4 bytes

• Programmer-created types
  • Struct: arbitrary size, depending on the fields

• Arrays
  • Multiple consecutive elements of some fixed size
  • Where each element could be a struct
Supporting Different Sizes in IA-32

• Three main data sizes
  • Byte (b): 1 byte
  • Word (w): 2 bytes
  • Long (l): 4 bytes

• Separate assembly-language instructions
  • E.g., addb, addw, and addl

• Separate ways to access (parts of) a register
  • E.g., %ah or %al, %ax, and %eax

• Larger sizes (e.g., struct)
  • Manipulated in smaller byte, word, or long units
Byte Order in Multi-Byte Entities

• Intel is a little endian architecture
  • Least significant byte of multi-byte entity is stored at lowest memory address
  • “Little end goes first”

  The int 5 at address 1000:
    1000: 00000101
    1001: 00000000
    1002: 00000000
    1003: 00000000

• Some other systems use big endian
  • Most significant byte of multi-byte entity is stored at lowest memory address
  • “Big end goes first”

  The int 5 at address 1000:
    1000: 00000000
    1001: 00000000
    1002: 00000000
    1003: 00000101
Little Endian Example

#include <stdio.h>

int main(void) {
   int i = 0x003377ff, j;
   unsigned char *p = (unsigned char *) &i;
   for (j = 0; j < 4; j++)
      printf("Byte %d: %x\n", j, p[j]);
   return 0;
}

Output on a little-endian machine:
Byte 0: ff
Byte 1: 77
Byte 2: 33
Byte 3: 0
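As an aside not on the original slide, the same byte-order check can be sketched in JavaScript with typed arrays, which expose an integer's bytes in memory order. This assumes a little-endian host, which covers x86 and current ARM processors.

```javascript
// Byte-order check analogous to the C example above (our sketch, not
// part of the original slides). A Uint8Array view exposes the raw
// bytes of a 32-bit integer in memory order.
const buf = new ArrayBuffer(4);
new Uint32Array(buf)[0] = 0x003377ff;
const bytes = new Uint8Array(buf);
for (let j = 0; j < 4; j++) {
  console.log(`Byte ${j}: ${bytes[j].toString(16)}`);
}
// On a little-endian machine this prints ff, 77, 33, 0 in that order,
// matching the C program's output.
```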
IA-32 General Purpose Registers

[Register diagram: 32-bit registers EAX, EBX, ECX, EDX, ESI, EDI; the low 16 bits of EAX–EDX are AX, BX, CX, DX, each split into a high and a low byte (AH/AL, BH/BL, CH/CL, DH/DL)]
C Example: One-Byte Data

Global char variable i is in %al, the lower byte of the “A” register.

   char i;
   …
   if (i > 5)
      i++;
   else
      i--;

        cmpb $5, %al
        jle else
        incb %al
        jmp endif
else:
        decb %al
endif:

C Example: Four-Byte Data

Global int variable i is in %eax, the full 32 bits of the “A” register.

   int i;
   …
   if (i > 5)
      i++;
   else
      i--;

        cmpl $5, %eax
        jle else
        incl %eax
        jmp endif
else:
        decl %eax
endif:
%ecx" • Choice of register(s) embedded in the instruction! • Copy value in register EDX into register ECX! 10 . number “0”) embedded in the instruction! • Initialize register ECX with zero! • Register addressing! • Example: movl %edx. %ecx! • Data (e.Loading and Storing Data" • Processors have many ways to access data! • Known as “addressing modes”! • Two simple ways seen in previous examples! • Immediate addressing! • Example: movl $0..g.
Accessing Memory

• Variables are stored in memory
  • Global and static local variables in Data or BSS section
  • Dynamically allocated variables in the heap
  • Function parameters and local variables on the stack

• Need to be able to load from and store to memory
  • To manipulate the data directly in memory
  • Or copy the data between main memory and registers

• IA-32 has many different addressing modes
  • Corresponding to common programming constructs
  • E.g., accessing a global variable, dereferencing a pointer, accessing a field in a struct, or indexing an array
g. %ecx! • Four-byte variable located at address 2000! • Read four bytes starting at address 2000! • Load the value into the ECX register! • Useful when the address is known in advance! • Global variables in the Data or BSS sections! • Can use a label for (human) readability! • E. “i” to allow “movl i.Direct Addressing" • Load or store from a particular memory location! • Memory address is embedded in the instruction! • Instruction reads from or writes to that address! • IA-32 example: movl 2000. %eax”! 12 ..
Indirect Addressing

• Load or store from a previously-computed address
  • Register with the address is embedded in the instruction
  • Instruction reads from or writes to that address

• IA-32 example: movl (%eax), %ecx
  • EAX register stores a 32-bit address (e.g., 2000)
  • Read long-word variable stored at that address
  • Load the value into the ECX register

• Useful when address is not known in advance
  • Dynamically allocated data referenced by a pointer
  • The “(%eax)” essentially dereferences a pointer
Base Pointer Addressing

• Load or store with an offset from a base address
  • Register storing the base address
  • Fixed offset also embedded in the instruction
  • Instruction computes the address and does access

• IA-32 example: movl 8(%eax), %ecx
  • EAX register stores a 32-bit base address (e.g., 2000)
  • Offset of 8 is added to compute address (e.g., 2008)
  • Read long-word variable stored at that address
  • Load the value into the ECX register

• Useful when accessing part of a larger variable
  • Specific field within a “struct”
  • E.g., if “age” starts at the 8th byte of a “student” record
to get 2040)! • Useful to iterate through an array (e.Indexed Addressing" • Load or store with an offset and multiplier! • Fixed based address embedded in the instruction! • Offset computed by multiplying register with constant ! • Instruction computes the address and does access! • IA-32 example: movl 2000(.g.. 2. or 8 (say.g. a[i])! • Base is the start of the array (i. 4)! • Added to a fixed base of 2000 (say.. 4 for “int”)! 15 .4).e.. %ecx! • Index register EAX (say.%eax. “i”)! • Multiplier is the size of the element (e.e. 4.. with value of 10)! • Multiplied by a multiplier of 1. “a”)! • Register is the index (i.
%eax. %eax movl $0. i++) sum += a[i]. %ebx sumloop: EAX: i EBX: sum ECX: temporary movl a(. sum=0. %eax jle sumloop 16 . for (i=0. i<20. global variable movl $0. %ecx addl %ecx. %ebx incl %eax cmpl $19. … int i.Indexed Addressing Example" int a[20].4).
Effective Address: More Generally
• Offset = Base + (Index * scale) + displacement
  • Base: eax, ebx, ecx, edx, esp, ebp, esi, edi
  • Index: eax, ebx, ecx, edx, ebp, esi, edi
  • Scale: 1, 2, 4, 8
  • Displacement: none, 8-bit, 16-bit, or 32-bit
• Displacement only: movl foo, %ebx
• Base only: movl (%eax), %ebx
• Base + displacement: movl 1(%eax), %ebx or movl foo(%eax), %ebx
• (Index * scale) + displacement: movl foo(,%eax,4), %ebx
• Base + (index * scale) + displacement: movl foo(%edx,%eax,4), %ebx
Data Access Methods: Summary
• Immediate addressing: data stored in the instruction itself (movl $10, %ecx)
• Register addressing: data stored in a register (movl %eax, %ecx)
• Direct addressing: address stored in the instruction (movl foo, %ecx)
• Indirect addressing: address stored in a register (movl (%eax), %ecx)
• Base pointer addressing: includes an offset as well (movl 4(%eax), %ecx)
• Indexed addressing: instruction contains a base address, and specifies an index register and a multiplier of 1, 2, 4, or 8 (movl 2000(,%eax,1), %ecx)
Control Flow
• Common case
  • Execute code sequentially
  • One instruction after another
• Sometimes need to change control flow
  • If-then-else
  • Loops
  • Switch
• Two key ingredients
  • Testing a condition
  • Selecting what to run next based on the result
• Example:

          cmpl $5, %eax
          jle  else
          incl %eax
          jmp  endif
  else:   decl %eax
  endif:
Condition Codes
• 1-bit registers set by arithmetic & logic instructions
  • ZF: Zero Flag
  • SF: Sign Flag
  • CF: Carry Flag
  • OF: Overflow Flag
• Example: "addl Src, Dest" ("t = a + b")
  • ZF: set if t == 0
  • SF: set if t < 0
  • CF: set if carry out from most significant bit (unsigned overflow)
  • OF: set if two's complement overflow:
    (a>0 && b>0 && t<0) || (a<0 && b<0 && t>=0)
Condition Codes (continued)
• Example: "cmpl Src2, Src1" (compare b, a)
• Like computing a-b without setting a destination
  • ZF: set if a == b
  • SF: set if (a-b) < 0
  • CF: set if carry out from most significant bit (used for unsigned comparisons)
  • OF: set if two's complement overflow:
    (a>0 && b<0 && (a-b)<0) || (a<0 && b>0 && (a-b)>0)
• Flags are not set by lea, inc, or dec instructions
  • Hint: this is useful in the assembly-language programming assignment
Example Five-Bit Comparisons
• Comparison: cmp $6, $12 (computes 01100 - 00110, i.e. 01100 + 11010 = 00110)
  • Not zero: ZF=0 (diff is not 00000)
  • Positive: SF=0 (first bit is 0)
  • No carry: CF=0 (unsigned diff is correct)
  • No overflow: OF=0 (signed diff is correct)
• Comparison: cmp $12, $6 (computes 00110 - 01100, i.e. 00110 + 10100 = 11010)
  • Not zero: ZF=0 (diff is not 00000)
  • Negative: SF=1 (first bit is 1)
  • Carry: CF=1 (unsigned diff is wrong)
  • No overflow: OF=0 (signed diff is correct)
• Comparison: cmp $-6, $-12 (computes 10100 - 11010, i.e. 10100 + 00110 = 11010)
  • Not zero: ZF=0 (diff is not 00000)
  • Negative: SF=1 (first bit is 1)
  • Carry: CF=1 (unsigned diff of 20 and 26 is wrong)
  • No overflow: OF=0 (signed diff is correct)
Jumps after Comparison (cmpl)
• Equality
  • Equal: je (ZF)
  • Not equal: jne (~ZF)
• Below/above (e.g. unsigned arithmetic)
  • Below: jb (CF)
  • Above or equal: jae (~CF)
  • Below or equal: jbe (CF | ZF)
  • Above: ja (~(CF | ZF))
• Less/greater (e.g. signed arithmetic)
  • Less: jl (SF ^ OF)
  • Greater or equal: jge (~(SF ^ OF))
  • Less or equal: jle ((SF ^ OF) | ZF)
  • Greater: jg (~((SF ^ OF) | ZF))
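The signed/unsigned split above shows up directly in C: the compiler emits the same cmpl either way, but follows it with jl/jge for signed operands and jb/jae for unsigned ones, so the same bit patterns compare differently. A small C illustration (my example, not from the slides):

```c
#include <assert.h>
#include <stdio.h>

/* On IA-32, signed_less compiles to cmpl + jl/jge (tests SF ^ OF),
   while unsigned_less compiles to cmpl + jb/jae (tests CF). */
int signed_less(int a, int b) { return a < b; }
int unsigned_less(unsigned a, unsigned b) { return a < b; }

void demo(void) {
    /* As signed values, -6 < 12; but the bit pattern of -6 is a
       huge unsigned number, so the unsigned comparison flips. */
    printf("signed:   %d\n", signed_less(-6, 12));             /* 1 */
    printf("unsigned: %d\n", unsigned_less((unsigned)-6, 12)); /* 0 */
}
```

The only difference between the two functions is which flag the conditional jump inspects.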
Branch Instructions
• Conditional jump: j{e,ne,g,ge,l,le,a,ae,b,be,o,no,c,nc} target
  • Semantics: if (condition) { eip = target }
• Suffixes by comparison:

  Comparison        Signed   Unsigned
  =                 e        e         ("equal")
  ≠                 ne       ne        ("not equal")
  >                 g        a         ("greater" / "above")
  ≥                 ge       ae        ("...-or-equal")
  <                 l        b         ("less" / "below")
  ≤                 le       be        ("...-or-equal")
  overflow/carry    o        c
  no ovf/carry      no       nc

• Unconditional jump
  • jmp target
  • jmp *register
Jumping
• Simple model of a "goto" statement
  • Go to a particular place in the code
  • Based on whether a condition is true or false
• Can represent if-then-else, switch, loops, etc.
• Pseudocode example: If-Then-Else

  if (Test) {
      then-body;
  } else {
      else-body;
  }

  becomes:

        if (!Test) jump to Else;
        then-body;
        jump to Done;
  Else: else-body;
  Done:
Jumping (continued)
• Pseudocode example: Do-While loop

  do {
      Body;
  } while (Test);

  becomes:

  loop: Body;
        if (Test) then jump to loop;

• Pseudocode example: While loop

  while (Test)
      Body;

  becomes:

          jump to middle;
  loop:   Body;
  middle: if (Test) then jump to loop;
Jumping (continued)
• Pseudocode example: For loop

  for (Init; Test; Update)
      Body;

  becomes:

        Init;
        if (!Test) jump to done;
  loop: Body;
        Update;
        if (Test) jump to loop;
  done:
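Since C has goto, the for-loop lowering above can be written out and checked directly in C (my example, not from the slides):

```c
#include <assert.h>

/* Plain C for loop. */
int sum_for(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* The same loop written the way the slide lowers it:
   Init; if (!Test) jump to done; loop: Body; Update;
   if (Test) jump to loop; done: */
int sum_lowered(const int *a, int n) {
    int s = 0;
    int i = 0;               /* Init */
    if (!(i < n)) goto done; /* if (!Test) jump to done */
loop:
    s += a[i];               /* Body */
    i++;                     /* Update */
    if (i < n) goto loop;    /* if (Test) jump to loop */
done:
    return s;
}
```

Both functions return the same sums, including for an empty array, which is exactly why the lowering tests the condition once before entering the loop.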
Arithmetic Instructions
• Simple instructions
  • add{b,w,l} source, dest      (dest = source + dest)
  • sub{b,w,l} source, dest      (dest = dest - source)
  • inc{b,w,l} dest              (dest = dest + 1)
  • dec{b,w,l} dest              (dest = dest - 1)
  • neg{b,w,l} dest              (dest = ~dest + 1)
  • cmp{b,w,l} source1, source2  (computes source2 - source1)
• Multiply: mul (unsigned) or imul (signed)
  • mull %ebx   # edx,eax = eax * ebx
• Divide: div (unsigned) or idiv (signed)
  • idiv %ebx   # eax = edx,eax / ebx
• Many more in the Intel manual (volume 2): adc, sbb, decimal arithmetic instructions
Bitwise Logic Instructions
• Simple instructions
  • and{b,w,l} source, dest  (dest = source & dest)
  • or{b,w,l} source, dest   (dest = source | dest)
  • xor{b,w,l} source, dest  (dest = source ^ dest)
  • not{b,w,l} dest          (dest = ~dest)
  • sal{b,w,l} source, dest  (dest = dest << source, arithmetic)
  • sar{b,w,l} source, dest  (dest = dest >> source, arithmetic)
• Many more in the Intel manual (volume 2): logic shift, rotation shift, bit scan, bit test, byte set on conditions
Data Transfer Instructions
• mov{b,w,l} source, dest: general move instruction
• push{w,l} source

  pushl %ebx      # equivalent instructions:
                  #   subl $4, %esp
                  #   movl %ebx, (%esp)

• pop{w,l} dest

  popl %ebx       # equivalent instructions:
                  #   movl (%esp), %ebx
                  #   addl $4, %esp

• Many more in the Intel manual (volume 2): type conversion, conditional move, exchange, compare and exchange, string move, I/O port, etc.
Conclusions
• Accessing data
  • Byte, word, and long-word data types
  • Wide variety of addressing modes
• Control flow
  • Common C control-flow constructs
  • Condition codes and jump instructions
• Manipulating data
  • Arithmetic and logic operations
• Next time
  • Calling functions, using the stack
Overview of JSP Directives
JSP pages contain directives that instruct the container about the processing of the page; these directives apply to the servlet that is automatically compiled from the JSP page. While the directives give processing instructions to the container running the component, they do not produce any output themselves.
- Standard Syntax for the directive is as follows –
<%@ directive attribute="value" %>
JSP contains three directives –
- Page Directive – To configure page-level settings, example – import="java.util.*"
- Include Directive – To include a file, example – file="Header.jsp"
- Taglib Directive – Contains custom actions that can be used in the page, example –
<%@ taglib prefix="s" uri="/struts-tags" %>
Various JSP Directives
JSP directives are components of JSP source code that guide the web container on how to convert the JSP page into its corresponding servlet. Let's look at a few directives.
1. Page Directive
The page directive is used to instruct the JSP translator about certain aspects of the current JSP page, such as the content type to be used, the language in which the page is written, and so on.
The page directive has the following syntax –
<%@ page attribute 1="value 1" attribute 2="value 2" %>
Now let’s define a list of attributes used for the page directive –
a. Import – It is used to declare the Java types to be used on the current page. For example, if we want to use lists in the JSP page and iterate over them, we can import java.util.List; other common packages such as java.io and java.util can be imported in the same way. Certain imports are implicit, so we need not declare them while working with JSP pages and servlets; these are:

- java.lang
- javax.servlet
- javax.servlet.http
- javax.servlet.jsp
b. Session – If set to true, it indicates that the page will participate in session management; the default value is also true, i.e. when you invoke the JSP page, a javax.servlet.http.HttpSession instance will be created.
c. Buffer – Specifies the buffer size of the out implicit object in kilobytes; it is necessary to mention "kb" at the end of the buffer size. The default value is 8kb or more, depending on the JSP container. If this is set to none, the output is written directly to the corresponding PrintWriter.
d. AutoFlush – The default value is true, which means the buffer is flushed automatically when it becomes full. A value of false indicates that the buffer is only flushed if the flush method of the response implicit object is called.

e. IsThreadSafe – Indicates whether the page implements thread safety; it is deprecated and rarely used in practice.

f. Info – Specifies the return value of the getServletInfo method.

g. ErrorPage – Specifies the page to which control is forwarded for error handling.

h. IsErrorPage – Tells whether the page can handle errors or not.

i. ContentType – Specifies the MIME type of the response. Whenever you send data to the controller at the backend, the body has an associated content type (JSON, XML, plain text, and so on), which makes the container aware of the content type it shall respond with; the response object is created accordingly.
j. PageEncoding – Indicates the character encoding; the default value is ISO-8859-1.

k. Language – Specifies the scripting language used in the page; the default value is java.

l. Extends – Specifies a superclass to inherit from, for example a common base class shared by all pages.

m. TrimDirectiveWhitespaces – Indicates whether whitespace in the template text should be trimmed; the default is false.
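Putting several of these attributes together, a page directive might look like the following (the file names and buffer size here are illustrative only, not required values):

```jsp
<%@ page language="java"
         contentType="text/html; charset=ISO-8859-1"
         pageEncoding="ISO-8859-1"
         import="java.util.List, java.util.ArrayList"
         session="true"
         buffer="16kb"
         autoFlush="true"
         errorPage="error.jsp" %>
```

Any attribute left out simply takes its default value described above.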
2. Include Directive
If the content of one JSP page has to be used in another JSP, then we need to incorporate the address of that page; the number of include statements will be equal to the number of pages you want to import into your current page. The advantage is that you need not rewrite the whole set of code from that page to this page, which saves memory, time, and complexity, and reduces overhead for developers when any change needs to be made.

Example: You can import header.jsp, footer.jsp and baseBodyLayout.jsp into all other pages; then each page only needs to supply the content to be rendered in that page, with the details specific to it.
Syntax for such inclusion is –
<%@ include file="url" %>
Please note that the merging of included files happens at translation time only, not at request time, i.e.
- None of the included JSP code is executed; it is not even compiled yet.
- The files are first merged and then the entire merged output is translated as a unit.
- If the included files are ever changed, there is no general way for the container to know and recompile the entire translation unit.
3. Taglib Directive
It is used to tell the container which tag library a specific JSP requires. It is also used to assign a prefix that is used within the JSP page to identify tags from that tag library. When the container encounters these taglibs, it locates the code for those tag libraries and makes them ready for use in the JSP.
Syntax to use the taglib is as follows –
<%@ taglib prefix="c" uri="" %>
This indicates to the container that all tags prefixed with the c: namespace within this JSP come from the given tag library. The URI can be associated with a tag library through a TLD file. The TLD can be mapped using a taglib entry in the web.xml file, or via specific placement under the META-INF directory within a JAR archive.
The tag directory can also be specified as –
<%@ taglib prefix="wroxtags" tagdir="/WEB-INF/tags" %>
So you can place tag files under WEB-INF/tags and the container will pick them up from there.
Conclusion
Hence we have seen JSP directives, what indications each of the above-mentioned directives gives to the container, and how the container resolves them at the time of use. These directives are used to add dynamic behavior, and such approaches come in handy whenever dynamic web projects are designed. A similar approach is used in other synonymous frameworks like Struts, where many URIs are available for such usage.
Recommended Articles
This has been a guide to JSP Directives. Here we discuss the concept, various directives and their explanation with examples. You can also go through our other suggested articles to learn more – | https://www.educba.com/jsp-directives/ | CC-MAIN-2019-47 | refinedweb | 1,093 | 57.71 |
19 June 2010 00:16 [Source: ICIS news]
(adds updates throughout)
HOUSTON (ICIS news)--Shares of beleaguered oil giant BP rose slightly on the New York Stock Exchange after news agencies reported on Friday that chief executive Tony Hayward would move away from the daily management of the Gulf of Mexico oil spill recovery.
BP chairman Carl-Henric Svanberg said
Shares of BP rose five cents to $31.76 on Friday.
Members of the US House Energy and Commerce Committee on Thursday grilled
Svanberg said some of
Moody’s on Friday cut its ratings for BP by several notches, citing the company’s continued failure to bring the US Gulf coast oil spill under control amid mounting costs and claims for damages.
The credit ratings agency - which had already downgraded BP on 3 June - said all of its BP ratings remained on review for possible further downgrades.
The leak was estimated at 35,000-60,000 bbl/day.
Coast Guard admiral Thad Allen said 25,000 barrels were recovered during a 24-hour period on Thursday. He said siphoning capacity would increase to 53,000 bbl/day by the end of June and that amount would jump to 60,000-80,000 bbl/day in mid-July.
Oil began spewing from the bottom of the Gulf after a 20 April explosion sank the BP-operated Deepwater Horizon offshore | http://www.icis.com/Articles/2010/06/19/9369451/bp-ceo-to-step-away-from-day-to-day-spill-ops-report.html | CC-MAIN-2014-35 | refinedweb | 230 | 58.72 |
Common Subexpression Elimination. More...
#include <common_subexpression_elimination.h>
Common Subexpression Elimination.
This transforms looks for specific operators (denoted by whitelisted_ops_), and removes unnecessary repetition of that operator.
Consider some operator X that reads from blob b_, written to by W. X_a and X_b read the output of X. However, another operator Y is the same type as X, has the same arguments as X, and reads from the same input b_, written to by W. Its output is the same as X's. Y_a, Y_b, and Y_c read from Y.
Then, we can eliminate the common subexpressions X and Y, and merge them to Z, where X_a, X_b, Y_a, Y_b, and Y_c all read from Z.
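As a rough illustration of the idea (this is not Caffe2's actual API or implementation, just a toy sketch), common subexpressions can be found by keying each node on its operator type plus its inputs:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy graph node: an operator with a type name and input value ids.
// Two nodes with the same op and the same inputs compute the same
// value, so the second one is redundant.
struct Node {
  std::string op;
  std::vector<int> inputs;
};

// For each node, return the index of the node it should be merged into
// (its own index if it is the first occurrence of that subexpression).
// A real pass would also rewrite consumers and handle external outputs.
std::vector<int> eliminate_common_subexpressions(const std::vector<Node>& nodes) {
  std::map<std::pair<std::string, std::vector<int>>, int> seen;
  std::vector<int> merged(nodes.size());
  for (int i = 0; i < static_cast<int>(nodes.size()); ++i) {
    auto key = std::make_pair(nodes[i].op, nodes[i].inputs);
    auto it = seen.find(key);
    if (it == seen.end()) {
      seen.emplace(key, i);
      merged[i] = i;           // first occurrence: keep it
    } else {
      merged[i] = it->second;  // duplicate: redirect to the original
    }
  }
  return merged;
}
```

In the X/Y example above, X and Y would hash to the same key and collapse into a single node Z.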
TODO(benz): Fix the error to not match nodes that write to external output.
Definition at line 28 of file common_subexpression_elimination.h. | https://caffe2.ai/doxygen-c/html/classcaffe2_1_1_common_subexpression_elimination_transform.html | CC-MAIN-2018-47 | refinedweb | 138 | 59.09 |
This post is a direct response to the request made by @Zecca_Lehn on twitter (Yes I will write tutorials on your suggestions). What he wanted to know was how to do a Bayesian Poisson A/B tests. So for those of you that don’t know what that is let’s review the poisson distribution first.
The poisson distribution is useful for modeling count data, particularly over a period of time. Say the number of times someone in the Prussian Calvary gets kicked in the head by a horse and dies as a result over a certain period of time, let’s say a year. In fact, this is a very classic data set that can be modeled by the poisson distribution quite well. So why don’t we go ahead and use this data to see how we can test whether one corps of the Prussian Calvary was better at not getting kicked by horses.
I don’t want you to think that I have gone off the rails with this example. So before we proceed, let’s take a step back and talk about why this data will work. This data has some interesting features, first it is count data, over several time periods, over several groups (corps in this case). Generally, A/B testing is most commonly used in the internet marketing space these days, so let’s look at how the Prussian Horse Kick data compares to internet marketing data. In internet marketing data we have the number of views clicks, etc. which has been collected for a number of pages (typically 2, hence A/B testing), over a time period like a month. So our data actually might look like it could have been generated by a similar process to the horse kick data. So let’s dive in deep to what that process might look like.
The Poisson Distribution
I don't want to get overly "mathy" in this section, since most of this is already coded and packaged in pymc3 and other statistical libraries for python as well. So here is the formula for the Poisson distribution:

P(k; λ) = λ^k e^(-λ) / k!

Basically, this formula models the probability of seeing k counts, given λ expected counts. In other words, λ is the mean of the number of counts for the page, corps, or whatever it is that you are looking at. This distribution is useful so long as three things are true:
- What happens in one time period is independent of what happens in any other time period
- The probability of an event (a click, pageview, horse kick, etc.) is the same for every time period
- Events do not happen simultaneously
If you violate any of these three assumptions, you will need to mess around with the basic model that I am going to provide. Like adding in an autocorrelation feature, or some other modeling non-sense that you need to be careful about. It isn’t difficult to do it, but you do need to know that something is going on in order to know how to address it. If you need some help with your particular application feel reach out at ryan@barnesanalytics.com or call (801) 815-2922 to get some consulting for your particular application. I’m more than happy to help out.
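As a quick sanity check of the Poisson formula above, here it is in plain Python (no pymc3 needed; the λ = 0.7 rate is just an illustrative value, roughly the scale of the horse-kick data):

```python
import math

def poisson_pmf(k, lam):
    # P(k; lam) = lam**k * exp(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

# With an average of 0.7 kicks per corps-year, most years see zero kicks:
print(round(poisson_pmf(0, 0.7), 3))  # 0.497
print(round(poisson_pmf(1, 0.7), 3))  # 0.348

# The probabilities over all k sum to 1, as any distribution must:
print(round(sum(poisson_pmf(k, 0.7) for k in range(50)), 6))  # 1.0
```

This is exactly the likelihood that pm.Poisson evaluates for us in the model below, once for every observed count.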
That’s Enough Theory, Let’s Clean the Data
Okay so the first step that we’ll need to do is to do some minor cleaning of this dataset so that it will be in a format that our model will be able to digest. So download it from the link above and we’ll load it into python and get started.
import pandas as pd import pymc3 as pm import matplotlib.pyplot as plt import numpy as np df=pd.read_csv('/home/ryan/Documents/HorseKicks.csv')
So for the model's sake we need to stack this dataset, so that the data has a unit of measurement of corps-year. We'll start by moving the year variable to the index and then dropping the variable from the dataframe, as the extra year variable floating around will mess things up.
df.index = df['Year'] df.drop(['Year'],1,inplace=True)
Our next step would be to “stack” the data. Right now the data is in a pivot-table like format, what we want to do is unpivot this table. The command to do that in python is “stack”. This will collect all of the information except for the counts in the index, which I also don’t like, so I’m going to chain that last command with the reset index command, which will move my variables out of the index. This one-two combo is really powerful when you need to unpivot things in python. So it is worth keeping this combo in the back of your head, for future use. I know that I have it memorized. I also rename my columns from the defaults that python gives to things, just to keep things nice. Here’s how to do that:
df=df.stack().reset_index() df.columns = ['Year', 'Corps', 'Count'] df.dropna(inplace=True)
At this point you can inspect your data with df.head(), and your data should look something like this:
Year Corps Count 0 1875 GC 0 1 1875 C1 0 2 1875 C2 0 3 1875 C3 0 4 1875 C4 0
This is what we need the data to look like in order to do a Bayesian Poisson A/B Test. There is one last bit of data munging that needs to happen. We need to add a numerical index for the Corps. This numerical index is important, because PYMC3 will need to use it, and it can’t use the categorical variable. All of this code just builds this numerical index, I think it is quite clear what is going on in this code.
corps = df['Corps'].unique() corps = pd.DataFrame(corps, columns=['Corps']) corps['i'] = corps.index df = pd.merge(df, corps, on=['Corps'], how='left') corps_index = df['i'].values
So now our data is cleaned up and ready to use. It should be pretty painless to write a model down and run it.
Let’s Build Our Poisson A/B Model
Okay, as a brief side note, another reason why I chose this dataset for this analysis is the number of corps: there are way more than 2 of them. This is the really exciting thing about doing this in a Bayesian framework: we can build a hierarchical model and test multiple versions concurrently. This means that we're not only limited to an A/B test, like we would be in a frequentist setting, but we can do A/B/C/D tests! So let's write down the model, and I'll explain what is going on:
with pm.Model() as model: mu=pm.Flat('Avg Kicks', shape=len(corps)) obs=pm.Poisson('Observed', mu=mu[corps_index], observed=df['Count']) trace = pm.sample(1000, tune=1000)
So the first thing that we do is declare that we're building a PYMC3 model. We give the model a number of parameters to work with, in fact one for each corps. These parameters are given an uninformative prior, so that we aren't biasing them in any way. These parameters are the averages of Poisson distributions. So here's where we make an assumption: we assume that each of our counts comes from a Poisson distribution specific to the corps from which the observation was taken. And then we run MCMC over the whole thing.
This procedure ran in under 30 seconds on my old laptop. So if you run it on a newer machine, or gpu, it should crank through it really super fast. There isn’t a lot of data, or parameters for this model to chew on, so it is no wonder that it runs pretty quick. We can examine the results through some of the default plots in pymc3.
pm.traceplot(trace) plt.show() pm.forestplot(trace, ylabels=corps.Corps.values) plt.show()
This code will result in the following two figures:
Our posteriors show that some of the corps are clearly better at not getting kicked by horses. Notably, it looks like C11 and C14 could learn something from C4 and C15. C11 and C14 are our worst offenders, but they are also the most variable in terms of how often they get kicked. We can actually compute the probability that any corps gets more kicks than another, say that C11 gets more than C4.
pred = [1 if obj1>obj2 else 0 for obj1,obj2 in zip(trace['Avg Kicks'][:,corps[corps['Corps'] == 'C11']['i']],trace['Avg Kicks'][:,corps[corps['Corps'] == 'C4']['i']])] print(np.mean(pred))
When I do this, I get 100% of the sampled points in the posterior distribution for corps C11 are higher than for C4. This means that corps C4 unambiguously and systematically suffers fewer horse kicks than group C11.
We can also check whether or not C14 gets more horse kicks systematically than C11. This looks like it might be a fairer test. Let’s try this out:
pred = [1 if obj1>obj2 else 0 for obj1,obj2 in zip(trace['Avg Kicks'][:,corps[corps['Corps'] == 'C14']['i']],trace['Avg Kicks'][:,corps[corps['Corps'] == 'C11']['i']])] print(np.mean(pred))
When I did that, there was only a 44% chance that C14 gets more kicks than C11. That means that there probably isn’t a very strong difference between these two groups.
A note on unbalanced data
So @Zecca_Lehn also wanted to know about how these bayesian testing would do on unbalanced data. At first glance, I had no idea what he meant. I mean I did because I run into this problem all the time as I have been working on a credit card fraud detection model recently. But in the context of a Poisson Count model, an unbalanced dataset doesn’t make a ton of sense. Counts are counts. You have none, or you have some.
But as I was thinking about the problem, it dawned on me that you could start observing data at different times. As such, I fudged the data a little bit. You can find my modified version of the horse kick data here. What I did is, I deleted some data so that we start observing the different corps on different years. This creates unbalanced data in the sense that I have unequal data for each of the corps. I think this is the situation @Zecca_Lehn was asking about.
The nice thing is that we don’t need to modify the script that we have just written except to drop the missing observations from the dataset. If we do that we’ll be good to go, and we can just run with the same code. So I snuck it into the code above in anticipation of running it on this modified dataset. So all that I did was modify the line that loads the data to use the new csv file.
These are the resultant plots from the script. If you compare them to the plots that you obtain from the full dataset you will notice that they look similar, however, I did delete some data so the numbers do change slightly. Also, the 95% credible intervals for the parameters have grown larger, and the more data that I deleted, the wider those intervals got.
So the way to think of it is that the dataset doesn’t have as much information so we are less confident in the conclusions that we can draw from the dataset. In this sense, we can actually say something about how performant this model is in the face of an unbalanced data. In essence, it will give you its best “guess” as to what the parameter values should be, but it will be less confident in the “guesses” it supplies as the data for a certain class goes down.
Also we can still perform the probability analysis that we did before. Since I just ran the same script on the modified data we can actually see how the predictions changed in light of this unbalanced data. We still see that there is virtually a 100% chance that C11 is greater than C4, but due to the wider confidence intervals, and shifting of the data due to dropping some observations there would only be about a 9.7% chance that C14’s population mean is greater than C11’s population mean.
Here’s the full code:
#!/usr/bin/env python3 # -*- coding: utf-8 -*- """ Created on Thu Jul 20 07:02:05 2017 @author: ryan """ import pandas as pd import pymc3 as pm import matplotlib.pyplot as plt import numpy as np df=pd.read_csv('/home/ryan/Documents/HorseKicks_modified.csv') df.index = df['Year'] df.drop(['Year'], 1, inplace=True) df=df.stack().reset_index() df.columns = ['Year', 'Corps', 'Count'] df.dropna(inplace=True) corps = df['Corps'].unique() corps = pd.DataFrame(corps, columns=['Corps']) corps['i'] = corps.index df = pd.merge(df, corps, on=['Corps'], how='left') corps_index = df.i.values with pm.Model() as model: mu=pm.Flat('Avg Kicks', shape=len(corps)) obs=pm.Poisson('Observed', mu=mu[corps_index], observed=df['Count']) trace = pm.sample(1000, tune=1000) pm.traceplot(trace) plt.show() pm.forestplot(trace, ylabels=corps.Corps.values) plt.show() pred = [1 if obj1>obj2 else 0 for obj1,obj2 in zip(trace['Avg Kicks'][:,corps[corps['Corps'] == 'C11']['i']],trace['Avg Kicks'][:,corps[corps['Corps'] == 'C4']['i']])] print(np.mean(pred)) #%% pred = [1 if obj1>obj2 else 0 for obj1,obj2 in zip(trace['Avg Kicks'][:,corps[corps['Corps'] == 'C14']['i']],trace['Avg Kicks'][:,corps[corps['Corps'] == 'C11']['i']])] print(np.mean(pred)) | http://barnesanalytics.com/bayesian-poisson-ab-testing-in-pymc3-on-python | CC-MAIN-2019-26 | refinedweb | 2,298 | 72.16 |
I have to write a program using the Monte Carlo Method to estimate the average number of bottles of Boost someone would have to drink to win a prize.
So far, I have prompted the user for the number of trials (1 trial = the number of bottles it takes until the winning cap is found). I have to conduct at least 1,000 trials. The problem is, I can't get it to print correctly. If I want 20 trials, it only prints about 5 numbers, and they are printed in increasing order.
Here is what I have so far:
Code Java:
//import the classes
import java.util.Scanner;
import java.util.Random;

public class BottleCapPrize {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter the number of trials (at least 1000): ");
        int trials = in.nextInt();
        // TODO: run the trials and print the average number of bottles
    }
}
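For what it's worth, here is one way the simulation itself could be structured (a sketch, not the poster's code; the 1-in-5 win probability is an assumption, since the post never states the actual odds):

```java
import java.util.Random;

public class BoostSimulation {
    // Assumed odds of a winning cap; adjust to match the assignment.
    static final double WIN_PROB = 0.2;

    // One trial: count bottles opened until a winning cap appears.
    static int oneTrial(Random rng) {
        int bottles = 1;
        while (rng.nextDouble() >= WIN_PROB) {
            bottles++;
        }
        return bottles;
    }

    // Monte Carlo estimate: average the trial lengths.
    static double averageBottles(int trials, Random rng) {
        long total = 0;
        for (int i = 0; i < trials; i++) {
            total += oneTrial(rng);
        }
        return (double) total / trials;
    }

    public static void main(String[] args) {
        double avg = averageBottles(100000, new Random());
        System.out.printf("Average bottles to win: %.2f%n", avg);
        // With p = 0.2 this converges to 1/p = 5 bottles.
    }
}
```

Printing only the final average (after the loop) also avoids the "about 5 numbers in increasing order" symptom, which typically comes from printing a running total inside the loop.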
Retrieving the arguments that were passed to a function while unwinding the stack is not always straightforward. I am going to dive into why, and outline the situations from easiest to hardest.
Why would you want access to the arguments?
For most people, this need is apparent in the context of a debugger. The exact arguments are usually the reason for a program failing, so debuggers try their best to extract the arguments to a function.
(gdb) bt #0 bar (bar_arg=28) at test_prog.c:5 #1 0x0000000000400576 in foo (foo_arg=5) at test_prog.c:10 #2 0x000000000040059e in main () at test_prog.c:16
Of course, if this is run on a release build, we get nothing:
(gdb) bt #0 0x0000000000400534 in bar () #1 0x0000000000400576 in foo () #2 0x000000000040059e in main ()
I spent a few weeks in 2017 working on improving Python crash reporting. Since then, I’ve been fascinated with being able to gain information about managed languages, without modifying their interpreters, and without requiring any custom software modifications.
For a crash reporting tool, having access to the arguments would let it collect useful debug info in the wild. You can imagine extending a tool like Crashpad to identify arguments with certain types and annotate the crash report with pretty printed information about those types, so that the core dump contains this information. Instead of using auxiliary information from the Python interpreter to derive the execution context as we did, one could simply walk the native stack, and interleave the Python stack at every PyEval_EvalFrameEx call in a deterministic way.
Similarly, there is a dearth of cross-language profilers. The well known tools like perf/BPF/dtrace are really meant for native code. Each interpreter of a higher level language usually has its own profiler that understands the language. I think it would be very cool to have a profiler that could sample native stacks, and when it detects a managed language, through some kind of plugin mechanism, infer what managed code was running as part of the profile gathering. You could then interleave stacks in the profile, showing hotspots across languages. So, if Python was slow because 50% of the time was spent in Python, but the other 50% was spent in the C code, waiting for some resource acquisition, you could see both! There are complex desktop applications out there where having 3 languages in the same process is not uncommon, and it is currently difficult to get comprehensive profiles across them.
Setting the stage
The post mostly focuses on Linux and macOS 64-bit. We will quickly look at x86 where it differs. Windows is very similar in most respects. ARM is not much different when it comes to using registers for argument passing so similar concepts apply.
Assembly/DWARF output is from:
clang version 3.8.1-24 (tags/RELEASE_381/final) Target: x86_64-pc-linux-gnu
This post assumes basic familiarity with assembly language, registers and the stack, function call frames and the concept of unwinding.
There is example code that I will refer to throughout the rest of the post.
The test_prog.c file is compiled into several different executables (debug, debug_opt, …). Look at the Makefile for the precise build configuration.
#include <stdio.h>

void bar(int bar_arg) {
  printf("The number is %d\n", bar_arg);
}

void foo(int foo_arg) {
  bar(foo_arg + 23);
  printf("unrelated\n");
}

int main() {
  foo(5);
}
Argument passing and calling conventions.
In the call stack below, foo is the caller and bar is the callee.
foo() -> bar()
All function arguments are assumed to be integers or pointers that fit in a single register.
On x86, arguments are pushed onto the stack in reverse order, followed by the return address (saved eip). The callee can access them by indexing from ebp.
On x86-64, arguments are passed in rdi, rsi, rdx, rcx and a few other registers, in that order.
Finally, unless optimizations are enabled, ebp/rbp delineates frames. This will become useful later.
Unwinding the stack
The debugger or other tool usually suspends the thread of interest and starts the unwinding process to retrieve the call stack. This is a comprehensive topic by itself, and I have a work in progress post about that. Here I assume that we can somehow retrieve the stack frame of the function whose arguments we want to retrieve.
With debug information
This is the easiest case, and debuggers can always show arguments. This assumes that the executable is built with debug information (-g switch on gcc and clang). On Linux and Mac, the DWARF format is used. The debug information is stored in a section .debug_info. This debug information is pretty comprehensive, detailing for the debugger the locations of functions, arguments and stack variables.
In the example, here is the relevant information to retrieve bar_arg, the first argument for function bar (obtained via dwarfdump debug).
< 2><0x0000003f>  DW_TAG_formal_parameter
>>>>                DW_AT_location        len 0x0002: 917c: DW_OP_fbreg -4
                    DW_AT_name            bar_arg
                    DW_AT_decl_file       0x00000001 /home/nsmnikhil/unwind-arguments/test_prog.c
                    DW_AT_decl_line       0x00000003
                    DW_AT_type            <0x0000008b>
It tells the debugger exactly where the argument is stored. In this case, there is some collusion between the compiler and the debugger to simplify things. If we look at the disassembly (objdump -M intel -d debug):
0000000000400530 <bar>:
  400530: 55                      push   rbp
  400531: 48 89 e5                mov    rbp,rsp
  400534: 48 83 ec 10             sub    rsp,0x10
  400538: 48 b8 34 06 40 00 00    movabs rax,0x400634
  40053f: 00 00 00
  400542: 89 7d fc           >>>> mov    DWORD PTR [rbp-0x4],edi <<<<
  400545: 8b 75 fc                mov    esi,DWORD PTR [rbp-0x4]
  400548: 48 89 c7                mov    rdi,rax
  40054b: b0 00                   mov    al,0x0
  40054d: e8 ce fe ff ff          call   400420 <printf@plt>
  400552: 89 45 f8                mov    DWORD PTR [rbp-0x8],eax
  400555: 48 83 c4 10             add    rsp,0x10
  400559: 5d                      pop    rbp
  40055a: c3                      ret
  40055b: 0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
After the prologue, the compiler simply uses 4 bytes of stack space to stash edi (the low 32 bits of rdi) and emits the DWARF information indicating that bar_arg can be found at DW_OP_fbreg -4, and fbreg is, of course, rbp.
This is nice because the argument never moves around, regardless of where in the function the debugger is stopped.
What about optimizations?
DWARF is a flexible enough format to represent all kinds of transformations to indicate the memory locations of identifiers.
As long as debug information is enabled, even in optimized builds, the debugger can retrieve arguments at any point in the function. The compiler will simply emit more DWARF as it moves data around throughout the function.
Here is the disassembly for debug_opt, which is compiled with -O2, and the DWARF for bar_arg:
0000000000400570 <bar>:
  400570: 89 f9             mov    ecx,edi
  400572: bf 44 06 40 00    mov    edi,0x400644
  400577: 31 c0             xor    eax,eax
  400579: 89 ce             mov    esi,ecx
  40057b: e9 e0 fe ff ff    jmp    400460 <printf@plt>

0000000000400580 <foo>:
  400580: 50                push   rax
  400581: 8d 77 17          lea    esi,[rdi+0x17]
  400584: bf 44 06 40 00    mov    edi,0x400644
  400589: 31 c0             xor    eax,eax
  40058b: e8 d0 fe ff ff    call   400460 <printf@plt>
  400590: bf 56 06 40 00    mov    edi,0x400656
  400595: 58                pop    rax
  400596: e9 b5 fe ff ff    jmp    400450 <puts@plt>
  40059b: 0f 1f 44 00 00    nop    DWORD PTR [rax+rax*1+0x0]

00000000004005a0 <main>:
  4005a0: 50                push   rax
  4005a1: bf 44 06 40 00    mov    edi,0x400644
  4005a6: be 1c 00 00 00    mov    esi,0x1c
  4005ab: 31 c0             xor    eax,eax
  4005ad: e8 ae fe ff ff    call   400460 <printf@plt>
  4005b2: bf 56 06 40 00    mov    edi,0x400656
  4005b7: e8 94 fe ff ff    call   400450 <puts@plt>
  4005bc: 31 c0             xor    eax,eax
  4005be: 59                pop    rcx
  4005bf: c3                ret
< 2><0x0000004f>  DW_TAG_formal_parameter
                    DW_AT_name            bar_arg
                    DW_AT_decl_file       0x00000001 /home/nsmnikhil/unwind-arguments/test_prog.c
                    DW_AT_decl_line       0x00000003
                    DW_AT_type            <0x0000005b>
What’s going on? Well, the compiler did several different optimizations.
- Both foo and bar were actually inlined into main.
- Constant optimization was done, so 5 + 23 was substituted with 28 (0x1c) and passed to printf as argument 2.
- No prologue or epilogue was generated for the inlined functions.
The DWARF Debugging Information Entry (DIE) doesn't say anything about the location of bar_arg. The lack of a DW_AT_location indicates we have to look elsewhere. This is not a DWARF tutorial, so I'll skip the details. We instead have to look at the DWARF information for main:
< 4><0x000000e5>  DW_TAG_formal_parameter
                    DW_AT_const_value      0x0000001c
                    DW_AT_abstract_origin  <0x0000004f>
where the formal parameter to the now-inlined bar is stated to have a constant value!
What if we had a more complicated program? The compiler would emit a location list for the argument, which describes a mapping from instruction pointer ranges to offsets from various memory locations/registers where the argument can be accessed at any point.
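Purely as an illustration of the shape of such a list (the ranges, registers and offsets below are invented, not taken from the binaries in this post), a location list conceptually reads:

```
[0x400570 .. 0x400579)  DW_OP_reg5      ; argument lives in rdi
[0x400579 .. 0x40057b)  DW_OP_reg4      ; copied to rsi
[0x40057b .. 0x400590)  DW_OP_fbreg -8  ; spilled to the stack frame
```

Here DW_OP_reg5 and DW_OP_reg4 use the standard DWARF register numbering for x86-64, where register 5 is rdi and register 4 is rsi.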
So, as you can see, it is really easy to retrieve arguments when debug information is available. If you are not using a debugger, you will need to ship a DWARF parser. Fortunately there are several options.
DWARF is a complex format, so various people have implemented custom parsers for very specific use cases. PLCrashReporter has one for unwinding when the program is crashed, and restricted in memory allocations. Mozilla wrote their own LUL optimized for unwinding in a profiler. Finally, if you are writing assembly by hand, writing DWARF information for the assembly is a good way to assist debuggers.
Without debug information
The value of debug information cannot be overstated. Sure, we had to parse DWARF, and that is non-trivial, but it gave all the answers! Life is going to get pretty miserable without it.
With no precise description of arguments, we are going to rely on our knowledge of the stack and registers. First the slightly easy one, x86.
x86 (32-bit)
On 32-bit systems, the conventions differ among operating systems, but generally, at least some arguments are on the stack. A function foo() that wants to call bar(int, int, int) first pushes the 3 integers onto the stack, then calls bar. The call instruction will push the current eip (which now points past the call instruction) on the stack, indicating the return address.
We can leverage that this will hold true regardless of optimizations. Of course, this is not a generic solution, since we don’t know how many arguments are going to be present. If we are looking for specific functions and know their signatures, this can work.
To retrieve arguments, we need to reliably find frames. On x86, these are delineated by ebp, which is pushed onto the stack in the prologue of each function and then set to the base of that function's frame.
N-1th arg
----------
...
----------
0th arg
----------
ret
----------
old ebp
----------  <- ebp
…..
----------  <- esp
The very first (currently executing function’s) frame can be retrieved by querying the OS for a thread “context”. I have a separate post in-progress about the specific mechanics, so I won’t cover them here.
With the ebp known, we can start scanning from ebp+8 and perform the manipulations we need based on the argument type. For an integer or pointer type, ebp+8 is the first argument, ebp+12 is the next and so on. The fact that the stack is immutable except for the currently executing function allows us to have strong guarantees about these arguments remaining where they are.
There are certain gotchas. Leaf functions may not always have ebp-based indexing. In addition, we still haven't gotten to determining the name of the executing function, in case you only want the arguments for specific functions. All that gets complicated quickly, and is a topic for another post.
x86-64
Ironically, the most common setup on production systems - x86-64 in release mode - is also the trickiest. I think I can say with reasonable certainty that naive extraction of arguments is very difficult, even if one knows which function one is looking for and what its arguments look like. At the minimum, one would need a disassembler and some kind of register analysis to track moves. Let's understand why we end up in this situation.
First, we need the unwind information to determine the frame boundaries. Fortunately, unwinding information is always present in ELF and Mach-O, because non-debugger tools like profilers and crash reporters need it. The most well known use case is the backtrace() function. In addition, certain languages like C++ use it for exception handling 1. This information is a subset of DWARF that has enough information to identify frame boundaries and the values of certain registers. Libraries like libunwind use this. It is well documented in the ABI (Section 3.7 Stack Unwind Algorithm).
On Linux, the .eh_frame section is usually just extended DWARF (use dwarfdump -F <file> to read the .eh_frame section). On Mac, it is usually Compact Unwind Encoding.
This is the unwind information for bar in the release binary with no optimizations:
$ dwarfdump -F release
…..
< 2><0x00400530:0x0040055b><><cie offset 0x00000044::cie index 1><fde offset 0x00000070>
…..
The register numbers are standardized. For x86-64, r6 is rbp, r7 is rsp and r16 is the return address. Ian Lance Taylor has a good explanation of the .eh_frame format if you want to understand this fully. The disassembly helps:
0000000000400530 <bar>:
  400530: 55             push rbp
  400531: 48 89 e5       mov  rbp,rsp
  400534: 48 83 ec 10    sub  rsp,0x10
cfa refers to the Canonical Frame Address. On x86-64, this is the value of rsp in the previous frame; that is, right before the call instruction in the caller. You can see how the CFA moves as every instruction manipulates the stack.
Getting back to our goal, in this case we have just enough information to reconstruct the call stack, but no direct references to the arguments. Could we use the register locations? That is, figure out what value rdi has at each instruction? In this particular case, certainly not.
Is it always like this? Initially, looking through libunwind's API, there were constants defined for all the registers. Can't one simply call unw_get_reg() with the right constant and get the value of rdi? One can try. If you run the unwind_rdi program, you will quickly experience disappointment. rdi never changes!
$ ./unwind_rdi
The number is 28
ip = 4008f6, rdi = 7fffd8315c30 rdi fetch success 0 signal? 0
ip = 40091e, rdi = 7fffd8315c30 rdi fetch success 0 signal? 0
ip = 7e06b30732e1, rdi = 7fffd8315c30 rdi fetch success 0 signal? 0
ip = 4006aa, rdi = 7fffd8315c30 rdi fetch success 0 signal? 0
ip = 0, rdi = 7fffd8315c30 rdi fetch success 0 signal? 0
The number is 28
unrelated
Registers are segmented into callee-saved and caller-saved. As engineering would have it, all the argument passing registers are caller-saved. The documentation for unw_get_reg() specifically says:
For ordinary stack frames, it is normally possible to access only the preserved (callee-saved) registers and frame-related registers (such as the stack-pointer). However, for signal frames (see unw_is_signal_frame(3)), it is usually possible to access all registers.
That's because an unwinder only really needs rip and rsp to determine the next frame.
We can look at some more unwind information from release and spot that callee-saved registers are indeed present sometimes.
< 5><0x004005b0:0x00400615><><cie offset 0x000000a4::cie index 1><fde offset 0x000000d0 length: 0x00000044>
  <eh aug data len 0x0>
  0x004005b0: <off cfa=08(r7) > <off r16=-8(cfa) >
  0x004005b2: <off cfa=16(r7) > <off r15=-16(cfa) > <off r16=-8(cfa) >
  0x004005b4: <off cfa=24(r7) > <off r14=-24(cfa) > <off r15=-16(cfa) > <off r16=-8(cfa) >
  0x004005b9: <off cfa=32(r7) > <off r13=-32(cfa) > <off r14=-24(cfa) > <off r15=-16(cfa) > <off r16=-8(cfa) >
r13-r15, which map to the same registers on x86-64, are incrementally available.
It is unclear to me why callee-saved registers are sometimes required. One reason seems to be that the next frame's CFA can be defined in terms of some of them. The other is probably to allow exception handlers to run code, while maintaining the guarantee that callee-saved registers will have been restored.
The first frame
Surely, we have enough information in the currently executing frame about every single register, right? We did get an initial value for rdi. Since the function is executing, all registers are available. The platform specific context retrieval methods (either the signal handler on Linux, or thread_get_state() on macOS) will give you rdi, rsi and friends.
But remember, these do not always map to the actual arguments! That assertion only holds at function entry. In a debugger, you could deliberately stop at the function entry and retrieve the arguments. Unfortunately, crash reporters and profilers are out of luck, since they stop at arbitrary instructions. rdi could already have been discarded, or overwritten by this point. The function could've called another function, at which point they were lost. One would need a tool that could parse the machine code, keep track of the argument registers being moved around at each step, and then potentially reconstruct the memory location. This assumes that no overwriting has occurred.
At this point, I’ve certainly given up trying to find the arguments without modifying the code!
Finding a way out
If we are willing to modify code, either at runtime, or by compiling it with extra plugins, then we can make some progress towards this task.
Using external information
One option to get the arguments is to use information directly from the managed language's runtime. For example, in Python, one can independently retrieve the list of PyFrameObjects, as done by Dropbox's crashpad work and by py-spy. These frames have a 1:1 mapping to the first argument for every PyEval_EvalFrameDefault invocation. The PyEval_EvalFrameDefault function is usually exported in the public symbol table if Python is linked as a dynamic library. When interleaving stacks, it is easy to determine when that function is being executed. After doing a full unwind, py-spy can also collect the PyFrameObject list. Every time PyEval_EvalFrameDefault is encountered, it can interleave the Python context into the native stack.
Using a trampoline
This is another deterministic, but involved option. Again, it usually only works for retrieving arguments for specific functions of interest.
A trampoline is a piece of code that we insert at runtime, that will replace a function’s entry point. That is, we dynamically create a new function out of thin air and redirect various pointers so that this function is called instead of the original. The custom function can now stash away the arguments into specific places (either the stack or callee saved registers). Then it jumps back into the code for the original function.
The profiler will do this replacement in the running program’s address space when it starts up. Then, at unwind time, when it encounters this trampoline, it knows exactly where to look for the arguments.
The downside is that the trampoline is platform and OS specific. vmprof has a nice sketch if you’d like to read more about this.
Some languages make this process easier. Since Python 3.6, one can write a standard C extension that replaces the default evaluation function with a custom one, which can again stash the arguments. This simplifies the trampoline setup. It is described in PEP-523.
Neither of these options is universal, and instrumenting every function like that would have some serious performance costs. They may make sense for a program under test, but not for in-the-wild augmentation for post-facto collection by a crash reporter.
Conclusion
We've seen that the ability to retrieve function arguments lies on a spectrum. I hope this post gave some insight into the complexity of the problem. I'm thankful that compiler vendors provide comprehensive information in debug builds. Without that, a lot of problems would be impossible to solve. At least for now, the answers are not clear for binaries shipped to users. For profilers, there are ways out of the problem, even if they are not easy or generic. Crash reporters will just have to live without deterministic information. Perhaps some "extended unwind info", or compiler modes that regularly stash arguments onto the stack even in release builds (-Oargs?), are necessary.
(Thanks to Ben Frederickson for reviewing this post!)
- James McNellis has an excellent talk about C++ exceptions on Windows. [return]
I am a Python newcomer. Recently I am implementing quicksort in Python. I heard the list type is mutable, so any changes done to it will take effect in place. However, that is not what happens in my case.
Here is my case: the function called alpartition has been tested, and it proved to me that it works (as it is required in the quicksort algorithm). The function called test is a recursive one. As we can see from the printed result, every part of the variable a gets modified at some point, but the changes just don't come together. It seems to me that only one change has actually been applied to this variable.
I used to be a C programmer and I kind of treat a list as a collection of pointers, so I guess the fault is due to a misuse of the slicing technique.
Thanks a lot for your kind help.
Here is the function:
__author__ = 'tk'

import random

def alpartition(my_str):
    signal = 0
    i = 0
    j = len(my_str)
    while i < j:
        if signal == 0:
            while 1:
                j = j - 1
                if my_str[j] <= my_str[i]:
                    break
            signal = 1
        else:
            while 1:
                i = i + 1
                if my_str[j] <= my_str[i]:
                    break
            signal = 0
        my_str[j], my_str[i] = my_str[i], my_str[j]
    my_str[j], my_str[i] = my_str[i], my_str[j]
    return i

def test(my_str):
    if len(my_str) > 1:
        tick = alpartition(my_str)
        print my_str, my_str[0:tick:], my_str[tick+1::], tick  # We can see that my_str has correctly been modified
        if tick > 0:
            test(my_str[0:tick:])
        test(my_str[tick+1::])

a = [86, 44, 31, 7, 9, 90, 93, 12, 59, 34]
test(a)
print a
[34, 44, 31, 7, 9, 59, 12, 86, 93, 90] [34, 44, 31, 7, 9, 59, 12] [93, 90] 7
[12, 9, 31, 7, 34, 59, 44] [12, 9, 31, 7] [59, 44] 4
[7, 9, 12, 31] [7, 9] [31] 2
[7, 9] [] [9] 0
[44, 59] [44] [] 1
[90, 93] [90] [] 1
[34, 44, 31, 7, 9, 59, 12, 86, 93, 90]
Python lists are mutable; however, the slicing operation returns a new list every time. This is why a common idiom in Python to get a copy of a list is to slice it with [:]. This returns a new list containing all the items of the old list.
However, there is a subtle point about the mutability or immutability of the elements inside the list. In your case, since they are numbers and are immutable, the new list returned from slicing is an entirely new list and changing it will not affect the original list:
>>> my_list = [1, 2, 3]          # Create a list with three numbers
>>> my_other_list = my_list[:]   # Slice the original list to get a copy
>>> my_other_list[0] = 42        # Change the first element of the new list
>>> my_other_list, my_list       # See the two lists side by side
([42, 2, 3], [1, 2, 3])          # The change to the new list did not affect the original list
But, if the contents of the first list were of a mutable type, say a list, then the story would be different:
>>> my_list = [[1], [2], [3]]
>>> my_other_list = my_list[:]
>>> my_other_list[0].append(42)
>>> my_other_list, my_list
([[1, 42], [2], [3]], [[1, 42], [2], [3]])
Suffice it to say that in your case the elements of your list are numbers and they are immutable. So, every time you slice the original list, you are getting a new list and any changes you make to it have no effect on the original list.
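To make the recursion act on the original list, a common fix is to pass index bounds instead of slices. A minimal sketch of that idea (using a simple Lomuto-style partition for clarity, not your alpartition):

```python
def partition(a, lo, hi):
    # Lomuto partition: a[hi] is the pivot; returns its final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo=0, hi=None):
    # Sorts the list in place by recursing on index ranges, not slices.
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

a = [86, 44, 31, 7, 9, 90, 93, 12, 59, 34]
quicksort(a)
print(a)  # [7, 9, 12, 31, 34, 44, 59, 86, 90, 93]
```

Because every swap happens on the one list object `a`, the caller sees the fully sorted result.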
Using JavaScript's Function Bind
JavaScript's Function object has a very helpful method, bind. It is a great tool to give you control of the ever-squirrely this keyword. Have you ever had trouble predicting what this will be, or making it equate to what you want? bind will help you do that with more power and consistency. It can also help you with partial function application.
The this Keyword
In JavaScript, what the this keyword refers to changes often. This can be useful but also unexpected. Generally this, used within a function, will refer to the context in which that function was called. That calling context might be the global context of window if an event callback is being called in a browser. That calling context might be an object that contains the function. For more specific cases, MDN has some great docs on the variety of contexts referred to by this.
Controlling this with bind
When writing code, I often am thinking of this in the context in which I'm writing. In other words, if I'm writing an object and use the keyword this in a function, I would normally expect this to refer to the object in which I declared the function. But, again, it is the calling context that actually determines the value of this.
To change this default behavior, I can pre-bind the function's value of this to a value of my choosing. This will happen at the time of declaration, which is what I more naturally would expect.
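This context loss is easy to see in plain JavaScript, without any framework (the logger object below is made up for illustration):

```javascript
const logger = {
  prefix: '[app]',
  log(msg) {
    // `this` depends on how the function is called, not where it was defined.
    // Guard so the lost-context case is observable instead of throwing.
    const prefix = this && this.prefix ? this.prefix : '(no prefix)'
    return prefix + ' ' + msg
  }
}

// Called through the object, `this` is `logger`:
console.log(logger.log('direct'))    // "[app] direct"

// Pass the bare function around, as one would pass an event handler,
// and the calling context is lost:
const detached = logger.log
console.log(detached('detached'))    // "(no prefix) detached"

// Pre-binding fixes the value of `this`, no matter who calls it later:
const bound = logger.log.bind(logger)
console.log(bound('bound'))          // "[app] bound"
```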
For example, in React we write UI components. In interesting UIs, we're often handling events like those that occur with user interaction. Normally events in the browser are attached to the DOM and are executed in the context of the window. This being the case, it'll be hard for us to create an event handler function in our React Component that can refer back to anything of use in the React Component itself.
As a simple example, we'll write a handleClick function that wants to call the Component's doLog function for interesting logging:
class MyComponent extends React.Component {
  doLog() {
    console.log('Yay, you clicked!')
  }

  handleClick() {
    console.log('this is window?', this == window)
    console.log('this is component?', this.constructor.name == 'MyComponent')
    this.doLog()
  }

  render() {
    return (
      <div>
        <h1>Time to start clicking</h1>
        <button onClick={this.handleClick}>So, click</button>
      </div>
    )
  }
}

React.render(<MyComponent />, document.getElementById('app'))
If you click the button, this.doLog is not available as a function. How could it be? doLog is defined in MyComponent, not the window, which is the original context in which the event callback is executed.
To fix this, one need only pre-bind the handleClick function. By changing one line, we can fix this:
<button onClick={this.handleClick.bind(this)}>So, click</button>
When this line is executed, it's in the MyComponent#render function, thus the MyComponent context. So this, at that moment, is MyComponent.
The other detail that makes this work is that bind returns a brand new function. That's how the pre-binding works. So, the onClick prop that gets given to the button is a new function where we have said we want to permanently control the value of this to be whatever we bound it to.
Passing Specific Arguments with bind
Another great reason to use bind is to pass specific arguments to a function. Just as bind can create new functions where the value of this is pre-determined (bound), bind can pre-fill (i.e., partially apply) function arguments on the newly-created function.
It may not be immediately intuitive why one would want to create a function with parameters just to turn around and permanently make it so an argument to the function equals a specific value. It almost feels like hard-coding a wart-ridden value on something that was previously dynamic and beautiful. Perhaps an example will help.
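A plain-JavaScript sketch of that pre-filling, before returning to React (the multiply helper is made up):

```javascript
function multiply(a, b) {
  return a * b
}

// The first argument to bind sets `this`; it is unused here, so pass null.
// Remaining arguments are pre-filled, left to right.
const double = multiply.bind(null, 2)
const triple = multiply.bind(null, 3)

console.log(double(10))  // 20
console.log(triple(10))  // 30

// Every argument can be pre-filled, yielding a zero-argument function:
const six = multiply.bind(null, 2, 3)
console.log(six())       // 6
```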
Again, to the world of React… As in the previous example, we'll pass an event handler for a click event. Notice, just as above, that we're passing the function itself (this.handleClick above) instead of the return value of the function (which would look like this.handleClick()). In this example, we'll have several click handlers, each on a list item, where the button will function as a remove button:
class Item extends React.Component {
  render() {
    return (
      <li>
        {this.props.text}
        <button onClick={this.props.onRemove}>Rm</button>
      </li>
    )
  }
}

class List extends React.Component {
  constructor(props) {
    super(props)
    this.state = { items: this.props.initalItems }
  }

  handleClickRemove(index) {
    var clonedItems = this.state.items.slice()
    clonedItems.splice(index, 1)
    this.setState({ items: clonedItems })
  }

  render() {
    return (
      <ul>
        {this.state.items.map((item, i) => {
          return <Item text={item} onRemove={this.handleClickRemove.bind(this, i)} key={item} />
        })}
      </ul>
    )
  }
}

React.render(<List initalItems={['Do', 'More', 'Reakt']} />, document.getElementById('app'))
In List, we have the event handler, the handleClickRemove function, that takes an index parameter. In order to make this function work as defined, we are using bind on this line:
return <Item text={item} onRemove={this.handleClickRemove.bind(this, i)} key={item} />
This bind call is doing a few things for us:
- Pre-binding handleClickRemove to the List Component so that this.setState works inside the callback.
- Creating a new function that always has i as its first parameter. Since this line is executed in a loop, i changes. It will be 0 for the first item, then 1, then 2. This is perfect, as we want the first remove button to remove the first item, and so on.
Isn’t that awesome and useful?
So bind can help make this more predictable for you. It will help you send new functions with pre-filled parameters. What else have you used bind for?
Json api iphone application
...of the actual game build (outside of scope here). Map will be served from a REST/Json API endpoint. You can see an example here: [login to view URL];module=%2Fjs%2Findex.js&moduleview=1 /assets/tilemaps/[login to view URL] inside of the mocked-up API. Authentication using Keycloak OIDC (server already standing).

lodash js developer needed immediately to fix one issue with accessing keys in JSON. My budget is 600.

've different files I need to load the information in the file in an excel format divided in columns.
I need someone to create an ionic app for my business. The app should be functional and professional. The app should also follow the branding of my business. Please message me if you have any questions.
...for correct matching and another one when a capital is hovering on country. (see screenshot) Note that it should be working also on mobile device browsers! You can use this json: [ { "country": "Albania", "capital": "Tirana" }, { "country": "Andorra", "capital": "Andorra la Vella" }, { &quo...
You will need to create the custom meta fields, everything needs to be stored professionally, there's a LOT of data that needs to be stored correctly, I will discuss more once messaging.
need help from developer for ios development
I need idea smart api for website
HTML5 Player Playlist Not Displaying On iPhone Mobile, but the skin and buttons work fine. Very easy task for a skilled developer. Many Thanks
Need iPhone BLE app, connect to Python on Raspberry Pi
App is something like news feed. I have an API which is JSON response. I have a list view which is infinite scroll. Each card inside the list view I have fixed layout with image and text. I want the card data to be changed to video or image or text based on JSON response. Inbox me if you need more details.
Import, search, and filter a large JSON file. Search, edit, and export data in many ways (xls, pdf, etc.). Will provide a sample of a small file and example data inside it once we discuss. The winner of this project is whoever places the best bid. Thanks
Definitions to be taken...JS.
- New website must support all browsers, devices
- There should be easy provision to switch between RTL and LTR to support all languages
- Website must read text from a json file. This way many languages can be supported. (Optional)
- Usage of proper HTML tags and hierarchy is a must for better indexing by search engines.
Hi all, Needed a little script written up in PHP. Making a cURL request and getting data back as $response in PHP. The response is in JSON and needs to be printed in HTML tables.
I am in need of source code for both Android and iPhone that is able to connect to a Bluetooth LE peripheral, send and receive data, and also measure the estimated distance to where the peripheral is located
I need an expert guy in Json and Joomla for my current project. I will give more details in the message. Please experience person can bid.
I need a android and iphone expert for my current project. If you have knowledge please bid. Details will be shared in message with the selected freelancers.
Blockchain, Solidity, Ethereum, Hyperledger, Javascript, JSON, WEB3 * Freelance * €300 to €500/day * PRIAM is a consulting firm specialized in blockchain. In order to meet growing demand for Proofs of Concept, we are looking for a developer who could assist with the study of an application of
This class collects all the defs and uses associated with each node in the traversed CFG.
Note that this does not compute reachability information; it just records each instance of a variable used or defined.
Definition at line 73 of file defsAndUsesUnfilteredCfg.h.
#include <defsAndUsesUnfilteredCfg.h>
Call this method to collect defs and uses for a subtree.
The additional defs and uses discovered in the tree will be inserted in the passed data structures.
Called to evaluate the synthesized attribute on every node.
This function will handle passing all variables that are defined and used by a given operation.
Implements AstBottomUpProcessing< ChildUses >.
Process Isolation
The HTTP Server version 2.0 API provides the ability to build a safer, more reliable service by isolating worker processes that are servicing requests on the request queue. The request queue is created and administered by a controller or creator process that strictly controls access to it. The controller process launches one or more separate worker processes that perform I/O on the request queue. The controller process runs with administrative privilege and configures the request queue, while the lower-privilege worker processes access and service requests from the request queue. This architecture supports the policy of applications running under "least privilege" and reduces the possibility of security vulnerabilities introduced by third-party code that may be running in worker processes.
Access to the request queue is granted when the controller process creates the request queue with a name and an Access Control List (ACL). Web applications that are included in the ACL can open an existing request queue by name. The creator process may also be a worker process on the request queue. For more information, see the Named Request Queue topic. The following diagram shows the architecture of a typical HTTP application running with the worker process model:
Individual worker processes within the application are isolated from other worker processes, and the health of each of the worker processes can be monitored by the controller process. The controller process is isolated from the worker processes. The components of the HTTP architecture are described below:
- Creator or controller process: The controller process can run with, or without, administrative privileges to monitor the health and configure the service. The controller process typically creates a single server session for the service and defines the URL groups under the server session. The URL group that a particular URL is associated with determines which request queue services the namespace denoted by the particular URL. The controller process also creates the request queue and launches the worker processes that can access the request queue.
- Worker Process: The worker processes, launched by the controller process, perform I/O on the request queue that is associated with the URLs they service. The web application is granted access to the request queue by the controller process in the ACL when the request queue is created. Unless the web application is also the creator process, it does not manage the service or configure the request queue. The controller process communicates the name of the request queue to the worker process, and the worker process opens the request queue by name. Worker processes can load third-party web applications without introducing security vulnerabilities in other parts of the application.
- Request Queue: The request queue is created and configured by the controller process. The controller specifies the processes that are allowed access to the request queue in the ACL when the request queue is created.
- Server Session: The controller process typically creates and configures a single server session for the application. The server session maintains the configuration properties for the entire application. URL groups are created under the server session by the controller process.
- URL Group: The controller process creates the URL groups under the server session, and configures the URL group independent of the server session. URLs are added to the group by the controller process. Requests are routed to the request queue that the URL group is associated with. | https://msdn.microsoft.com/en-us/library/aa364683(v=vs.85).aspx | CC-MAIN-2015-11 | refinedweb | 562 | 51.89 |
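A minimal sketch of the two roles is shown below in C. This is a hedged illustration, not working configuration: it targets the Windows HTTP Server API v2 (http.h / httpapi.lib), the queue name is a placeholder, the NULL security descriptor stands in for a real ACL, and all error handling is omitted:

```c
#include <windows.h>
#include <http.h>
/* link with httpapi.lib */

#define QUEUE_NAME L"MyAppQueue"   /* placeholder name shared by both roles */

/* Controller role: runs privileged, creates the named request queue and
 * grants workers access through the SECURITY_ATTRIBUTES (ACL) parameter. */
HANDLE create_controller_queue(void)
{
    HTTPAPI_VERSION version = HTTPAPI_VERSION_2;
    HANDLE queue = NULL;

    HttpInitialize(version, HTTP_INITIALIZE_SERVER, NULL);
    HttpCreateRequestQueue(version, QUEUE_NAME,
                           NULL, /* real code passes SECURITY_ATTRIBUTES holding the worker ACL */
                           HTTP_CREATE_REQUEST_QUEUE_FLAG_CONTROLLER,
                           &queue);
    return queue;   /* the controller configures; it does not service requests */
}

/* Worker role: runs with lower privilege, opens the existing queue by name
 * and then services requests with HttpReceiveHttpRequest(). */
HANDLE open_worker_queue(void)
{
    HTTPAPI_VERSION version = HTTPAPI_VERSION_2;
    HANDLE queue = NULL;

    HttpInitialize(version, HTTP_INITIALIZE_SERVER, NULL);
    HttpCreateRequestQueue(version, QUEUE_NAME,
                           NULL,
                           HTTP_CREATE_REQUEST_QUEUE_FLAG_OPEN_EXISTING,
                           &queue);
    return queue;
}
```

In a full application the controller would also create the server session and URL groups, associate the URL group with this queue, and only then launch the worker processes under their restricted accounts.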
More Test-Driven Development in Python
Pages: 1, 2, 3
A composite is basically an object that contains other objects, where both
the composite object and its contained objects all implement the same
interface. Using the interface on the composite should invoke the same methods
on all of the contained objects without forcing the external client to
do so explicitly. Whew, that was a mouthful.
Here, that interface is the matches method, which accepts a
date instance and returns a bool. Python is a
dynamically typed language, so I don't need to define this interface formally
(which is fine by me).
How do I implement the composite? Like this:
class CompositePattern:
    def __init__(self):
        self.patterns = []

    def add(self, pattern):
        self.patterns.append(pattern)

    def matches(self, date):
        for pattern in self.patterns:
            if not pattern.matches(date):
                return False
        return True
The composite pattern asks each of its contained patterns if it matches the
specified date. If any fail to match, the whole composite pattern fails.
I have to confess that I cheated here. I wrote more code than I needed to
pass the test! Sometimes I get ahead of myself. Sorry. It turned out OK this
time because all of the tests are passing, but I need to create a test that should
not match, just to be sure I have everything working correctly:
def testCompositeDoesNotMatch(self):
    cp = CompositePattern()
    cp.add(YearPattern(2004))
    cp.add(MonthPattern(9))
    cp.add(DayPattern(28))
    d = datetime.date(2004, 9, 29)
    self.failIf(cp.matches(d))
Cool. It passes.
It might be a little difficult to see this, but the composite contains a
DayPattern that matches the 28th and I'm matching it against the
29th, which is why I expect the matches method to return
False.
So I can match dates again. Big deal--I was already doing that. What about
wild cards?
I'll write a test to match my anniversary with the new classes:
def testCompositeWithoutYearMatches(self):
    cp = CompositePattern()
    cp.add(MonthPattern(4))
    cp.add(DayPattern(10))
    d = datetime.date(2005, 4, 10)
    self.failUnless(cp.matches(d))
It just works. Why?
There's no YearPattern in the composite requiring the passed-in
date to match any specific year. Wild cards now work by not specifying
any pattern for a given component. Remember when I thought I might need a
class to do the wild card matching? I was wrong!
At this point, I feel really good about the new approach and will just
delete the old tests and code.
I'll also refactor the tests a bit. Did you notice that every one of the new
tests contained a duplicate line? I did. It started to bother me, but that's
what test fixtures are for:
class PatternTests(unittest.TestCase):
    def setUp(self):
        self.d = datetime.date(2004, 9, 29)

    def testYearMatches(self):
        yp = YearPattern(2004)
        self.failUnless(yp.matches(self.d))

    def testYearDoesNotMatch(self):
        yp = YearPattern(2003)
        self.failIf(yp.matches(self.d))
I've only shown the first two test cases (in the fixture previously known as
NewTests) but now all of the test cases refer to the date as
self.d instead of constructing a local date instance.
It's not a huge refactoring, but it makes me feel better. You do want me to
feel the best I can about my code, don't you? Of course you do.
I did have to change testCompositeWithoutYearMatches to use
this date instead of my anniversary. As cute as it was to throw that date in
there, I decided I'd rather have clean code without duplication than
cuteness.
I also took this opportunity to add some named constants for weekdays:
MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY = range(0, 7)
Now I can use these instead of the hard-coded constants I had in the weekday
tests, and also delete the comments I had explaining what the constants
represented. Intention-revealing code beats comments any day of the week.
Where am I now? Before switching gears, I planned to write a test for a
pattern that matched the last Thursday of every month. It's time to do that
now:
def testLastThursdayMatches(self):
    cp = CompositePattern()
    cp.add(LastWeekdayPattern(THURSDAY))
    self.failUnless(cp.matches(self.d))
Cool. A new class to implement!
The implementation for this class is slightly more complicated than the
others:
class LastWeekdayPattern:
    def __init__(self, weekday):
        self.weekday = weekday

    def matches(self, date):
        nextWeek = date + datetime.timedelta(7)
        return self.weekday == date.weekday() and nextWeek.month != date.month
Oops. It doesn't pass. Why?
The date I'm trying to match in the test is a Wednesday, not a Thursday! I need to fix the test (not forgetting to rename it) and add a test where I expect the match to fail (which I should have done before implementing matches):
def testLastWednesdayMatches(self):
    cp = CompositePattern()
    cp.add(LastWeekdayPattern(WEDNESDAY))
    self.failUnless(cp.matches(self.d))

def testLastWednesdayDoesNotMatch(self):
    cp = CompositePattern()
    cp.add(LastWeekdayPattern(WEDNESDAY))
    self.failIf(cp.matches(self.d))
Rats. The first test passes but the second one fails. The date created in
setUp is the same for every test case so it will always be a
Wednesday, but I need a date that's not on a Wednesday to make this test pass.
Rather than creating a new date in this test case (and ignoring the one created
in setUp), I'll move both of these tests into a new fixture--one
specific for testing the LastWeekdayPattern class:
class LastWeekdayPatternTests(unittest.TestCase):
    def setUp(self):
        self.pattern = LastWeekdayPattern(WEDNESDAY)

    def testLastWednesdayMatches(self):
        lastWedOfSep2004 = datetime.date(2004, 9, 29)
        self.failUnless(self.pattern.matches(lastWedOfSep2004))

    def testLastWednesdayDoesNotMatch(self):
        firstWedOfSep2004 = datetime.date(2004, 9, 1)
        self.failIf(self.pattern.matches(firstWedOfSep2004))
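Outside the unittest fixture, the same logic can be exercised directly. Here is a quick standalone sketch (it repeats LastWeekdayPattern and the weekday constants from above; the two dates are arbitrary examples I picked):

```python
import datetime

MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY = range(0, 7)

class LastWeekdayPattern:
    def __init__(self, weekday):
        self.weekday = weekday

    def matches(self, date):
        # Last <weekday> of the month: the weekdays agree, and one week
        # later we have crossed into the next month.
        nextWeek = date + datetime.timedelta(7)
        return self.weekday == date.weekday() and nextWeek.month != date.month

pattern = LastWeekdayPattern(THURSDAY)
print(pattern.matches(datetime.date(2004, 9, 30)))  # True: the last Thursday of Sep 2004
print(pattern.matches(datetime.date(2004, 9, 23)))  # False: another Thursday remains in September
```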
Isn't it nice being able to add new functionality without changing existing
classes? This is part of what Bertrand Meyer called the Open-Closed Principle:
"Software entities (classes, modules, functions, etc.) should be open for
extension, but closed for modification." With the new approach of using the
Composite pattern, I can extend the behavior of the system by writing a new
class with a matches method and passing an instance of that class
into add. This is one of the most fundamental principles of
object-oriented design that, too often, gets lost in the shuffle.
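For instance, suppose I later need a pattern that matches any weekend day. A hypothetical WeekendPattern (my example, not from the article) slots straight into the composite; CompositePattern is repeated here unchanged so the sketch runs on its own:

```python
import datetime

SATURDAY, SUNDAY = 5, 6

class CompositePattern:
    # Unchanged, exactly as defined earlier in the article.
    def __init__(self):
        self.patterns = []

    def add(self, pattern):
        self.patterns.append(pattern)

    def matches(self, date):
        for pattern in self.patterns:
            if not pattern.matches(date):
                return False
        return True

class WeekendPattern:
    # The new behavior: no existing class had to be touched.
    def matches(self, date):
        return date.weekday() in (SATURDAY, SUNDAY)

cp = CompositePattern()
cp.add(WeekendPattern())
print(cp.matches(datetime.date(2004, 9, 25)))  # True: a Saturday
print(cp.matches(datetime.date(2004, 9, 29)))  # False: a Wednesday
```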
Now they both pass and the tests use everything they create. Nice.
While moving these tests into a new fixture, I also noticed that I created a
CompositePattern that contained only one pattern. That's kind of
pointless, so I stopped doing it.
Should I move the test cases that exercise the various pattern classes into
their own fixtures? That's a tendency that many, including me, often have. It's
sometimes useful to resist that urge, though. As Dave Astels, author of Test-Driven Development: A
Practical Guide, puts it: a fixture is "a way to group tests that need to
be set up in exactly the same way." In other words, a fixture is not a
container for all of the tests for a single class, or at least, it doesn't have to
be.
Having said that, I prefer it when all of the test cases in a fixture
exercise the same class. In harmony with Dave's definition of fixtures, I just
don't require that all of the test cases that exercise the same class be
in the same fixture. Make sense?
Suppose that I have four test cases for the class Foo but half
require different setUp code than the other half. I'd split those
cases up into two fixtures, even though they both exercise the same class. If I
then started writing tests for the class Bar and discovered that some
of its tests could use the same setUp code as one of the class
Foo fixtures, I would not just shove those tests into one
of the existing fixtures.
Wouldn't that mean I've duplicated the setUp code for the two
fixtures? Duplication is evil! If I thought the duplication was enough to be a
problem, I would Extract
Superclass the duplicated code out of the two fixtures. Yes, you can--and
should--refactor your tests, too.
When starting on a new project, I create one fixture with no
setUp method and add all of my test cases to that one fixture.
Eventually, I reach the point where I need to refactor the fixture, and I do it.
Remember: do the simplest thing that could possibly work first. Then refactor
if necessary.
What, though, is the benefit of ensuring that all of the test cases in a
fixture only exercise one class? Well, besides making it more cohesive (and
obeying the Single
Responsibility Principle (PDF)), think about what might happen when you decide a
class is no longer necessary. You'll need to delete the tests for that class,
too. It's a lot easier to delete a whole test fixture than to look at each test
case in a fixture to see if it exercises the class you just deleted.
Think you won't delete classes? Think again. You saw me delete the
DatePattern class and all of its tests earlier, didn't you? It
wasn't hard. I felt good about it, too.
Static Python plotting libraries are an incredible tool for data visualization and analysis, but the problem is that they don't offer interactive plots. That is what we will build here using plotly and cufflinks.
Installing Libraries
The first thing you have to do before starting any python code is to make sure to install the needed libraries. In our case, we need to install plotly and cufflinks either using conda or pip depending on your Python version:
Install libraries using conda:
conda install -c plotly plotly
conda install -c conda-forge cufflinks-py
Install libraries using pip:
pip install plotly
pip install cufflinks
Getting Started
Essentially, Plotly is a company that owns this library. It lets you create data visualizations on its servers (much like a hosted notebook such as Google Colab), but it also provides an offline version of the library as a Python package.
Begin our code by importing this library and other needed libraries such as numpy to create our dataset and Pandas so you can import your data as a data-frame, also cufflinks that convert plotly visualization to a JavaScript code so you can display it on the browser.
import pandas as pd
import numpy as np
After that we import some packages from the offline version of plotly and cufflinks:
import cufflinks as cf
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
As we said earlier, plotly plots are interactive, and that is their power compared to the other plotting libraries.
To make plotly able to do that, we first need to connect the Jupyter Notebook to JavaScript, since plotly uses the power of the JavaScript programming language to render an interactive dashboard in the browser.
To do that, make sure to run this command:
init_notebook_mode(connected=True)
After we import cufflinks we need also to make it run in the offline mode:
cf.go_offline()
Create Our Dataset
Our dataset will be 400 random points arranged as 100 rows and 4 columns. (Note that np.random.rand() actually draws from a uniform distribution on [0, 1); for normally distributed data you would use np.random.randn().) We create the values with the np.random.rand() function from the numpy library, import them into pandas as a data-frame using the pd.DataFrame() function, and lastly specify the column names, here ["A", "B", "C", "D"], inside the pandas DataFrame call:
df = pd.DataFrame(np.random.rand(100, 4), columns=["A", "B", "C", "D"])
We can get an overview of the dataset using the df.head() function:
df.head()
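Because np.random.rand() is unseeded, every run of the snippet above produces different numbers. Here is a plot-free variant for checking what was built (my sketch; it assumes only numpy and pandas are installed and fixes the seed for reproducibility):

```python
import numpy as np
import pandas as pd

np.random.seed(42)  # fix the seed so every run yields the same numbers
df = pd.DataFrame(np.random.rand(100, 4), columns=["A", "B", "C", "D"])

print(df.shape)  # (100, 4)
# rand() draws uniformly from [0, 1), so every value sits in that range
print(bool(((df.values >= 0.0) & (df.values < 1.0)).all()))  # True
```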
The Line Plot
Now, after creating our dataset, we plot our data as an interactive line-plot dashboard using plotly. We make use of the iplot() function that we imported earlier from plotly.offline and call it on the pandas df variable:
df.iplot()
As you can see above, the data-frame has been converted to an interactive dashboard: when you hover over the graph you see values for every index point, and you can zoom in on any particular section by selecting it.
And if you want to hide some lines, just click on one of the four entries at the right of the plot (the legend); you can also download the graph as a PNG picture, pan it right or left, and much more.
The Scatter Plot
The iplot() function is essentially what is responsible for converting a dataset to a nice and interactive visualization, and it is not limited to the line plot.
To create a scatter plot using plotly the iplot() function will take some extra arguments such as:
- kind: The type of the plot (scatter plot for example)
- x: The data for X-axis
- y: The data for Y-axis
- mode: The type of points
- size: The size of the points (self-explanatory)
Run the following command on you Notebook:
df.iplot(kind="scatter", x="A", y="B", mode="markers", size=20)
The scatter plot plots every x point against its corresponding y point. Just like with the previous graph, go ahead and interact with this visualization: zoom in on a certain section, hover over some points, and so on.
The Bar Plot
The bar plot is essentially a graph that represents a categorical column proportionally to its corresponding values. Just like what we did with the scatter plot, we need to change the kind argument to "bar" in order to produce a bar plot.
But before doing so, let's create another dataset that is suitable for a bar plot, because the previous dataset will not give us a nice bar plot visualization. We will create a dataset with two columns (Category and Values) and three rows (A, B, C) using the Pandas library.
df2 = pd.DataFrame({"Category":["A", "B", "C"], "Values":[30, 40, 50]})
df2.head()
Let’s convert this data into a bar plot using this command:
df2.iplot(kind="bar", x="Category", y="Values")
The Box Plot
A box plot is essentially a representation of a group of numerical data through its quartiles. To make a box plot using plotly, just pass the value "box" in the kind argument; we will use our first dataset df because it has a lot of data to show:
df.iplot(kind="box")
The 3D Surface Plot
Since the 3D surface plot is a diagram of a three-dimensional graph we need to create a new dataset with x, y, z variables, let’s see that in the code:
df3 = pd.DataFrame({"x":[1, 2, 3, 4, 5], "y":[30, 40, 50, 40, 30], "z":[5, 4, 3, 2, 1]})
df3.head()
Plotting a 3D surface plot needs one argument, the value "surface" for the kind argument, and it can take another one:
- colorscale: The color palette of the 3D graph
df3.iplot(kind="surface", colorscale="rdylbu")
The graph above is a 3D plot, which means you can rotate it and play around with it. It is worth noting that the colorscale argument is optional, so there is no need to specify it, since it has a default colorscale.
The Spread Plot
This kind of graph is used a lot with financial data and analysis, such as stock data. We make use of the two columns (A and B) of our first dataset df and plot them:
df[["A", "B"]].iplot(kind="spread")
After running the Notebook cell, you will get two graphs, a plot and a subplot: a line plot that compares the two columns against each other, and a spread plot that displays the spread between them.
The Bubble Plot
The bubble plot is pretty similar to the scatter plot discussed previously, except that it draws each point with a size driven by a variable you specify, the C column in our case. See the code below:
df.iplot(kind="bubble", x="A", y="B", size="C")
The code above is quite similar to the scatter plot that was discussed earlier, except that in the size argument we specified a column name instead of a particular number. This kind of graph is commonly applied in reports such as world-happiness rankings by nation and so on.
Conclusion
In this article, we have seen that plotly is an extremely powerful tool for making nice visualizations and interactive dashboards, unlike other libraries such as matplotlib or seaborn. I suggest visiting the official plotly documentation to see the other options that are available in this tool.
Note: This is a guest post, and the opinion in this article is of the guest writer. If you have any issues with any of the articles posted at please contact at asif@marktechpost.com | https://pythonlearning.org/2020/01/02/interactive-dashboard-using-plotly-and-cufflinks/ | CC-MAIN-2020-16 | refinedweb | 1,259 | 53.14 |
On 14 Apr 1997, Mark Eichin wrote:
> >> C++'s basic things have been unchaged for what, 6-7 years now?
>
> *snicker* *snort* *splutter* *wipe, wipe, the tea off the keyboard*
>
> Sorry about that, it's not aimed at you, just at the idea that C++ is
> even vaguely "solid" on that kind of time scale. But, well, 5 years
> ago C++ didn't even *have* templates. (I know -- Ken Raeburn and I
> were still exchanging email with people at AT&T on the design, and
> subcontracting to Cygnus on the implementation.)

Okay, templates are newer, but I don't really include them in the basic things :>

As far as that goes, I have done a lot of work with commercial C++ compilers on OS/2 + QNX + DOS, and let me say that across the compilers there is a pretty uniform implementation of C++. Yes, there are some bumps in some obscure things, but nothing fatal. Today, I can write C++ code with templates, exceptions and RTTI and be reasonably confident that my code will compile on at least 4 commercial compilers. If I actually test with those 4 compilers I will have code that compiles on them all, without that much trouble.

> 4 years ago, G++ didn't have *nested types* working; since Cfront 1.0
> didn't even have the concept, this was fair -- it was introduced in
> Cfront 2.x, and G++ needed a fair bit of internal change to deal (with
> not keeping a global type list, in more subtle ways than seemed
> possible at the time :-) I know it was 4 years ago because I'd been
> hired by Cygnus at the time, and it was one of my first couple of
> projects (which started out as "there are some bugs in nested types"
> and exploded from there :-)

G++ seems to be just *nasty* with respect to C++; I don't know why it lags so far behind or has so many unfixed problems and so on. Maybe 2.8 will fix them, maybe it won't, I do not know.
That makes Muse a program that compiles on:

  G++
  High C++
  IBM VAC++
  Borland C++
  Watcom C++

that I have tried. It uses templates all over the place, no exceptions or RTTI. I also do not have much in the way of compiler-specific ifdefs; the most major is to #define min/max using >? and _max for G++ and HC.

> Even now there are lots of "corners" of templates that simply don't
> work, even if you stick with G++. (Don't even *start* to consider C++
> as a language from multiple vendors...)
>
> And the upcoming (what, 6 months worth of voting left, last I
> checked?) ISO/ANSI C++ standards are adding *new keywords* like "use"
> which are going to break existing code merely by existing :-) Lots

This is part of the namespace stuff if I recall. I used a compiler which supported this 12 months ago, High C++.

Jason
I want to convert this list of tuples into a dictionary. I can manage the conversion when each key has only one value, but here each tuple carries two values. I will demonstrate with more details below:
`List of tuples: [('Samsung', 'Handphone',10), ('Samsung', 'Handphone',-1),('Samsung','Tablet',10),('Sony','Handphone',100)]`
As you can see above, I am trying to identify 'Samsung' as the key and 'Handphone' and '10' as the values with respect to the key.
My desired output would be:
`Output: {'Sony': ['Handphone',100], 'Samsung': ['Tablet',10,'Handphone', 9]}`
In the above, the item 'handphone' and 'tablet' are groups according to the key values which in my case are Sony and Samsung. The quantity of the item is added/subtracted if they belong to the same item and same key (Samsung or Sony).
I would appreciate any suggestions and ideas that you guys have in order to achieve the above output. I ran out of ideas. Thank you.
Good opportunity for defaultdict
from collections import defaultdict

the_list = [
    ('Samsung', 'Handphone', 10),
    ('Samsung', 'Handphone', -1),
    ('Samsung', 'Tablet', 10),
    ('Sony', 'Handphone', 100)
]

d = defaultdict(lambda: defaultdict(int))
for brand, thing, quantity in the_list:
    d[brand][thing] += quantity
Result will be
{
    'Samsung': {
        'Handphone': 9,
        'Tablet': 10
    },
    'Sony': {
        'Handphone': 100
    }
}
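If the exact flat-list shape from the question is needed ('Samsung': ['Handphone', 9, 'Tablet', 10] rather than a nested dict), the inner dicts can be flattened in a second pass:

```python
from collections import defaultdict

the_list = [
    ('Samsung', 'Handphone', 10),
    ('Samsung', 'Handphone', -1),
    ('Samsung', 'Tablet', 10),
    ('Sony', 'Handphone', 100)
]

totals = defaultdict(lambda: defaultdict(int))
for brand, item, qty in the_list:
    totals[brand][item] += qty

# Interleave each brand's item names and summed quantities into one list
flat = {brand: [x for item, qty in items.items() for x in (item, qty)]
        for brand, items in totals.items()}

print(flat)
# {'Samsung': ['Handphone', 9, 'Tablet', 10], 'Sony': ['Handphone', 100]}
```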
java.io.LineNumberInputStream
Superclass: java.io.FilterInputStream
Immediate Subclasses: None
Availability: Deprecated as of JDK 1.1
The LineNumberInputStream class is an
InputStream that keeps track of line numbers. The
line number starts at 0 and is incremented each time an end-of-line
character is encountered. LineNumberInputStream
recognizes "\n", "\r", or
"\r\n" as the end of a line. Regardless of
the end-of-line character it reads,
LineNumberInputStream returns only
"\n". The current line number is
returned by getLineNumber(). The
mark() and reset() methods are
supported, but only work if the underlying stream supports
mark() and reset().
The LineNumberInputStream class
is deprecated as of JDK 1.1 because it does not perform any byte to character
conversions. Incoming bytes are directly compared to end-of-line characters.
If you are developing new code, you should use LineNumberReader
instead.
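As a quick illustration (not part of the original reference), the sketch below pushes all three end-of-line forms through a LineNumberInputStream and checks both the line count and the carriage-return conversion. Because the class is deprecated, expect a compiler warning; LineNumberReader is the modern replacement:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.LineNumberInputStream;

public class LineNumberDemo {
    public static void main(String[] args) throws IOException {
        // One line ended by "\r\n", one by "\r", one by "\n"
        byte[] data = "alpha\r\nbeta\rgamma\n".getBytes();
        LineNumberInputStream in =
            new LineNumberInputStream(new ByteArrayInputStream(data));

        StringBuilder text = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            text.append((char) c);
        }

        System.out.println(in.getLineNumber());      // 3: each terminator counted once
        System.out.println(text.indexOf("\r") < 0);  // true: every "\r" came back as "\n"
    }
}
```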
public class java.io.LineNumberInputStream
             extends java.io.FilterInputStream {
    // Constructors
    public LineNumberInputStream(InputStream in);

    // Instance Methods
    public int available();
    public int getLineNumber();
    public void mark(int readlimit);
    public int read();
    public int read(byte[] b, int off, int len);
    public void reset();
    public void setLineNumber(int lineNumber);
    public long skip(long n);
}
LineNumberInputStream

public LineNumberInputStream(InputStream in)

Parameters:
in: The input stream to use.

Description:
This constructor creates a LineNumberInputStream that gets its data from in.
available

public int available() throws IOException

Returns:
The number of bytes that can be read without blocking.

Throws:
IOException: If any kind of I/O error occurs.

Overrides:
FilterInputStream.available()

Description:
This method returns the number of bytes of input that can be read without having to wait for more input to become available.
getLineNumber

public int getLineNumber()

Returns:
The current line number.

Description:
This method returns the current line number.
mark

public void mark(int readlimit)

Parameters:
readlimit: The maximum number of bytes that can be read before the saved position becomes invalid.

Overrides:
FilterInputStream.mark()

Description:
This method tells the LineNumberInputStream to remember its current position. A subsequent call to reset() causes the object to return to that saved position and thus reread a portion of the input. The method calls the mark() method of the underlying stream, so it only works if the underlying stream supports mark() and reset().
read

public int read() throws IOException

Returns:
The next byte of data or -1 if the end of the stream is encountered.

Overrides:
FilterInputStream.read()

Description:
This method reads a byte of input from the underlying stream. If "\n", "\r", or "\r\n" is read from the stream, "\n" is returned. Otherwise, the byte read from the underlying stream is returned verbatim. The method blocks until the byte is read, the end of stream is encountered, or an exception is thrown.
read

public int read(byte[] b, int off, int len) throws IOException

Parameters:
b: An array of bytes to be filled from the stream.
off: An offset into the byte array.
len: The number of bytes to read.

Returns:
The actual number of bytes read or -1 if the end of the stream is encountered immediately.

Overrides:
FilterInputStream.read(byte[], int, int)

Description:
This method reads up to len bytes of input into the given array starting at index off. If "\n", "\r", or "\r\n" is read from the stream, "\n" is returned. The method does this by repeatedly calling read(), which is not efficient, especially if the underlying stream is not buffered. The method blocks until some data is available.
reset

public void reset() throws IOException

Throws:
IOException: If there was no previous call to this FilterInputStream's mark() method or the saved position has been invalidated.

Overrides:
FilterInputStream.reset()

Description:
This method calls the reset() method of the underlying stream. If the underlying stream supports mark() and reset(), this method sets the position of the stream to a position that was saved by a previous call to mark(). Subsequent bytes read from this stream will begin from the saved position and continue normally. The method also restores the line number to its correct value for the mark location. The method only works if the underlying stream supports mark() and reset().
setLineNumber

public void setLineNumber(int lineNumber)

Parameters:
lineNumber: The new line number.

Description:
This method sets the current line number of the LineNumberInputStream. The method does not change the position of the stream.
skip

public long skip(long n) throws IOException

Parameters:
n: The number of bytes to skip.

Returns:
The actual number of bytes skipped.

Overrides:
FilterInputStream.skip()

Description:
This method skips n bytes of input. Note that since LineNumberInputStream returns "\r\n" as a single character, "\n", this method may skip over more bytes than you expect.
Inherited Methods

Method: Inherited From
clone(): Object
equals(Object): Object
finalize(): Object
getClass(): Object
hashCode(): Object
markSupported(): FilterInputStream
notify(): Object
notifyAll(): Object
read(byte[]): FilterInputStream
toString(): Object
wait(): Object
wait(long): Object
wait(long, int): Object
See Also:
FilterInputStream, InputStream, IOException, LineNumberReader
First off, this is my first article on Code Project. Actually, it is my first article ever! I can't wait to get some feedback, and I welcome any criticism. I am by no means a C# expert, so please feel free to express your expert opinions.
The problem that I am going to address in this article is how to pass some Transact-SQL text to a specified SQL Server instance, and ask it to parse the code, returning any syntax errors. This would be useful in a case where your application allows the user to enter some T-SQL text to execute, or where T-SQL gets executed dynamically from script files, or whatever. The possibilities are endless, just bear in mind the security implications that this might have if this article inspires you to implement such a design.
Say, for example, you have a system which enables one user (with special access privileges, of course) to write T-SQL code and store it on the system in the form of scripts (in the database or in files). Then another user of the system would come in and choose one of these scripts based on the name and description provided by the programmer, and then click a button to execute it. Obviously you need some mechanism to check the validity of the code before allowing it to be stored on the system. This is where my solution would hopefully be useful.
I did some research and decided to include some background on the inner workings of SQL Server, or any other DBMS for that matter. So what really happens when your applications execute queries on the database? Is there a specific process that the DBMS follows to return the requested data, or to update or delete a subset of data? What happens under the hood of your preferred DBMS is quite complicated, and I will only explain or mention key processes on a high level.
SQL Server is split into multiple components, and most of these components are grouped to form the Relational Engine and the Storage Engine. The Relational Engine is responsible for receiving, checking and compiling the code, and for managing the execution process, while the Storage Engine is responsible for retrieving, inserting, updating or deleting the underlying data in the database files. The component that I want to touch base with is the Query Processor, which is part of the Relational Engine.
As the name suggests, it is the Query Processor's job to prepare submitted SQL statements before it can be executed by the server. The Query Processor will go through three processes before it can provide an Execution Plan. This execution plan is the most optimal route chosen by the DBMS for servicing the query. The three processes mentioned include:
The parser checks for syntax errors including correct spelling of keywords. The normalizer performs binding, which involves checking if the specified tables and columns exist, gathering meta data about the specified tables and columns, and performing some syntax optimizations. Programmers frequently use the term Compilation to refer to the compilation and optimization process. True compilation only affects special T-SQL statements such as variable declarations and assignments, loops, conditional processing, etc. These statements provide functionality to SQL code, but they do not form part of DML statements such as SELECT, INSERT, UPDATE or DELETE. On the other hand, only these DML statements need to be optimized. Optimization is by far the most complex process of the Query Processor. It employs an array of algorithms to first gather a sample of suitable execution plans, and then filters through them until the best candidate is chosen.
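These stages can even be observed directly from T-SQL. As a hedged illustration (run it in a scratch database; NoSuchTable is deliberately bogus): SET PARSEONLY ON stops the Query Processor after the parsing step, while SET NOEXEC ON lets it parse, bind, and compile but skips execution, so the two switches catch different classes of errors:

```sql
-- Parsing only: syntax is checked, object names are never resolved.
SET PARSEONLY ON;
SELECT * FROM NoSuchTable;   -- accepted: the syntax is valid
SET PARSEONLY OFF;
GO

-- Compile without executing: binding runs, so the missing table is caught.
SET NOEXEC ON;
SELECT * FROM NoSuchTable;   -- rejected: "Invalid object name 'NoSuchTable'"
SET NOEXEC OFF;
GO
```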
After the optimal execution plan is determined and returned by the Query Processor, it is stored in a cache. SQL Server will automatically determine how long to keep this execution plan within the cache as it might get reused often. When an application executes a query, SQL Server checks if an execution plan exists in the cache for the query. SQL Server generates a cache key based on the query text, and searches for the same key in the cache. Queries need to be recompiled and reoptimized when metadata changes such as column definitions or indexes, but not for changes in parameters, system memory, data in the data cache, etc.
Finally, the Query Processor communicates the execution plan to the Storage Engine and the query is executed.
I have created and included a simple code editor application, but keep in mind that the main purpose of this article is to provide you with a parse function, and only code snippets and notes revolving around this point will be covered here. There are lots of very useful articles out there for building WPF applications. I will assume that you have some experience with Visual Studio and C#. I have included the example app which was written in Visual C# Express 2010 as a WPF Application. Knowledge of WPF is not required as I will explain the relevant C# code in detail.
Essentially, I want my application to have an Execute button and a Parse button (like in MS SQL Server Management Studio). When the Execute button is pressed, SQL Server will go through the whole process described above to first prepare the statements and then determine the execution plan before the query is executed. The Parse button, naturally, should only parse the query. I am going to create a class that encapsulates all my ADO.NET objects and provides Execute and Parse methods for wiring functionality to the buttons. I will also provide methods for connecting to and disconnecting from the SQL Server instance with a specified connection string. The class is called SqlHandler.
SqlHandler
This class encapsulates and hides the following objects:
SqlConnection conn
SqlCommand cmd
SqlDataAdapter adapter
List<SqlError> errors
You need to add using System.Data.SqlClient; and using System.Data; to the using directives at the top of the code file, as I am sure you know. The conn object is used for connecting to the database. The ConnectionString property directly gets and sets the conn.ConnectionString property, which allows you to get or set the connection string from outside the class. The cmd object is used to execute commands, and adapter is used to obtain query results from the database. The errors object is a generic list of type SqlError. This list will be used to capture and return errors generated while executing or parsing T-SQL code.
Most of you reading this article will already be familiar with these ADO.NET classes. When developing applications with ADO.NET, I usually use only a few selected properties and methods. The SqlConnection class contains a FireInfoMessageEventOnUserErrors property and an InfoMessage event that are less well known and less often used (in my opinion, at least). I had to discover them myself by digging through the objects, as I could not find a relevant article explaining how to accomplish what I wanted. Eventually, through trial and error, I got a working solution.
FireInfoMessageEventOnUserErrors is a boolean property. When set to false (default), the InfoMessage event will not be fired when an error occurs, and an Exception will be raised by the ADO.NET API. When set to true, an Exception will not be thrown, but the InfoMessage event will be fired. For my code to work, I had to enable this event to catch all the messages through the SqlInfoMessageEventArgs event argument object. The following code snippet shows how to set this property and event in the constructor:
conn.FireInfoMessageEventOnUserErrors = true;
conn.InfoMessage += new SqlInfoMessageEventHandler(conn_InfoMessage);
conn_InfoMessage is the name of the event handler method that will be called when the event fires. It is important to note that although this looks like an asynchronous operation, it is in fact synchronous. This means that when the T-SQL query is executed by passing it to cmd.ExecuteNonQuery or to adapter.Fill, the event will be fired before execution continues. This allows us to collect all the messages into the errors list before returning from the Execute and Parse methods of our class, where ExecuteNonQuery and Fill are called. The snippet below shows how the messages are caught in the event handler.
private void conn_InfoMessage(object sender, SqlInfoMessageEventArgs e)
{
//ensure that all errors are caught
SqlError[] errorsFound = new SqlError[e.Errors.Count];
e.Errors.CopyTo(errorsFound, 0);
errors.AddRange(errorsFound);
}
It is important to mention that the event will be fired for every error that the T-SQL script might contain. For instance, if your script contains two errors, the conn_InfoMessage event handler will be called twice! I only discovered this while testing my application, when I tried to parse a script containing multiple errors. The initial result was that my Parse method always returned only one error, while SSMS reported the correct number of errors for the same script. Only when I inserted a message box in the event handler did I discover how it works. The reason this was misleading is that the second argument of our event handler, the e object of type SqlInfoMessageEventArgs, has an Errors property. This property is of type SqlErrorCollection, which to me implied that it contains multiple SqlError objects. Naturally, I assumed that this collection would contain all the errors at once. After a few code modifications, I got the desired result. What happens now is that every time the event is fired, an SqlError array is created and the e.Errors collection of SqlError objects is copied to this array. Even though this collection contained exactly one item every time I tested my code, I make sure that all the SqlError objects are captured, just to be safe. This whole array is then copied to the errors list, which is a private field within my class definition. This list is used to aggregate all the errors before returning them to the client code. Another point worth mentioning is that the errors list has to be cleared every time Parse or Execute is called.
The first parameter of this method, sqlText, contains the T-SQL code to be executed. The second parameter is an SqlError array. Take notice of the out keyword: this makes it an out parameter, so we have to set its value somewhere in the method. This allows the method to return both a DataTable object (through the normal return type and return statement) and an array containing our SqlError objects. The client code will be responsible for checking the length of the array to determine whether any errors were generated.
public DataTable Execute(string sqlText, out SqlError[] errorsArray)
{
if (!IsConnected)
throw new InvalidOperationException("Can not execute Sql query while the connection is closed!");
errors.Clear();
cmd.CommandText = sqlText;
DataTable tbl = new DataTable();
adapter.Fill(tbl);
errorsArray = errors.ToArray();
return tbl;
}
First, we test whether the connection is open using the IsConnected property, and throw an exception if it is not. Next, the errors list is cleared to prevent reporting errors previously encountered. The query is then executed using adapter.Fill(tbl), where tbl is a reference to a new DataTable object. This table will be filled with data if the T-SQL code returns any. As mentioned earlier, the InfoMessage event is raised synchronously, so the line after the call to Fill will only be executed after all errors have been raised through the event. All errors (if any) are copied to a new array of SqlError objects. This array is assigned to the out parameter errorsArray, allowing the client of our class to check whether any errors were encountered. Remember that no exceptions will be thrown when you set FireInfoMessageEventOnUserErrors to true.
This method accepts one parameter, sqlText, which contains the T-SQL code to be parsed. It returns an array of SqlError objects. The client code should test the length of this array to determine whether any errors were generated.
public SqlError[] Parse(string sqlText)
{
if (!IsConnected)
throw new InvalidOperationException("Can not parse Sql query while the connection is closed!");
errors.Clear();
cmd.CommandText = "SET PARSEONLY ON";
cmd.ExecuteNonQuery();
cmd.CommandText = sqlText;
cmd.ExecuteNonQuery(); //conn_InfoMessage is invoked for every error, e.g. 2 times for 2 errors
cmd.CommandText = "SET PARSEONLY OFF";
cmd.ExecuteNonQuery();
return errors.ToArray();
}
Again, we throw an exception if the connection is not open, and we clear the errors list. SQL Server has an option, PARSEONLY, that we will use to prevent further processing of our T-SQL code beyond the parse phase. Before our sqlText string is executed, the PARSEONLY option is set to ON; afterwards, it is set back to OFF. There is a potential pitfall here: what if the client code is a console-type application, and the user executed the command SET PARSEONLY ON to explicitly prevent further execution beyond the parse phase? When the client code then calls the Parse method, PARSEONLY will be set back to OFF before the method returns, without the user's knowledge. Workarounds for this problem will not be explored further in this article, because the implementation will differ per the requirements of the project.
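For illustration, client code might call Parse like this. This is a hypothetical console-style sketch of our own — the handler variable, the connection string, and the sample query are not part of the article's WPF application; only the Connect and Parse members come from the text:

```csharp
SqlHandler handler = new SqlHandler();
handler.Connect("Data Source=.;Initial Catalog=Test;Integrated Security=True"); // assumed connection string

// Parse some (deliberately broken) T-SQL without executing it
SqlError[] parseErrors = handler.Parse("SELECT * FROM dbo.SomeTable WHERE");

if (parseErrors.Length > 0)
{
    foreach (SqlError error in parseErrors)
        Console.WriteLine("Line {0}: {1}", error.LineNumber, error.Message);
}
else
{
    Console.WriteLine("Command(s) completed successfully.");
}
```

Because Parse returns an array rather than throwing, the client simply tests the array's length, mirroring how SSMS reports parse results.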
The ConnectionString property of our SqlHandler class "forwards" to the ConnectionString property of the SqlConnection object that it encapsulates. In the constructor, the ConnectionString is initialized to a "template" connection string; you have to manually insert the Data Source and Initial Catalog values into the string. The Connect method accepts a string argument containing a connection string, which replaces the existing connection string on the SqlConnection object.
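The article describes the connection methods without listing them; based on that description they might look roughly like this. The bodies below are our assumption — only the names Connect and IsConnected (and the fact that the class can disconnect) come from the text:

```csharp
public bool IsConnected
{
    get { return conn.State == ConnectionState.Open; }
}

public void Connect(string connectionString)
{
    if (IsConnected)
        conn.Close();

    conn.ConnectionString = connectionString; // replaces the "template" string
    conn.Open();
}

public void Disconnect()
{
    if (IsConnected)
        conn.Close();
}
```

Checking conn.State before opening or closing keeps the methods safe to call regardless of the current connection state.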
My sample project contains the SqlHandler class and a small test application. The application provides some basic text editor functionality such as opening files, saving files, cut, copy and paste. Furthermore, it uses the SqlHandler object's methods to enable connecting to and disconnecting from a SQL Server instance, and executing and parsing SQL code. The layout of the main window was designed to be familiar-looking, with the menu and toolbar at the top, the text area in the middle, and an error grid and status bar at the bottom. When you build and run the application, a Connection dialog window will pop up. In this window, you have to enter a valid connection string to connect to a SQL Server instance. Keep in mind that this application is not multi-threaded. As a result, entering a bad connection string will cause the interface to "hang" while the connection times out and eventually returns with an error message.
I have created a region in the SqlHandler class for housing custom RoutedUICommand objects for binding my own commands to the user interface. I put them in their own separate region because they have nothing to do with the rest of the class. These command objects are all static, and the class also defines a static constructor for initializing them. These commands could also have been placed in a separate class.
Type your T-SQL text in the text area in the middle of the window. To parse the code, press the Parse button, or press the F6 key on the keyboard. To execute the code, press the Execute button, or press F5 on the keyboard. Both the Parse and Execute functions will report errors in the errors grid at the bottom of the application. The errors grid is nested within an expander which will pop up automatically when errors are generated. When you execute a query that returns a result set, a result viewer window will appear. Parsing and executing will be disabled when the application is not connected to a SQL Server instance, as defined by the command bindings.
When you parse a query that references invalid database objects, such as tables or columns that do not exist, no errors will be returned. Remember from the Background section that parsing does not include binding.
Compliments to the author of the icon set, which can be downloaded here for free.
Visual Studio has some nifty little tools that can make your life easier. One of them inserts appropriate code snippets where they are expected when you press the Tab key. This is useful, for example, when you are registering the InfoMessage event. Type the following line of code: conn.InfoMessage +=. You should see a little pop-up box...
Press Tab once and it will complete the line for you based on the required delegate for the event. Press Tab again and it will generate the event handler method for you. The event handler will already be set up to contain the correct arguments, all you have to do is add your code.
SQL Server Pro, 23/10/1999, Inside SQL Server: Parse, Compile, and Optimize [online] Available at: [Accessed on 20th June.
The IO Library
Here, we'll explore the most commonly used elements of the System.IO module, including convenience functions that read and write an entire file without having to open it first.
Bracket
The bracket function comes from the Control.Exception module. It helps perform actions safely.

bracket :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
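As a minimal illustration (our own example, not from the book): bracket can pair opening a file with closing it, so the handle is released even if the action in between throws an exception.

```haskell
import System.IO
import Control.Exception

main :: IO ()
main =
  bracket
    (openFile "example.txt" WriteMode)   -- acquire the resource
    hClose                               -- always release it, even on error
    (\h -> hPutStrLn h "Hello, file!")   -- use the handle in between
```

The first argument acquires the resource, the second releases it, and the third does the actual work; bracket guarantees the release action runs whether or not the work succeeds.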
A File Reading Program
We can write a simple program that allows a user to read and write files. The interface is admittedly poor, and it does not catch all errors (such as reading a non-existent file). Nevertheless, it should give a fairly complete example of how to use IO. Enter the following code into "FileRead.hs," and compile/run:
module Main where

import System.IO
import Control.Exception

The program's main loop obtains a line and checks first to see if the first character is a `q.' If it is, it returns a value of unit type.
Note

The return function takes a value of type a and returns an action of type IO a. Thus, the type of return () is IO ().
(the take function takes an integer n and a list, and returns the first n elements of the list).
The doWrite function asks for some text, reads it from the keyboard, and then writes it to the specified file.
Issues
ZF-6119: Zend_Cache_Frontend_Page improvement for cache_with_cookie_variables and sessions
Description
Using sessions in an application may have a possibly undesired effect on Zend_Cache_Frontend_Page:
When trying to cache a page via the Zend_Cache_Frontend_Page frontend with the options cache_with_session_variables = true and cache_with_post_variables = false, no cache-id will be generated. The reason for this is that the session generates a volatile cookie with a predefined session name. Even if no other data is set in the cookies, no cache-id is generated. This is expected behaviour, but it may be undesired. After all, if you refrain from using cookies, you should be able to generate a cache-id.
An improvement should be built into Zend_Cache_Frontend_Page::_makePartialId(). If cache_with_session_variables is true, then the session name ("PHPSESSID" per default) index should not be taken into account when trying to create a partial Id for the $_COOKIES superglobal.
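A rough sketch of the proposed idea (hypothetical code, not the actual ZF patch; session_name() returns the configured session cookie name, "PHPSESSID" by default, and the md5/serialize hashing is our assumption about how the partial id is built):

```php
// Inside Zend_Cache_Frontend_Page::_makePartialId(), for the $_COOKIE superglobal:
$cookies = $_COOKIE;
unset($cookies[session_name()]);       // ignore the session cookie itself
$partialId = md5(serialize($cookies)); // build the partial id from what remains
```

With the session cookie excluded, a page that uses sessions but no other cookies can still produce a stable cache-id.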
Posted by Dolf Starreveld (dolfs) on 2009-04-04T17:43:47.000+0000
I'd go one step further and say that the namespace metadata ($_SESSION['__ZF'], etc.) should also not be taken into account. This too is data that the innocent developer may not be aware of as present in the SESSION array, and including it can cause the ID to change for every page.
The problem becomes even more lethal when combined with the various approaches found on the Web to implement an inactivity timeout. All invariably involve setting a "last activity time" into the session on each request (either directly, or indirectly through rememberMe, which calls regenerateId, etc.). Each page will cause the serialized version of the SESSION to change, causing a new key, causing a cache miss. All suggested solutions fail for this scenario (meaning they effectively negate the page cache if, for other reasons, the key must depend on the SESSION content).
Another approach involves subclassing Zend_Cache_Frontend_Page to generate ids differently, but unfortunately makeId (called in start) is private, forcing you to completely override start (which is not private), which seems rather inefficient and undesirable.
Posted by Dolf Starreveld (dolfs) on 2009-04-04T18:14:27.000+0000
Pardon — in the above comment, the remark about the visibility of makeId pertains to 1.6.x versions.
Posted by Fabien MARTY (fab) on 2009-07-17T11:03:31.000+0000
change Assignee because I'm inactive now.
3 years ago.
How to measure 1 and 0 in a digital signal?
I have a wind speed sensor which gives me a digital signal of 1s and 0s, with "1" at 3.3 V and "0" at 0 V. I use an mbed NXP LPC1768. I want to count how many "1"s are in my signal in 0.5 seconds. This is my code below:
The problem is that the counter outputs numbers like 87642, 72348, 93484, which I think is not right.
Does anyone have a solution?
I would much appreciate that.
1 Answer
3 years ago.
Hello Stjepan,
Below is a code counting pulses (rising edges) in 500ms.
#include "mbed.h" Serial pc(USBTX, USBRX); InterruptIn input(p9); Timeout timeout; volatile bool measuringEnabled = false; volatile int counter; //ISR counting pulses void onPulse(void) { if(measuringEnabled) counter++; } // ISR to stop counting void stopMeasuring(void) { measuringEnabled = false; } // Initializes counting void startMeasuring(void) { counter = 0; timeout.attach(callback(&stopMeasuring), 0.5); measuringEnabled = true; } int main() { input.rise(callback(&onPulse)); // assign an ISR to count pulses while(1) { startMeasuring(); while(measuringEnabled); // wait until the measurement has completed pc.printf("counter = %d\r\n", counter); } }
TIP FOR EDITING: You can copy and paste code into your question as text and enclose it within <<code>> and <</code>> tags as below (each tag on a separate line). Then it will be displayed as code in a frame.
<<code>>
#include "mbed.h"

DigitalOut led1(LED1);

int main() {
    while (1) {
        led1 = !led1;
        wait(0.5);
    }
}
<</code>>
So you are taking a polling approach, where you spin in a loop and continuously check the input. This could work if you don't have anything else to do in the program. A more common approach would be to set a rising-edge interrupt on p9. The hardware will look for the low-to-high transition for you and then run a specified function every time a low-to-high transition is detected.
You don't say what sensor you are using. We would need to know exactly what output it generates in order to decode it. Should we be counting pulses? So X number of pulses in 500ms = Y wind speed? This means you want to count the number of transitions from low input to high input in 500ms. Make a variable to keep track of current pin state. When the pin state changes from low to high, increment your counter.
Another possibility is the output time high vs time low represents the wind speed. In which case you just need to add another counter and increment it when the input = 0. At the end of 500ms you get a ratio of time_high : time_low.

posted by Graham S. 19 Jun 2017
In the previous tutorial, we have known that when an error occurs within a method of program, the method creates an exception object and hand over it to the runtime system (JVM).
The process of creating an exception object and handing it to the runtime system is called throwing an exception in Java. This exception object contains information about the exception name, the type of exception, and the state of the program where the exception occurred.
After a method throws an exception, JVM searches for a method in the call stack that contains a block of code that handles the exception. This process is called catching an exception.
A block of code that catches the exception thrown by JVM is called exception handler. This whole mechanism is called exception handling. In java, the basic concepts of exception handling are throwing an exception and catching it.
Hence, Java provides five essential keywords to handle an exception. They are: try, catch, finally, throw, and throws. These keywords can be used to handle exceptions properly.
In this tutorial, we will know the first two keywords try and catch block with example program. So, let’s proceed.
Try Block in Java
The try keyword marks a block of code or statements that might throw an exception. That's why a try block is also known as an exception-generating block. Java code that may generate an exception during the execution of the program must be placed within a try block.
That is, we should place exception-generating (risky) code inside the try block and keep normal code out of it. Suppose there are three statements inside a try block.
The first statement may throw an exception and is at the top of the try block. The other two statements are normal and come below it. If an exception occurs in statement 1, the other two statements will not be executed. Therefore, the code inside a try block should be kept as short as possible.
The three possible forms of try block are as follows:
1. try-catch: A try block is always followed by one or more catch blocks.
2. try-finally: A try block followed by a finally block.
3. try-catch-finally: A try block followed by one or more catch blocks followed by a finally block.
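The tutorial's later programs all use the first form, so here is a small illustration of the third form, try-catch-finally (our own example — the method name, messages, and division are not from the tutorial):

```java
public class TryCatchFinallyDemo {
    static String demo(int a, int b) {
        StringBuilder log = new StringBuilder();
        try {
            int q = a / b;                 // may throw ArithmeticException when b == 0
            log.append("result=").append(q).append(";");
        } catch (ArithmeticException e) {
            log.append("caught;");         // runs only when the division throws
        } finally {
            log.append("finally");         // always runs, exception or not
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo(10, 2));   // result=5;finally
        System.out.println(demo(10, 0));   // caught;finally
    }
}
```

Whichever path is taken through try or catch, the finally block runs last — which is why it is the usual place for cleanup code.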
Catch Block in Java
The catch keyword marks a block of code that handles the exception thrown by the try block. That's why it is also known as an exception handler block. A catch block that catches an exception must follow the try block that generates the exception.
The general syntax of try-catch block (exception handling block) is as follows:
Syntax:
try { // A block of code; // generates an exception } catch(exception_class var) { // Code to be executed when an exception is thrown. }
In the above syntax, exception_class represents the type of exception (an exception class or subclass), and var represents a variable name that refers to the exception object, where the exception details are stored.
Exception Handling Mechanism using Try-Catch block
A systematic representation of relation between try and catch block in java is shown in the below figure.
The try-catch block is a technique used to catch and handle the exception. If an exception occurs in the try block, the rest of code in try block is not executed and the control of execution is transferred from the try block to catch block that handles exception thrown by try block.
A catch block acts as an exception handler that takes a single argument. This argument is a reference to an exception object, whose declared type can be either the same class as the thrown exception or a superclass of it.
If the catch block argument matches the type of exception object thrown by the try block, the exception is handled and the statements in the catch block are executed. But if the catch argument does not match, the exception is not caught and the default exception handler will terminate the program abnormally.
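Because the catch parameter may reference a superclass of the thrown type, catch (Exception e) also handles an ArithmeticException. A small sketch of our own:

```java
public class SuperclassCatchDemo {
    static String run() {
        try {
            int x = 1 / 0;               // throws ArithmeticException
            return "no exception";
        } catch (Exception e) {          // superclass reference matches the subclass
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(run());       // ArithmeticException
    }
}
```

Catching a superclass is convenient, but catching the most specific type you can handle usually makes the intent clearer.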
In case no exception is thrown by java try block then the catch block is ignored and the control of execution is passed to the next statement after the catch block.
Rules of using Try and Catch block in Java
There are some rules for using try-catch block in java program. They are as follows:
1. Java try-catch block must be within a method.
2. A try block cannot be used on its own. It must be followed by at least one catch block or a finally block; otherwise, a compilation error will occur.
3. A catch block must follow a try block. There should not be any statement between the end of the try block and the beginning of the catch block.
4. A finally block cannot come before catch block.
Control Flow of Try Catch Block in Java
Let’s understand the control flow inside the try-catch block with a suitable example. Consider the below code.
try { statement 1; statement 2; statement 3; } catch(exception_class var) { statement 4; } statement 5;
In the above code, there are three statements inside try block, one statement block inside catch block, and one statement outside try-catch block. Let’s see different cases.
Case 1: Suppose no exception occurs inside try block then statement 1, statement 2, and statement 3 will be executed normally. But, the catch block will not be executed because no exception is thrown by try block.
After complete execution of try block, the control of execution will be passed to the next statement. Now, statement 5 will execute normally. Thus, the control flow will be like this:
statement 1 ➞ statement 2 ➞ statement 3 ➞ statement 5 ➞ Normal Termination of program.
👉 If no exception occurs within the try block, all code except the catch block will be executed normally.
Case 2: Suppose an exception occurs in statement 2 inside try block and the exception object created inside try block is matched with argument of catch block. What will be the control flow in this case?
a. Inside try block, statement 1 will be executed normally.
b. When the exception occurred in statement 2, the control of execution immediately is transferred to catch block and statement 4 inside the catch block will be executed.
c. After executing statement 4, statement 3 in try block will not be executed because the control never goes back to execute remaining code inside try block.
d. After complete execution of catch block, statement 5 will be executed normally. Thus, the control flow will be like this:
statement 1 ➞ statement 4 ➞ statement 5 ➞ Normal Termination.
Case 3: Suppose an exception occurred at statement 2 and exception object created is not matched with argument of the catch block. In this case, what will be the control flow?
a. Statement 1 will be executed normally within try block.
b. If an exception occurred in statement 2 and exception object created does not match with argument of catch block, program will be terminated abnormally and the rest of code will not execute.
For example, suppose an exception object is created for ArithmeticException class and catch block has a reference of NullPointerException exception object as an argument, in this case, both are not matched and program will be terminated abnormally.
So, the control flow will be like this:
statement 1 ➞ Abnormal Termination.
Case 4: Suppose an exception occurred in statement 4 inside the catch block. In this case, what will be control flow?
Note that an exception can occur not only inside the try block but also inside the catch and finally blocks. For the catch block to run at all, an exception must already have occurred inside the try block (say, in statement 2). Since a second exception now occurs in statement 4 inside the catch block, and statement 4 is not itself inside any try block, the program will be terminated abnormally. Thus, the control flow is like this:
statement 1 ➞ Abnormal Termination.
👉 If an exception occurs in any statement that is not inside a try block, the program will be terminated abnormally.
Case 5: Suppose an exception occurs in statement 5. In this case, what will be control flow?
Since statement 5 is not inside try block, program will be terminated abnormally. The control flow will be as follows:
statement 1 ➞ statement 2 ➞ statement 3 ➞ Abnormal Termination.
Java Exception Handling Example Program
Let’s create a small java program where we will perform some illegal operation like division by zero without using java try-catch block.
Program source code 1:
public class TryCatchEx { public static void main(String[] args) { System.out.println("11"); System.out.println("Before divide"); int x = 1/0; System.out.println("After divide"); System.out.println("22"); } }
Output: 11 Before divide Exception in thread "main" java.lang.ArithmeticException: / by zero
As you can see in the above example, the rest of the code is not executed and program is terminated abnormally due to the generation of ArithmeticException at line int x = 1/0;.
Suppose if 100 lines of code are in the program after exception then all the code after exception will not be executed. To overcome this situation, we will use java try-catch block. So, let’s see how?
Program source code 2:
public class TryCatchEx1 { public static void main(String[] args) { System.out.println("11"); System.out.println("Before divide"); try { int x = 1/0; System.out.println("After divide"); } catch(ArithmeticException ae) // Here, ae is a reference variable of exception object. { System.out.println("A number cannot be divided by zero"); } System.out.println("22"); } }
Output: 11 Before divide A number cannot be divided by zero 22
As you can see in the above code, exception is handled properly using try-catch block and rest of code is also executed.
Now, let’s see different kind of example programs based exception handling using try-catch block with a brief explanation.
Program source code 3:
public class TryCatchEx2 { public static void main(String[] args) { System.out.println("111"); try { int x = 12/0; System.out.println("Result of x: " +x); System.out.println("333"); } catch(ArithmeticException ae) { System.out.println("Hello world"); } System.out.println("444"); } }
Output: 111 Hello world 444
In the preceding code, exception occurred in first line inside try block. Since exception object created is matched with argument of catch block, the control immediately is passed to catch block without executing the rest of code inside try block. Inside catch block, statement is executed normally.
Program source code 4:
public class TryCatchEx3 { public static void main(String[] args) { int x = 100, y = 0; try { System.out.println("111"); int z = x/y; System.out.println("Result of z: " +z); } catch(ArithmeticException ae) { System.out.println("Hello Java"); } System.out.println("333"); } }
Output: 111 Hello Java 333
Program source code 5:
public class TryCatchEx4 { int x = 30, y = 0; void divide() { System.out.println("I am in method"); try { System.out.println("I am in try block"); int z = x/y; System.out.println("Result of z: " +z); } catch(NullPointerException np) { System.out.println("I am in catch block"); } } public static void main(String[] args) { TryCatchEx4 obj = new TryCatchEx4(); System.out.println("I am in main method"); obj.divide(); } }
Output: I am in main method I am in method I am in try block Exception in thread "main" java.lang.ArithmeticException: / by zero
In the above code, the exception object created in the try block (an ArithmeticException) does not match the argument of the catch block (a NullPointerException). Therefore, the exception is not handled by the catch block, and the program is terminated abnormally.
Program source code 6:
public class TryCatchEx5 { public static void main(String[] args) { try { System.out.println("111"); System.out.println("222"); } catch(ArithmeticException ae) { System.out.println("333"); } System.out.println("444"); } }
Output: 111 222 444
Program source code 7:
public class TryCatchEx6 { public static void main(String[] args) { System.out.println("111"); try { System.out.println("222"); int y = 1/0; } catch(ArithmeticException e) { try { System.out.println("Hello"); int x = 20/0; } catch(NullPointerException np) { System.out.println("333"); } } System.out.println("444"); } }
Output: 111 222 Hello Exception in thread "main" java.lang.ArithmeticException: / by zero
Program source code 8:
public class TryCatchEx7 { public static void main(String[] args) { try { int a[] = {20, 30, 40, 50}; a[10] = 5; } catch(ArrayIndexOutOfBoundsException a) { System.out.println("Array Index Out Of Bounds Exception"); } } }
Output: Array Index Out Of Bounds Exception
Program source code 9:
public class TryCatchEx8 {
   public static void main(String[] args) {
      try {
         // This method returns the Class object associated with the class
         // or interface with the given string name.
         Class c = Class.forName("ArithmeticException");
      }
      catch (ClassNotFoundException cn) {
         System.out.println(cn.getMessage());
      }
   }
}
Output: ArithmeticException
Program source code 10:
public class TryCatchEx9 {
   public static void main(String[] args) {
      try {
         String input = "Scientech Easy";
         int a = Integer.parseInt(input);
         System.out.println("Value of a: " + a);
      }
      catch (NumberFormatException n) {
         System.out.println(n.getMessage() + " is not an integer.");
      }
   }
}
Output: For input string: "Scientech Easy" is not an integer.
Final words
Hope this tutorial has covered almost all the important points related to the try-catch block in Java with example programs. I hope you now understand how to handle exceptions using the Java try-catch block.
Thanks for reading!!!
Cosmos is an acronym for C# Open Source Managed Operating System. Despite C# being in the name, VB.NET, Fortran, and any .NET language can be used. We chose the C# title because our work is done in C#, and because VBOSMOS sounds stupid.
Cosmos is not an Operating System in the traditional sense, but instead, it is an "Operating System Kit", or as I like to say "Operating System Legos", that allows you to create your own Operating System. However, having been down this path before, we wanted to make it easy to use and build. Most users can write and boot their own Operating System in just a few minutes, and using Visual Studio.
Cosmos lets Visual Studio compile your code to IL and then Cosmos compiles the IL into machine code.
This article was originally written in 2008. Cosmos has come a long way since then and many of the screens, especially the build process, look significantly different now. If you like what you see here, I strongly suggest you check out the more recent builds as we have made a lot of progress since then.
Cosmos comes in two flavors, a user kit and a dev kit. The user kit hides the kernel source away from the user, and presents a new project type in File, New. Building a new Operating System is as simple as File, New, writing a few lines of code, and pressing F5 to build and run it. The user kit is a bit old though, and is really only designed to interest people in obtaining the dev kit.
The dev kit is available as a project on CodePlex, and the Cosmos website is..
To get the dev kit, simply sync the sources from the CodePlex project. It might appear daunting at first, 43 projects? A lot of these projects are demos, tests, and playgrounds.
Note that Visual Studio Express cannot handle the solution folders. Because of this, Express users should use the Flat solution file, but first, you must update the Flat solution file from the main solution using the Flat file updater utility.
In the Boot folder, and then the Demo folder, you will see a CosmosBoot project. This is the empty shell used in the user kit for the new project template. This is the minimal Operating System that Cosmos can build. You can copy this file and change it to your needs. Let's take a quick look.
Properties and References are standard parts of any .NET project. Let's look in Program.cs.
Note on images: CodeProject limits images to 600 pixels in width. I've chosen to crop the width to retain readability.
Init is the entry point that will be called after the system is booted. This is where you put your code. Some sample code has been generated already, and you can change it. Be sure to leave the first two lines alone, they initialize memory, hardware, etc.
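For illustration, an Init body might look something like this (a hypothetical sketch, not taken from the Cosmos sources; the commented-out boot call and namespace are assumptions and have changed across Cosmos releases):

```csharp
using System;

// Hypothetical minimal Cosmos kernel shell; names are illustrative only.
public class Program
{
    public static void Init()
    {
        // The first lines initialize memory, hardware, etc. -- leave them alone.
        // Cosmos.Sys.Boot.Init();   // assumed initialization call

        Console.WriteLine("Hello from my Operating System!");

        // A kernel has nowhere to return to, so keep running.
        while (true) { }
    }
}
```

The key point is that ordinary C# code runs here, directly on the hardware, with no host Operating System underneath.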
Let's take a look at something a little more complex though. Here is a demo called Guess. It is a simple application which asks the user to guess a number. As you can see, it is all standard C# code, and aside from the first two lines for booting, the code can function unchanged on Windows.
Now, let's run it. Are you expecting some complicated process involving batch files and CD-R's? If so, you will be disappointed. Simply press F5 in Visual Studio to run it. Instead of the project running immediately, you will see a Cosmos Builder Options window.
The builder supports many environments including physical hardware. Virtual Machines are easier to develop against, and are certainly easier to take screenshots of, so I will use VMWare for this article.
With these options set, I clicked the Build button (the Enter key also proceeds). Next, a build progress window will appear. Cosmos is now compiling the IL to machine code, linking and preparing the boot image.
When this is done, the window will disappear, and VMWare will automatically be launched with an ISO mounted.
Select "Power on this virtual machine", and the Guess demo will boot.
This is the demo, booted directly to hardware. There is no need for Windows, Linux, or any other Operating System to run this code. During the boot process, you may notice SysLinux. SysLinux, however, is not a Linux distribution, but just a boot loader used to read files from the disk. We use SysLinux instead of Grub because it has support for PXE and other options. SysLinux is only used for the initial loading of the Cosmos binaries; as soon as Cosmos is loaded, SysLinux is unloaded and is not used to run Cosmos code.
Ease of use, or what I like to call "Nail the basics", is a priority for us. However, graphics and other things will follow in the future. Currently, we are working on a wide range of file systems including FAT and ext2. We are also working on Ethernet support, and can already send basic UDP packets, and are working on TCP support as well.
Debugging Operating Systems is typically inconvenient and hinders progress. Because of this, just as we did with making the build and boot process, we strived to make the debugging process easy. The debugger is still a work in progress, but is already quite advanced for an Operating System debugger.
Adding the Cosmos.Debug namespace gives the code access to communicate directly with the debugger.
In the Guess demo, I am adding a Debugger.Send, which will write a debug string to the debugger. This can be used to track code execution, but can also be used for watching variables. In this case, I will use it to write out the magic number.
Notice I have also added a Debugger.Break. This will force the program to execute a breakpoint at that location.
Now, let's run the Guess again. Which demo is booted is selected in Visual Studio simply by selecting it as the startup project for the solution.
This time in debugger options, Source and User Only is selected. Source tells the debugger to debug C# code, rather than the lower level IL. And, User Only tells the debugger not to trace into the Cosmos Kernel or the .NET libraries. This speeds up execution, but also lets me focus on tracing just the demo code.
Note that this time it did not prompt us for the number. This is because it hit the breakpoint first. Another window will appear as well; this is the Cosmos Debugger.
The message and breakpoint are displayed in the trace log, and the code is selected for the breakpoint. Breakpoints occur on the next statement after the requested break.
Now, we can use the Step button (F11, just like Visual Studio) to step through the code. Each step is recorded in the trace log, and previous items can be selected to walk backwards through the code. The trace log functions similar to the call stack window in Visual Studio.
Press continue (F5), and the code will run again until a breakpoint is encountered in the code, or requested from the debugger. After Continue, a new button will appear that allows a forced instant break. After Continue, the code will continue, and we will be prompted for the number.
We can also turn tracing on and off for specific sections of code, using the Cosmos.Debug namespace.
This will cause the trace log to be populated with all the statement executions between the TraceOn and TraceOff, without needing to step through each statement manually.
This time Cosmos & User is selected to show more details in the trace log. Normally, this option is only needed by developers working on the Cosmos kernel source.
Cosmos is an open source project, as its name indicates. I hope this article gives an easy to understand introduction. If you are interested in Operating System development, I hope you will try Cosmos!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
System.Exception: Plug needed. System.Void System.String..ctor(System.Char, System.Int32)
at Cosmos.IL2CPU.ILScanner.ScanMethod(MethodBase aMethod, Boolean aIsPlug)
at Cosmos.IL2CPU.ILScanner.ScanQueue()
at Cosmos.IL2CPU.ILScanner.Execute(MethodBase aStartMethod)
at Cosmos.Compiler.Builder.RunEngine(Object aParam)
System.Exception: Plug needed. System.String System.Number.FormatDecimal(System.Decimal, System.String, System.Globalization.NumberFormatInfo)
at Cosmos.IL2CPU.ILScanner.ScanMethod(MethodBase aMethod, Boolean aIsPlug)
at Cosmos.IL2CPU.ILScanner.ScanQueue()
at Cosmos.IL2CPU.ILScanner.Execute(MethodBase aStartMethod)
at Cosmos.Compiler.Builder.Builder.RunEngine(Object aParam)
Manage data in CDMA phones from LG, Samsung, Sanyo and others
BitPim uses SourceForge for project management. You can get to all the source, trackers (bugs, feature requests etc), mailing list and more at sourceforge.net/projects/bitpim
The source is stored in Subversion at sourceforge.net/svn/?group_id=75211.
You can check it out using the Subversion tools for your platform. Note that
the main code is at.
There is a bitpim-devel mailing list for developers where the techie action happens. You should also subscribe to bitpim-cvs-checkins if you want notification of all changes that happen to the source.
BitPim is written in a programming language named Python. In addition to running well on many different platforms, Python also has that most important feature of being easy to read. It is also very easy to learn, and VERY productive.
Here are three pages that help you navigate your way through the code.
All the code coloured in, and using a cross referencer. Library and function calls are hyperlinked to their definitions. (The cross referencer isn't perfect ... yet)
Documentation generated from appropriate comments in the source code.
When we used CVS, this page showed the most recent 500 changes with diffs so you can see what has been happening recently. If you know of a tool that does the same thing for Subversion then please let us know.
If you have standalone code that implements some feature that isn't in Python, feel free to contribute that. It can be used as the basis for Python code, or as a test suite. The hard part is figuring out what to do and how to do it, and you will have already solved that :-)
You can do your development on Windows, Linux or Mac. If you do any work on the user interface, you should try your code out on at least two of the platforms since there are often minor platform specific differences that should be taken into account.
You will need to download and install the list of packages below. For Python packages that don't come with a binary installer, there is usually a setup.py file in the top level of what you downloaded. Simply type

  python setup.py install

and the package will be installed. You will need to have administrative/root access.
If you want to work on USB code, or using the USB module then you will need C compilers and some other tools. Please post on bitpim-devel for further details.
You must use Python 2.5. Linux already comes with Python, as does MacOS X 10.3. For other platforms, grab it from. If you are on Linux with an older version then you can install 2.5 alongside your existing version by building from the source rpms on python.org.
wxPython is the graphics toolkit used. Grab it from. Note that you must use version 2.8.7.1 and you must use the Unicode version built for Python 2.5.
Linux users should use the GTK2 version and will probably need to rebuild from source on all versions of Linux. The simplest way is to download the GTK2 source rpm from the binaries download of wxPython and then do one of the following depending on your distro. For all commands it is assumed that you are root. If you want to do the building as a non-root user, you need to setup your rpm build environment as detailed here. After the rpm is built, scroll back a bit in the console to see exactly where the built file ended up.
Remember to delete any existing wxPython rpms from your rpmdir before building, or make sure you specify the correct version number in the install lines (rpm -U)
RPM based distro:

  wget
  rpmbuild --rebuild --define 'pyver 2.5' wxPython2.8-2.8.7.1-1.src.rpm
  rpm -U rpmdir/wxPython2.8-gtk2-*.rpm rpmdir/wxPython-common-*.rpm
Gentoo:

  emerge rpm
  wget
  rpmbuild --rebuild --define 'pyver 2.5' wxPython2.8-2.8.7.1-1.src.rpm
  rpm -U --nodeps rpmdir/wxPython2.8-gtk2-*.rpm rpmdir/wxPython-common-*.rpm
Debian:

Debian stable is way behind the times, so you may find something appropriate in testing. Alternatively, the instructions below should work.

  apt-get install alien
  apt-get install libgtk2.0-dev freeglut3-dev python2.5-dev
  wget
  rpmbuild --rebuild --define 'pyver 2.5' wxPython2.8-2.8.7.1-1.src.rpm
  cd rpmdir
  alien packagenames.rpm
  dpkg -i whatever alien called them
pySerial is used to interface with the serial port. You must use version 2.2. Grab it from pyserial.sourceforge.net
PyWin32 is used by pySerial to do the underlying nasty work of accessing the Windows API. Your must use Build 210. Grab it from sourceforge.net/projects/pywin32
APSW is the wrapper used for the SQLite database that stores some BitPim data. You must use APSW version 3.3.13-r1 with SQLite version 3.5.4. Grab it from initd.org/tracker/pysqlite/wiki/APSW.
Check out the BitPim code from Subversion. I recommend checking it out to a directory named c:\projects\bitpim or something similar on other platforms. The subversion section has pointers to various graphical and command line clients you can use.
You need paramiko if you want to use BitFling. You must use version 1.7.1 (Amy). You can get it from
You need pyCrypto if you want to use BitFling. You must use version 2.0.1. It can be downloaded from
On Windows you don't have to build anything. On Linux and Mac you should build the USB module if you use an LG phone with a straight USB cable (ie not USB to serial). On all platforms the native C version of the string matcher is faster than the Python implementation. The simple way to build everything is:
$ python packaging/buildmodules.py
Note that you will need Swig on your path and libusb installed to build the USB module. To build the string matcher you need to have a C compiler in your path (MinGW on Windows - you only need MinGW itself not MSYS).
This is how to build them the manual way
Build usb library (optional - Linux & Mac only)

You need to build the USB library if you want direct USB support. Run the relevant build script in the native/usb directory. Note that you will need Swig installed (version 1.3.19 or above), as well as the header files and library (devel) parts of libusb, and a C compiler. (The relevant usb package is named libusb-dev on Debian, libusb-devel on Redhat and just plain libusb on Gentoo. You can also download it from libusb.sourceforge.net where you will also find the Mac version.)

Compile string matcher (optional)

The string matcher code uses the Jaro Winkler algorithm. There is both a C and a Python implementation in the module. If you have a large phonebook then you will want the C version as it is much faster.

Windows:

  > python setup.py build --compile=mingw32
  > copy build\lib.win32-2.3\jarow.pyd .

Mac/Linux:

  # python setup.py build ; cp build/*/jarow.* .
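As a rough illustration of what the Python fallback computes, here is a generic Jaro-Winkler sketch (this is not the actual BitPim jarow module, just the standard algorithm):

```python
def jaro(s1, s2):
    """Jaro similarity: 1.0 for identical strings, 0.0 for no matches."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(max(len1, len2) // 2 - 1, 0)   # matching window size
    matched2 = [False] * len2
    m1 = []                                      # matched chars of s1, in order
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not matched2[j] and s2[j] == c:
                matched2[j] = True
                m1.append(c)
                break
    if not m1:
        return 0.0
    m2 = [s2[j] for j in range(len2) if matched2[j]]
    t = sum(a != b for a, b in zip(m1, m2)) // 2  # transpositions
    m = float(len(m1))
    return (m / len1 + m / len2 + (m - t) / m) / 3.0

def jaro_winkler(s1, s2, p=0.1):
    """Jaro similarity boosted by a common prefix of up to 4 characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1.0 - j)
```

For example, jaro_winkler("MARTHA", "MARHTA") is about 0.9611, which is why the algorithm is popular for fuzzy-matching names in phonebooks.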
python bp.py should run BitPim from a console.
There is a developer console (a Python interpreter with access to all the internals) built in. Find the current config file (Edit > Settings will tell you the name) and add a key in the default section:

  console = 1
You can also add

  import pdb; pdb.set_trace()

You can put it in an if statement or similar trigger. Once the tracepoint is hit, type "up" and you will be in the code and can step, print variables or do anything else that takes your fancy. Type "help" for a list of commands. Note that the debugger won't behave well if you have the developer console turned on.
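For example, a guarded tracepoint might look like this (the function and the trigger condition are made up for illustration; they are not BitPim code):

```python
import pdb

def store_entry(entry):
    # Break into the debugger only when a suspicious entry shows up.
    if entry.get("name") == "":
        pdb.set_trace()  # type "up", then step/print as needed
    # ... normal processing would continue here ...
    return entry
```

This way the debugger only activates on the interesting case rather than on every call.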
The author actually uses xemacs with a few print statements every now and then.
Ensure you do a Subversion update every now and then to pick up updates that other people have made to the code. If you would like to supply a patch, then do "svn diff" and capture the output.
Don't worry about your patch being perfect, or even working. We will happily adapt, rewrite or repurpose it. You will have done the hard work of figuring out something useful and broadly how to do it :-)
All patch submissions are considered to be under the BitPim (GPL) license. [Please check the LICENSE file in the source tree as there are some relaxations, for example allowing linking with OpenSSL]. Do not submit them if you do not agree with the full terms of the license. Your copyright is however retained, or you are free to sign over your copyright to the project.
Usually your first step is adding or improving support for your phone.
You can find documentation in the
dev-doc directory.
If you would like to work on other things, then have a look at the todo list. Mail the bitpim-devel list if you intend to embark on any of those, or mention what areas you want to work on and ask for suggestions. | http://www.bitpim.org/developer.html | crawl-001 | refinedweb | 1,534 | 75.61 |
MLOCK(2) System Calls Manual MLOCK(2)
mlock, munlock - lock (unlock) physical pages in memory
SYNOPSIS
     #include <sys/mman.h>

     int
     mlock(const void *addr, size_t len);

     int
     munlock(const void *addr, size_t len);
RETURN VALUES
     The mlock() and munlock() functions return the value 0 if successful;
     otherwise the value -1 is returned and the global variable errno is set
     to indicate the error.

ERRORS
     [EINVAL]   The address given is not page aligned or addr and size
                specify a region that would extend beyond the end of the
                address space.

     [ENOMEM]   Some portion of the indicated address range is not
                allocated.

CAVEATS
     In Sun's implementation, multiple mlock() calls on the same address
     range require the corresponding number of munlock() calls to actually
     unlock the pages, i.e., mlock() nests. This should be considered a
     consequence.

OpenBSD 6.4                    November 15, 2014                    OpenBSD 6.4
Subject: Re: [boost] [auto_buffer] Interest check: container with preallocated buffer
From: Stewart, Robert (Robert.Stewart_at_[hidden])
Date: 2009-05-19 09:09:54
Thorsten Ottosen wrote
On Monday, May 18, 2009 9:05 AM
> Dmitry Vinogradov skrev:
>
> > Does any container exist to offer functionality like
> Boost.Array but
> > allowing to store from 0 to N elements? Is there any
> interest in such
> > container?
> >
> > PS. It's similar to fixed_string but as a generic container.
>
> Please see
>
>
>
>
auto_buffer looks really useful, Thorsten, but I'd like to pick at it a little.
boost::default_grow_policy is troubling. First, s/grow/growth/ in the name. Second, did you mean that policy to be appropriate for all of Boost? You put it in the boost namespace. Third, new_capacity() ought to be "reserve" because that more clearly associates it with auto_buffer<>::reserve() and the well understood behavior of, for example, std::vector<>::reserve().
auto_buffer's "StackBufferPolicy" template parameter should be named "SizingPolicy" because it has to do with computing the buffer's size. The "GrowPolicy" template parameter should be named "GrowthPolicy."
The auto_buffer nested value "N" is confusing as it will change based upon the SizingPolicy, but won't match that policy's "value" if using boost::store_n_bytes, and it isn't the number of elements in, or capacity of, the auto_buffer, particularly when memory is allocated from the allocator. Renaming it to "stack_capacity" should work, though.
Is there a compile time assertion for N > 0 for store_n_objects? How about that store_n_bytes<N>::value >= sizeof(T)? Those will help to diagnose misuse.
There should be a debug build assertion that GrowthPolicy::reserve(n) returns a value >= n to help diagnose mistakes.
"optimized_const_reference" would be better named "by_value_type" or something. For small types, optimized_const_reference isn't a reference at all, so the name is misleading. Better to name it for the usage rather than the type.
What's the point of push_back() with no argument? What's the motivation for that versus making the caller explicitly default construct a T and pass that to push_back(by_value_type)? Are you simply optimizing the copying? If so, isn't it better to mimic vector and gain the optimization with move semantics?
Shouldn't pop_back_n() be an algorithm rather than a member function? Is there really efficiency to be gained by making it a member and is there a good use case for that?
Rather than reserve_precisely(), which should be named "reserve_exactly" if retained, why not just document reserve() as allocating what GrowthPolicy::reserve() returns and leave the behavior to the policy? That leaves room for reserve() allocating exactly what's requested, or rounding up according to some computation, as the policy dictates.
The remarks for the uninitialized_*() functions use the phrase, "depending on the application," but that doesn't explain when the user is responsible to initialize or destroy the n elements. Change the remarks to note that if T is POD, the user need not initialize/destroy the affected elements, whereas not doing so for non-POD types results in undefined behavior.
The unchecked_*() functions deserve remarks that explain the danger in their use and that the precondition is not checked. If you don't already, though, you should check the precondition in debug builds to help clients use these dangerous functions correctly.
The description of should_shrink(), on the default_grow_policy class, mentions shrink_to_fit() which is not mentioned elsewhere in the documentation. I find shrink_to_fit() in Boost.Interprocess, but nowhere else. Did you forget something?
shrink_to_stack() might be a useful addition. You probably inferred the purpose: to shrink the size to what will fit on the stack and move the contents to the stack from allocator-supplied memory, if used. That seems the logical converse of dynamically growing beyond the stack allocation, though I admit I can't think of a motivating use case beyond simply trying to keep memory consumption low in a long running function.
There are numerous things to fix in the docs, plus more to add, of course, but I don't want to address those points now. I think I can find some time to help you flesh out the docs so you can submit auto_buffer for review, if you're interested.
_____ | https://lists.boost.org/Archives/boost/2009/05/151652.php | CC-MAIN-2019-43 | refinedweb | 698 | 56.05 |
At 08:29 PM 7/25/2005 -0500, Ian Bicking wrote:
>Right now Paste hands around a fairly flat dictionary. This dictionary is
>passed around in full (as part of the WSGI environment) to every piece of
>middleware, and actually to everything (via an import and threadlocal
>storage). It gets used all over the place, and the ability to draw in
>configuration without passing it around is very important. I know it
>seems like heavy coupling, but in practice it causes unstable APIs if it
>is passed around explicitly, and as long as you keep clever dynamic values
>out of the configuration it isn't a problem.
>
>Anyway, every piece gets the full dictionary, so if any piece expected a
>constrained set of keys it would break. Even ignoring that there are
>multiple consumers with different keys that they pull out, it is common to
>create intermediate configuration values to make the configuration more
>abstract. E.g., I set a "base_dir", then derive "publish_dir" and
>"template_dir" from that. Apache configuration is a good anti-example
>here; its lack of variables hurts me daily. While some variables could be
>declared "abstract" somehow, that adds complexity where the unconstrained
>model avoids that complexity.

*shudder* I think someone just walked over my grave. ;)

I'd rather add complexity to the deployment format (e.g. variables,
interpolation, etc.) to handle this sort of thing than add complexity to
the components. I also find it hard to understand why e.g. multiple
components would need the same "template_dir". Why isn't there a template
service component, for example?

>When one piece delegates to another, it passes the entire dictionary
>through (by convention, and by the fact it gets passed around
>implicitly).
>It is certainly possible in some circumstances that a
>filtered version of the configuration should be passed in; that hasn't
>happened to me yet, but I can certainly imagine it being necessary
>(especially when a larger amount of more diverse software is running in
>the same process).
>
>One downside of this is that there's no protection from name
>conflicts. Though name conflicts can go both ways. The Happy Coincidence
>is when two pieces use the same name for the same purpose (e.g., it's
>highly likely "smtp_server" would be the subject of a Happy
>Coincidence). An Unhappy Coincidence is when two pieces use the same
>value for different purposes ("publish_dir" perhaps). An Expected
>Coincidence is when the same code, invoked in two separate call stacks,
>consumes the same value. Of course, I allow configuration to be
>overwritten depending on the request, so high collision names (like
>publish_dir) in practice are unlikely to be a problem.

I think you've just explained why this approach doesn't scale very well,
even to a large team, let alone to inter-organization collaboration (i.e.
open source projects).

> For instance an application-specific middleware that could plausibly be
> used more widely -- does it consume the application configuration, or
> does it take its own configuration? But even excluding those ambiguous
> situations, the way my middleware is factored is an internal
> implementation detail, and I don't feel comfortable pushing that
> structure into the configuration.

That's what encapsulation is for. Just create a factory that takes a set
of application-level parameters (like template_dir, publish_dir, etc.) and
then *passes* them to the lower level components. Heck, we could even add
that to the .wsgi format...

# app template file
[WSGI options]

So that's the issue I'm concerned about. I think the right way to fix it
is parameterization; that way you don't push a global (and non
type-checkable) namespace down into each component.
Components should have an extremely minimal configuration with fairly
specific parameters, because it makes early error checking easier, and you
don't have to search all over the place to find how a parameter is used,
etc., etc.
My earlier post on this topic did not explain the issues well at all. This post gives a more complete picture.
Our goal is to determine the meaning of this C program:
#include <stdio.h>

int fermat (void)
{
  const int MAX = 1000;
  int a=1,b=1,c=1;
  while (1) {
    if (((a*a*a) == ((b*b*b)+(c*c*c)))) return 1;
    a++;
    if (a>MAX) {
      a=1;
      b++;
    }
    if (b>MAX) {
      b=1;
      c++;
    }
    if (c>MAX) {
      c=1;
    }
  }
  return 0;
}
int main (void)
{
  if (fermat()) {
    printf ("Fermat's Last Theorem has been disproved.\n");
  } else {
    printf ("Fermat's Last Theorem has not been disproved.\n");
  }
  return 0;
}
This program is a simple counterexample search; it terminates if it is able to disprove a special case of Fermat’s Last Theorem. Since this theorem is generally believed to be true, we would expect a counterexample search to run forever. On the other hand, commonly available C compilers emit terminating code:.
Before proceeding, let’s clarify a few things. First, I am not asking this question:
How could I change this code so that the compiler will respect its termination characteristics?
This is easy and there are several reliable ways to do it, for example using volatile variables or inline assembly.
Second, I am not interested in “answers” like this:
It is obvious what is going on here: the compiler sees that one path out of the function is dead, and then deduces that the only remaining path must be live.
This observation is not untrue, but it’s a little like explaining that World War II happened because people couldn’t all just get along. It completely fails to get at the heart of the matter.
Third, there are no integer overflow games going on here, as long as the code is compiled for a platform where an int is at least 32 bits. This is easy to see by inspecting the program. The termination problems are totally unrelated to integer overflow.
Program Semantics
A program’s meaning is determined by the semantics of the language in which it is written. The semantics tells us how to interpret constructs in the language (how wide is an integer? how does the “if” operator work?) and how to put the results of operations together into an overall result. Some computer languages have a formal mathematical semantics, some have a standards document, and some simply have a reference implementation.
Let’s look at a few examples. I’ll continue to use C, and will be quite informal (for a formal take on the meaning of C programs, see Michael Norrish’s PhD thesis). To keep things simple, we’ll look only at “self-contained” programs that take no inputs. Consider this program:
int main (void)
{
  return 3;
}
it means {{3,””}}. The notation is slightly cluttered but can be read as “the program has a unique interpretation which is to return 3 and perform no side effects.” To keep things simple, I’m representing side effects as a string printed to stdout. Of course, in the general case there are other kinds of side effects.
Here’s a slightly more complex program:
int main (void)
{
  int a = 1;
  return 2 + a;
}
it also means {{3,””}} since there is no possibility of integer overflow.
Not all programs have a unique meaning. Consider:
int main (void)
{
  unsigned short a = 65535;
  return a + 1;
}
The meaning of this program is {{65536,””}, {0,””}}. In other words, it has two meanings: it may return 65536 or 0 (in both cases performing no side-effecting operations) depending on whether the particular C implementation being used has defined the size of an unsigned short to be 16 bits or to be larger than 16 bits.
Another way that a C program can gain multiple meanings is by performing operations with unspecified behavior. Unlike implementation defined behavior, where the implementation is forced to document its choice of behavior and use it consistently, unspecified behavior can change even within execution of a single program. For example:
int a;

int assign_a (int val)
{
  a = val;
  return val;
}

int main (void)
{
  assign_a (0) + assign_a (1);
  return a;
}
Because the order of evaluation of the subexpressions in C is unspecified, this program means {{0,””}, {1,””}}. That is, it may return either 0 or 1.
This C program:
#include <stdio.h>
int main (void)
{
  return printf ("hi\n");
}
means {{0,””}, {1,”h”}, {2,”hi”}, {3,”hi\n”}, {-1,””}, {-2,””}, {-3,””}, …}. The 4th element of this set, with return value 3, is the one we expect to see. The 1st through 3rd elements indicate cases where the I/O subsystem truncated the string. The 5th and subsequent elements indicate cases where the printf() call failed; the standard mandates that a negative value is returned in this case, but does not say which one. Here it starts to become apparent why reasoning about real C programs is not so easy. In subsequent examples we’ll ignore program behaviors where printf() has something other than the expected result.
Some programs, such as this one, don’t mean anything:
#include <limits.h>
int main (void) { return INT_MAX+1; }
In C, overflowing a signed integer has undefined behavior, and a program that does this has no meaning at all. It is ill-formed. We’ll denote the meaning of this program as {{UNDEF}}.
It’s important to realize that performing an undefined operation has unbounded consequences on the program semantics. For example, this program:
#include <limits.h>
int main (void) { INT_MAX+1; return 0; }
also means {{UNDEF}}. The fact that the result of the addition is not used is irrelevant: operations with undefined behavior are program cancer and poison the entire execution. Many real programs are undefined only sometimes. For example we can slightly modify an earlier example like this:
int a;
int assign_a (int val) { a = val; return val; }
int main (void) { assign_a (0) + assign_a (1); return 2/a; }
This program means {{UNDEF}, {2,””}}. Showing that a real C program has well-defined behavior in all possible executions is very difficult. This, combined with the fact that undefined behavior often goes unnoticed for some time, explains why so many C programs contain security vulnerabilities such as buffer overflows, integer overflows, etc.
One might ask: Is a C program that executes an operation with undefined behavior guaranteed to perform any side effects which precede the undefined operation? That is, if we access some device registers and then divide by zero, will the accesses happen? I believe the answer is that the entire execution is poisoned, not just the parts of the execution that follow the undefined operation. Certainly this is the observed behavior of C implementations (for example, content buffered to stdout is not generally printed when the program segfaults).
Finally we’re ready to talk about termination. All examples shown so far have been terminating programs. In contrast, this example does not terminate:
#include <stdio.h>
int main (void) { printf ("Hello\n"); while (1) { } printf ("World\n"); return 0; }
Clearly we cannot find an integer return value for this program since its return statement is unreachable. The C “abstract machine,” the notional C interpreter defined in the standard, has an unambiguous behavior when running this program: it prints Hello and then hangs forever. When a program behaves like this we’ll say that its meaning is {{⊥,”Hello\n”}}. Here ⊥ (pronounced “bottom”) is simply a value outside the set of integers that we can read as indicating a non-terminating execution.
Assuming that signed integers can encode values up to two billion (this is true on all C implementations for 32- and 64-bit platforms that I know of), the semantics that the abstract C machine gives to the Fermat program at the top of this post is {{⊥,””}}. As we have seen, a number of production-quality C compilers have a different interpretation. We’re almost ready to get to the bottom of the mystery but first let’s look at how some other programming languages handle non-terminating executions.
Java
Section 17.4.9 of the Java Language Specification (3rd edition) specifically addresses the question of non-terminating executions, assigning the expected {{⊥,””}} semantics to a straightforward Java translation of the Fermat code. Perhaps the most interesting thing about this part of the Java Language Specification is the amount of text it requires to explain the desired behavior. First, a special “hang” behavior is defined for the specific case where code executes forever without performing observable operations. Second, care is taken to ensure that an optimizing compiler does not move observable behaviors around a hang behavior.
C++
C++0x, like Java, singles out the case where code executes indefinitely without performing any side effecting operations. However, the interpretation of this code is totally different: it is an undefined behavior. Thus, the semantics of the Fermat code above in C++0x is {{UNDEF}}. In other words, from the point of view of the language semantics, a loop of this form is no better than an out-of-bounds array access or use-after-free of a heap cell. This somewhat amazing fact can be seen in the following text from Section 6.5.0 of the draft standard (I’m using N3090):
A loop that, outside of the for-init-statement in the case of a for statement,
- makes no calls to library I/O functions, and
- does not access or modify volatile objects, and
- performs no synchronization operations (1.10) or atomic operations (Clause 29)
may be assumed by the implementation to terminate.
[ Note: This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. –end note ]
Unfortunately, the words “undefined behavior” are not used. However, anytime the standard says “the compiler may assume P,” it is implied that a program which has the property not-P has undefined semantics.
Notice that in C++, modifying a global (or local) variable is not a side-effecting operation. Only actions in the list above count. Thus, there would seem to be a strong possibility that real programmers are going to get burned by this problem. A corollary is that it is completely clear that a C++ implementation may claim to have disproved Fermat’s Last Theorem when it executes my code.
We can ask ourselves: Do we want a programming language that has these semantics? I don’t, and I’ll tell you what: if you are a C++ user and you think this behavior is wrong, leave a comment at the bottom of this post or send me an email. If I get 50 such responses, I’ll formally request that the C++ Standard committee revisit this issue. I haven’t done this before, but in an email conversation Hans Boehm (who is on the C++ committee) told me:
If you want the committee to revisit this, all you have to do is to find someone to add it as a national body comment. That’s probably quite easy. But I’m not sure enough has changed since the original discussion that it would be useful.
Anyway, let me know.
Haskell
Haskell has a bottom type that is a subtype of every other type. Bottom is a type for functions which do not return a value; it corresponds to an error condition or non-termination. Interestingly, Haskell fails to distinguish between the error and non-terminating cases: this can be seen as trading diagnostic power for speed. That is, because errors and infinite loops are equivalent, the compiler is free to perform various transformations that, for example, print a different error message than one might have expected. Haskell users (I’m not one) appear to be happy with this and in practice Haskell implementations appear to produce perfectly good error messages.
Other Languages
Most programming languages have no explicit discussion of termination and non-termination in their standards / definitions. In general, we can probably read into this that a language implementation can be expected to preserve the apparent termination characteristics of its inputs. Rupak Majumdar pointed me to this nice writeup about an interesting interaction between a non-terminating loop and the SML type system.
C
Ok, let’s talk about termination in C. I’ve saved this for last not so much to build dramatic tension as because the situation is murky. As we saw above, the reality is that many compilers will go ahead and generate terminating object code for C source code that is non-terminating at the level of the abstract machine. We also already saw that this is OK in C++0x and not OK in Java.
The relevant part of the C standard (I’m using N1124) is found in 5.1.2.3:
The least requirements on a conforming implementation are:
- At sequence points, volatile objects are stable in the sense that previous accesses are complete and subsequent accesses have not yet occurred.
- At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
- The input and output dynamics of interactive devices shall take place as specified in 7.19.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
Now we ask: Given the Fermat program at the top of this post, is icc or suncc meeting these least requirements? The first requirement is trivially met since the program contains no volatile objects. The third requirement is met; nothing surprising relating to termination is found in 7.19.3..)
So there you have it: the compiler vendors are reading the standard one way, and others (like me) read it the other way. It’s pretty clear that the standard is flawed: it should, like C++ or Java, be unambiguous about whether this behavior is permitted.
Does It Matter if the Compiler Terminates an Infinite Loop?
Yes, it matters, but only in fairly specialized circumstances. Here are a few examples.
The Fermat program is a simple counterexample search. A more realistic example would test a more interesting conjecture, such as whether a program contains a bug or whether a possibly-prime number has a factorization. If I happen to write a counterexample search that fails to contain side-effecting operations, a C++0x implementation can do anything it chooses with my code.
Linux 2.6.0 contains this code:
NORET_TYPE void panic(const char * fmt, ...) { ... do stuff ... for (;;) ; }
If the compiler optimizes this function so that it returns, some random code will get executed. Luckily, gcc is not one of the compilers that is known to terminate infinite loops. (Michal Nazarewicz found this example.)
In embedded software I’ll sometimes write a deliberate infinite loop. For example to hang up the CPU if main() returns. A group using LLVM for compiling embedded code ran into exactly that problem, causing random code to run.
When re-flashing an embedded system with a new code image, it would not be uncommon to hang the processor in an infinite loop waiting for a watchdog timer to reboot the processor into the new code.
Another plausible bit of code from an embedded system is:
while (1) { #ifdef FEATURE_ONE do_feature_one(); #endif #ifdef FEATURE_TWO do_feature_two(); #endif #ifdef FEATURE_THREE do_feature_three(); #endif } fputs("Internal error\n", stderr);
If you compile this code for a product that contains none of the three optional features, the compiler might terminate my loop and cause the error code to run. (This code is from Keith Thompson.)
Finally, if I accidentally write an infinite loop, I’d prefer my program to hang so I can use a debugger to find the problem. If the compiler deletes the loop and also computes a nonsensical result, as in the Fermat example, I have no easy way to find the latent error in my system.
Are Termination-Preserving Compilers Uneconomical?
The C and C++ languages have undefined behavior when a signed integer overflows. Java mandates two’s complement behavior. Java’s stronger semantics have a real cost for certain kinds of tight loops such as those found in digital signal processing, where undefined integer overflow can buy (I have heard) 50% speedup on some real codes.
Similarly, Java’s termination semantics are stronger than C++0x’s and perhaps stronger than C’s. The stronger semantics have a cost: the optimizer is no longer free to, for example, move side effecting operations before or after a potentially non-terminating loop. So Java will either generate slower code, or else the C/C++ optimizer must become more sophisticated in order to generate the same code that Java does. Does this really matter? Is is a major handicap for compiler vendors? I don’t know, but I doubt that the effect would be measurable for most real codes.
Worse is Better
Richard Gabriel’s classic Worse is Better essay gives the example where UNIX has worse semantics than Multics: it permits system calls to fail, forcing users to put them in retry loops. By pushing complexity onto the user (which is worse), UNIX gains implementation simplicity, and perhaps thereby wins in the marketplace (which is better). Pushing nonintuitive termination behavior onto the user, as C++0x does, is a pretty classic example of worse is better.
Hall of Shame
These C compilers known to not preserve termination properties of code: Sun CC 5.10, Intel CC 11.1, LLVM 2.7, Open64 4.2.3, and Microsoft Visual C 2008 and 2010. The LLVM developers consider this behavior a bug and have since fixed it. As far as I know, the other compiler vendors have no plans to change the behavior.
These C compilers, as far as I know, do not change the termination behavior of their inputs: GCC 3.x, GCC 4.x, and the WindRiver Diab compiler.
Acknowledgments
My understanding of these issues benefited from conversations with Hans Boehm and Alastair Reid. This post does not represent their views and all mistakes are mine.
“…if you are a C++ user and you think this behavior is wrong, leave a comment at the bottom of this post or send me an email…”
Aye.
Actually, I think your example where you have unspecified behaviour leading to possible undefinedness is just undefined. There was a post to this effect to comp.std.c by Mark Brader (or perhaps he e-mailed me) while I was writing my thesis. I’ll have to check my archives to see if I can find the argument.
Of course, a program may be partially undefined because it may be undefined in the face of certain inputs from the environment, and not others.
Hi Michael- If you find the code/email please send it along! It’s depressing how difficult it can be to reason about simple examples like this. I plan to write a longish blog post just on undefined behavior at some point…
I had a look and couldn’t find it, sadly. It must have been before 1997, when I started getting serious about keeping my e-mail. The argument was typical standardese lawyering about the language in the standard.
I’ve clarified a few points in your Haskell paragraph here:
Specifically, the bottom values that may be substituted for each other are only those so-called “imprecise exceptions”, where if the compiler can show that every possible evaluation order of side-effect free code produces some exception, then the optimizer is free to reduce the entire computation to *any* of the possible exceptions.
This is particularly useful when considering parallel code, where different possible execution paths lead to different partial functions.
The semantics for imprecise exceptions, and when bottom values may be collapsed, are given in this paper, as implemented in GHC :
Hi, just a note that your unsigned short example is better phrased with unsigned int. Unsigned short arithmetic is never done in C as it always extends first to unsigned int, so that code actually returns (65536,””) or (0,””) depending on sizeof(unsigned int).
Another comment: I didn’t understand “content buffered to stdout is not generally printed when the program segfaults”. True, but it is buffered! On the other hand
setlinebuf(stdout);
printf (“hi\na very long string”);
*NULL;
is undefined, but is guaranteed to print _at least_ “hi\n” (possibly more if the buffer is shorter than the non-\n-terminated very long string).
While Haskell98 offers no such facilities for distinguishing errors from nontermination (and in the theory, we call all of this bottom), GHC in fact can detect some infinite loops:
Also, errors in pure code can be caught in the IO monad; one might call these lazy exceptions.
Haskell does not have an explicit bottom type, although it can basically be denoted using type parameters in a certain way. E.g. error has type [Char] -> a meaning that it takes a list of chars (a string) and returns a value that belongs to any and all types (‘a’ is a type variable). I talk about how to translate this concept into Java and C++ in .
As for the termination rules in Haskell I recommend
tl;dr Basically, the semantics are that pure parts of Haskell pretend that ALL non-termination occurs “simultaneously” and the top level IO bit can ambiguously pick one. Of course, that’s not what happens at the implementation level and that kind of semantic wouldn’t work at all well in an imperative language where we expect to to have explicit control of effects.
Hi Paolo- Regarding your buffering example, I don’t believe you are right. The compiler is free to move the potentially-undefined operation in front of a side effecting operation. Let’s look at an example:
volatile int x;
int a;
void foo (unsigned y, unsigned z, unsigned w)
{
x = 0;
a = (y%z)/w;
}
Here “x=0” is a side effecting operation, taking the place of the printf() in your code. The % and / operators may crash the program if a divide-by-zero is performed.
If your position is correct, we would expect the store-to-volatile to be performed before any operation that may crash the process. But here’s what a recent gcc (r162667 for x64) gives:
[regehr@gamow ~]$ current-gcc -O2 -S -o – undef2.c
movl %edx, %ecx
movl %edi, %eax
xorl %edx, %edx
divl %esi
movl $0, x(%rip)
movl %edx, %eax
xorl %edx, %edx
divl %ecx
movl %eax, a(%rip)
ret
As you can see, a divide instruction precedes the store to x. Therefore, the program may crash before x is stored to. LLVM behaves similarly. I believe the compilers are correct: in the C standard it does not say (as far as I know) that side effects must have completed before an undefined behavior has its way with the program’s semantics.
Don, Edward, James — thanks for the clarifications! To call my knowledge of Haskell superficial would be an insult to superficial people everywhere :).
“…if you are a C++ user and you think this behavior is wrong, leave a comment at the bottom of this post or send me an email…”
Aye also! (I’m a C and C++ compiler *developer*. As far as I know, our compiler, which will go unnamed here, does not have this bug.)
C++ has too many undefined behaviours where the responsibility is pushed onto the programmers to just know what is going on. And normally they don’t, resulting in strange behaviour and security issues.
This one absolutely takes the biscuit. There is no way anyone would expect this sort of loop to be optimised away.
I agree with you that this is a bug in the standard and should be fixed.
“…if you are a C++ user and you think this behavior is wrong, leave a comment at the bottom of this post or send me an email…”
Aye
Just FYI, kencc on Plan 9 does preserve the termination semantics.
“unsigned short” example. If (in agreement 5.2.4.2.1/1 and 6.2.6.2)
1. sizeof (int) = sizeof (unsigned short) = 1,
2. CHAR_BIT = 17,
3. USHRT_MAX = 131071 > 65535 (no padding bits),
4. INT_MAX < 65536 (for example, no padding bits, 65535) then (in agreement 6.3.1.8/1 -- integer promotions, and 6.5/5 -- not in the range of representable values): return a + 1; -> undefined.
“…if you are a C++ user and you think this behavior is wrong, leave a comment at the bottom of this post or send me an email…”
Aye! (… Or should I say ‘Ow!’?)
Wow; excellent article. Definitely that behavior is wrong—the surprise can’t possibly be worth any optimizations it enables! I think, though, this is actually a case of “worse is worse.”
I agree that C/C++ compilers should not discard simple infinite loops. That simply confuses the programmer for little optimization benefit.
I think the more interesting question here is when a compiler is permitted to move instructions across a loop. If a loop has no side effects, is the compiler permitted to move a side-effecting instruction from after the loop to run before the loop? Here I’m assuming a loop such that the compiler can not prove how many iterations will run. It’s easy to imagine that in some cases moving the instruction would give better instruction scheduling and a better result over all if it happens that the loop does not run for long. However, moving the instruction will give surprising results if the programmer is writing a simple-minded delay loop on an embedded system.
To put it another way, in a real program that is not using loops for timing or to wait for some sort of signal, moving the instruction is on average better. But there are unusual but plausible programs which will break if the instruction is moved.
I think the standard permits the instruction to be moved. These issues do not arise in most other languages which run farther from the hardware.
.)”
Hans Boehm’s interpretation is much, much worse than the one used by compiler vendors: if the termination mentioned in the second requirement is supposed to refer to termination of the actual program, then a conforming implementation is permitted to compile any C code into any machine code it likes, so long as the first and third requirements are met and the resulting program does not terminate.
“…if you are a C++ user and you think this behavior is wrong, leave a comment at the bottom of this post or send me an email…”
Aye | https://blog.regehr.org/archives/161/comment-page-1 | CC-MAIN-2020-34 | refinedweb | 4,448 | 61.46 |
Created attachment 21689 [details]
Input spreadsheet (created in Excel)
Using the sample "recalculate all" code from this page:
does recalculate the cells, but does not seem to correctly handle the formulas in all cases.
If you call cell.setCellForumla after evaluating the cell, it seems to work fine.
Here's a simple test case.
1) Run the code below. It will open the attached simple.xls, change one cell, and save it as changed.xls.
2) Open changed.xls in Excel.
3) Change the same cell (C1, which should now contain 25).
4) Note how the calculated cell (D1) does not recalc.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Iterator;
import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFFormulaEvaluator;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
public class Recalc
{
public static void main (String[] args)
{
try
{
File ssFile = new File ("simple.xls");
FileInputStream ssIn = new FileInputStream (ssFile);
HSSFWorkbook wb = new HSSFWorkbook (ssIn);
HSSFSheet sheet = wb.getSheetAt (0);
HSSFRow row = sheet.getRow (0);
HSSFCell cell = row.getCell ((short) 2);
cell.setCellValue (25);
// recalc
HSSFFormulaEvaluator evaluator = new HSSFFormulaEvaluator(sheet, wb);
for (Iterator rit = sheet.rowIterator(); rit.hasNext();)
{
HSSFRow r = (HSSFRow)rit.next();
evaluator.setCurrentRow(r);
for (Iterator cit = r.cellIterator(); cit.hasNext();)
{
HSSFCell c = (HSSFCell)cit.next();
if (c.getCellType() == HSSFCell.CELL_TYPE_FORMULA)
evaluator.evaluateFormulaCell (c);
}
}
FileOutputStream ssOut = new FileOutputStream ("changed.xls");
wb.write (ssOut);
ssOut.close();
}
catch (Exception x)
{
System.err.println (x);
}
}
}
I tried your test code on the latest svn trunk, and got a different error. All formulas (in D1:D6) displayed as '#VALUE!' (my excel is 2007). I found one difference which was that POI currently writes tRef PTGs involved in a shared formula back as tRefV PTGs. There is a one line change to RefVPtg that will fix that problem, after which all D1:D6 display ok:
$ svn diff src/java/org/apache/poi/hssf/record/formula/RefVPtg.java
Index: src/java/org/apache/poi/hssf/record/formula/RefVPtg.java
===================================================================
--- src/java/org/apache/poi/hssf/record/formula/RefVPtg.java (revision 638958)
+++ src/java/org/apache/poi/hssf/record/formula/RefVPtg.java (working copy)
@@ -32,6 +32,7 @@
public RefVPtg(int row, int column, boolean isRowRelative, boolean isColumnRelative) {
super(row, column, isRowRelative, isColumnRelative);
+ setClass(CLASS_VALUE);
}
There have been a lot of changes in POI formula evaluation since v3.0, so I'm not sure if the same patch will fully solve the problem in that version. Furthermore, I know I haven't fully isolated this '#VALUE!' problem that I observed in the latest POI trunk, because a simple test case would not reproduce it (i.e. writing a spreadsheet formula just with tRefV PTGs instead of tRef PTGs did not upset excel). There must be some other detail (in combination with the wrong PTGs) that causes '#VALUE!' to appear.
If you attempt to apply this patch directly to v3.0, please post back if if doesn't fix your problem.
The patch so far is tiny, but there are a few issues that need much more investigation:
1 - Up until now, I had not seen any evidence of why POI bothers with PTG token classes at all. All junit tests continue to run when that code (setClass/getPtgClass) is disabled. The attached spreadsheet and test code seems to be the first concrete example of why it might be necessary.
2 - It does not make sense to have ptg-class based java sub-classes of Ptg in the presence of a method "setClass(byte)" which can change the ptg-lass.
3 - POI unpacks shared formula records, but doesn't seem to re-pack them together when the spreadsheet is re-written.
I'm actually using version 3.0.2-FINAL. (3.0 was the closest in the drop-down.)
Do I still need the patch?
I should also point out that the problem is a bit tricky. I started with a very large, complex spreadsheet - and kept removing data until the problem went away. The attached xls is the simplest version in which I could reproduce it.
If you simplify that spreadsheet at all - remove a row or column for example - the code works fine.
(In reply to comment #2)
I just tried the patch in 3.0.2-FINAL. Both the before and after behaviour was as noted above. I.E. it should work for you.
Perhaps the observation of '#VALUE!' in the formula cells is due to my version of Excel. Which version are you using? Just for reference, can you describe more clearly what your Excel does with 'changed.xls'? My observation of Excel 2007 is:
- The formulas all appear as '#VALUE!'.
- The correct formula text is still visible in the formulas.
- Pressing the <enter> key after selecting the formula causes it to evaluate properly. This action seems to translate the tRef PTGs into tRefV PTGs (observable after re-saving).
(In reply to comment #3)
> I should also point out that the problem is a bit tricky...
> If you simplify that spreadsheet at all - remove a row or column for example -
> the code works fine.
That makes sense. The specific bug that this one-line-fix addresses is the loss of the ptg-class when translating a 'shared formula'. From what I remember on previous bugs (bug 44449), excel has a minimum number of rows before it will use a shared. I noticed a threshold of 6 but I'm not sure if that's universal. POI does not erroneously disturb the ptg-class when reading/writing non-shared formulas.
I started looking at this last night, but didn't finish before Josh also took a look...
I've added a unit test to svn -
src/scratchpad/testcases/org/apache/poi/hssf/usermodel/TestFormulaEvaluatorBugs.java
Like Josh, if I start with your file, excel gives #VALUE if I change things. If I start with an empty file, it seems fine.
Interestingly, gnumeric and openoffice have no such problems with the files.
With Josh's one line fix applied, the file from my unit test works fine in excel. So, I've committed Josh's fix to svn
My Excel behaves exactly the same as yours. I have version 2003 (11.8206.8202).
I'll try the patch (I don't have the POI source yet; I had just downloaded the jars. Thanks. | https://bz.apache.org/bugzilla/show_bug.cgi?id=44636 | CC-MAIN-2020-40 | refinedweb | 1,073 | 52.15 |
Renders a Sprite for 2D graphics.
//This example outputs Sliders that control the red green and blue elements of a sprite's color //Attach this to a GameObject and attach a SpriteRenderer component
using UnityEngine;
public class Example : MonoBehaviour { SpriteRenderer m_SpriteRenderer; //The Color to be assigned to the Renderer’s Material Color m_NewColor;
//These are the values that the Color Sliders return float m_Red, m_Blue, m_Green;
void Start() { //Fetch the SpriteRenderer from the GameObject m_SpriteRenderer = GetComponent<SpriteRenderer>(); //Set the GameObject's Color quickly to a set Color (blue) m_SpriteRenderer.color = Color.blue; }
void OnGUI() { //Use the Sliders to manipulate the RGB component of Color //Use the Label to identify the Slider GUI.Label(new Rect(0, 30, 50, 30), "Red: "); //Use the Slider to change amount of red in the Color m_Red = GUI.HorizontalSlider(new Rect(35, 25, 200, 30), m_Red, 0, 1);
//The Slider manipulates the amount of green in the GameObject GUI.Label(new Rect(0, 70, 50, 30), "Green: "); m_Green = GUI.HorizontalSlider(new Rect(35, 60, 200, 30), m_Green, 0, 1);
//This Slider decides the amount of blue in the GameObject GUI.Label(new Rect(0, 105, 50, 30), "Blue: "); m_Blue = GUI.HorizontalSlider(new Rect(35, 95, 200, 30), m_Blue, 0, 1);
//Set the Color to the values gained from the Sliders m_NewColor = new Color(m_Red, m_Green, m_Blue);
//Set the SpriteRenderer to the Color defined by the Sliders m_SpriteRenderer.color = m_NewColor; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2018.2/Documentation/ScriptReference/SpriteRenderer.html | CC-MAIN-2021-49 | refinedweb | 245 | 51.28 |
Default interrupt vectors shared by Cortex-M based CPUs. More...
Default interrupt vectors shared by Cortex-M based CPUs.
Definition in file vectors_cortexm.h.
#include "cpu_conf.h"
Go to the source code of this file.
Number of Cortex-M non-ISR exceptions.
This means those that are no hardware interrupts, or the ones with a negative interrupt number.
Definition at line 49 of file vectors_cortexm.h.
Use this macro to define the parts of the vector table.
The entries in the vector table are sorted in ascending order defined by the (numeric) value given for
x. The Cortex-M base vectors are always defined with
ISR_VECTOR(0), so the CPU specific vector(s) must start from 1.
Definition at line 41 of file vectors_cortexm.h.
Default handler used as weak alias for not implemented ISR vectors.
Per default, all interrupt handlers are mapped to the dummy handler using a weak symbol. This means the handlers can be (should be) overwritten in the RIOT code by just implementing a function with the name of the targeted interrupt routine.
Hard fault exception handler.
Hard faults are triggered on errors during exception processing. Typical causes of hard faults are access to un-aligned pointers on Cortex-M0 CPUs and calls of function pointers that are set to NULL.
Non-maskable interrupt handler.
Non-maskable interrupts have the highest priority other than the reset event and can not be masked (surprise surprise...). They can be triggered by software and some peripherals. So far, they are not used in RIOT.
This function is the default entry point after a system reset.
After a system reset, the following steps are necessary and carried out: | http://riot-os.org/api/vectors__cortexm_8h.html | CC-MAIN-2018-43 | refinedweb | 278 | 59.9 |
Agenda
See also: IRC logs day 1, day 2
Photos: Jeremy presenting XSCH, meeting room from front, meeting room from back
Fabien Gandon's summary.
Jeremy: the TAG's reluctance to decide httpRange-14 is becoming a hindrance to the progress of a couple of our task forces
<danbri2> +1
David: Addison Phillips asked if SWBPD would consider RFC 3066
<RalphS> RFC 3066bis and the Semantic Web [Addison Phillips, 2005-03-02]
Phil about to talk about ODA ...
(ontology driven architecture)
phil: at last f2f proposed to send
message to IT community who have heard about SW ...
... to say: yes you can use SW to build systems ....
... here is primer on potential, links to technologies, benefits etc. ...
... SE has published early draft of a note ...
... since then have had discussion around content ...
... posted latest version today to mailing list ...
... but has some potential problems - can discuss here today ...
... propose to start there. ...
<RalphS> Latest version of SE Draft [Phil 2004-03-03]
Phil: thanks all who helped with the note so far ...
have made substantial changes over last 2-3 days ...
<aharth>
phil goes through older version of the note ...
describes what has changed in more recent version in response to comments ...
Phil: ... still some issues around style and
administrative content ...
... abstract and intro have been cleaned up, more focussed
MikeU: ... document lacked focus, no clear
objective, no target audience, too abstract (referring to old version) ...
... needed to be more meaty ...
... one way forward - specify audience ...
... objectives: to get people excited about possibilities for sw technologies ...
... MDA from UML is powerful framework, we believe it can be augmented with SW technologies ...
... makes it possible to publish & discover ontologies etc...
... then discussed benefits ...
... with automated consistency checking get better quality of software, ...
... maintenance costs reduced because of tie between model & software ...
DavidW: ... aligned with Mike in general ...
... section 3 (meat) did not reflect title ...
... expected to see more focus on ODA ... application of RDF/OWL to this aspect ...
... comments (1) concerns about tone/style (2) concern about direction of note ... latter more important than former ...
... uncomfortable with note moving forward as it stands .... but lots of value in this note ...
... would like more focus on ODA ... find direction to take this interesting note forward.
Phil: issue of direction ... there is agreement
that this thing is valuable. ...
... issue of charter for WG and whether directional note in this context is appropriate ...
... second point: grounding this note in current technologies ....
latest version includes section about why OWL is relevant here.
DavidW: would surprise me if some form of
ontological approach wasn't used in case tools / case research / other form
of MDA in the past ...
... need literature survey to find areas where ontology approaches were taken ...
then contrast with where OWL makes it better ...
i.e. here is a big win for SW technologies, here is background to people trying to do this thing in the past, cf. with OWL.
Phil: met to discuss this last night ... have
literature refs ...
... after discussion agreed more merit.
ChrisW: was working in this 5 years ago ...
... 30 years of work in using declarative technologies in developing software ...
... what makes it different now is that, although OWL is nothing new wrt KR technologies. ...
... but joining it with the web ... global, accessible, more relevant now, greater chance to succeed now ...
... similar to Java, nothing new but you have global accessibility to a standard ...
... same goes for SW technology.
Guus: assuming we get something out there
...
is it convincing and concrete enough to have impact ...
how related to ODM work? ... talk about how to use these things in practise ...
(strawman) give me a reason why we should publish this ?
JeffP: good idea to have as many comments as
possible for current draft note ...
... agree with Mike's comments in that current note is too general ...
jjc: I am not part of agreement, do not
believe document can be rescued in any way ...
dissent because: no clarity about any content, section 3 is the important part, but only contains hype ...
... nothing of value in the note ...
... abstract does not relate to content ...
... this wg should not publish this doc ...
... document was circulated too early ...
... where is the value? What is worth doing further work on? Needs convincing.
<danbri>
<danbri> via
MikeU: maybe could pull out some interesting
points from the first draft to put into another doc ...
... potential for note on SW technologies for SE ...
... to jjc - on topics of automated se & MDA of value to write about relation of sw technologies to this?
jjc: not appropriate for this WG.
... lots of other interesting ways that this could be explored ...
... in contrast the OMG document lacks directional big picture issues, but in terms of useful content it was worthwhile for our user community ...
... i.e. if someone asked me how to use SW technologies for se would direct them to OMG doc. ...
danbri: scope for middle ground?
jjc: no.
phil: question is who contributed to current
document ...
contributors were phil, jeff pan and daniel oberle.
Guus: to evan - should there be a link between the SE draft and the ODM draft?
evan: yes. should this be an 'how to use ODM?'
doc ... no.
... there is a good place for a position paper document ... saying there is huge potential, technologies are there, we just need to try it ...
... and ODM is trying to do that ...
... but we do need something that is more exciting than the ODM draft ...
(jjc nods)
jjc: not sure about hypothetical question ...
evan's description of doc scope sounded more positive ...
... but a long way from being convinced that appropriate for this WG to write a position paper ...
... but a position paper giving a roadmap doc is valuable ... but what is appropriate forum. ...
Steve: needs to understand what is and is not
appropriate for this WG. ...
thought it was about defining best practices and advancing deployment, draw on work already done, on that basis describing best way ...
then how can we further deployment ...
... impression from the SE draft is that it is an exhortation to start doing things, rather than review of what has been done ...
... need to start doing things in practise, then write about doing them (rather than other way around).
DavidW: sensitive to charter, also needs of
user community ...
charter says: guidelines that are not based on former practice are out of scope ...
new research is out of scope ...
but in practise have a user community trying to figure stuff out ...
there are real world problems where semweb as a whole and the business community could benefit from more standard ways of integrating semweb into se ...
guidance to user community is why we are here ...
agrees that doc is no go in current form ...
but has strong feeling that doc on ontological additions to se practice ... building on 30 years of research ... focussed note on how some of previous approaches could benefit from a semweb approach ...
would be a good note and encourage TF to go there.
ChrisW: only skimmed the doc ...
sees big opportunity, supports idea of this TF ...
se community has lots of momentum into area of overlap with semweb ....
(but maybe doesn't know it)
important time to bridge to that community if we want some
influence there ...
... the time is right, the technology is right ...
need to take advantage of opportunity to connect to the community ...
if not they will invent their own technology and we lose a customer.
jjc: there is a case for making some sort of
document in this area ...
... maybe easier to connect in timely fashion without going through W3C process ...
... what about other forum e.g. se conference ...
... not opposed to idea that something is publishable ... but still needs to be convinced.
Guus: be happier if note was based on identifying relationship between standards in se community and standards in the web world, abstract from that, high level view for what that could mean in the future ...
MikeU: general note probably not in scope ... but could have more focussed note that is in scope.
Guus: happier with a document that is about linking standards then adds a 'vision' section to say where we could go with this ...
(instead of just a 'vision' doc)
Evan: not sure what you mean ... because ODM is about linking standards.
guus: could build tools based on ODM that translate UML to OWL ...
so note could talk about this sort of thing then add a 'vision' section ... ?
<Zakim> aliman_scribe, you wanted to say that it sounds like what is needed is a workshop???
Alistair:I don't know what I'm talking about,
but this sounds like networking with people, setting up workshops,
outreach
...sounds like out of scope?
<Zakim> jjc, you wanted to record dissent
phil: has been excitement about workshops ...
will probably happen anyway ...
but why do it through W3C? people pay attention to W3C ...
lots of people looking for advice in this area .. look to
W3C as authoritative.
but look at it from the outside from a professional who is desperate for guidance who knew that W3C had been playing around with this stuff ...
but then didn't publish anything because of procedure .. looks bad. ...
what would be of benefit is if control of current SE draft is passed to someone else?
danbri: W3C has been changing ... used to prepare things in private fora ... drafts like this not findable by the public ...
process is evolving to do the work in public ... but there is lack of guidelines for building drafts in public view.
draft has potential but needs more practical stuff in
addition to vision ...
... there are use cases from e.g. extreme programming ....
... practical examples from collaborative SE .. ?
guus: suggest that a purely visionary document is out of scope for this WG, outside our charter. ...
<danbri> (my ref: pragamatic programmer,
could live with a document that contains a visionary section but contains practical links between communities, clearly established pragmatics which gives some beef to the vision, acceptable?
Phil: yes.
<danbri> ...chat w/ Dave Thomas,
Evan: what do we mean by best practices & deployment? don't agree with jjc, has wider view about the goal of this group.
guus: discuss tomorrow afternoon.
... have clear charter wrt this.
<danbri> (results of hacking w/ Dave Thomas, )
Evan: But have goal to see deployment of
current tools in new domain ... which is goal of having paper e.g. SE draft
...
... test is we need examples of people already using this in this domain but nobody is doing that.
<danbri> also: is relevant
<tbaker> - the Charter
MikeU: i.e. hard to talk about best practice when nothing is being practiced.
DavidW: uncomfortable with purely visionary work .. but try to find middle ground ...
i.e. there is significant body of former work ... so could put out a doc that says: here's how to take the ontological approach *with OWL* ...
i.e. here is a big win for you by using semweb technologies ...
Phil: hears that we have agreement to proceed,
but need to ground in current technologies and real world expectations.
... objections?
jjc: some characterisations of the possible path for this document sound ok ... certainly not against some of the characterisations that have been suggested ...
liked Guus's characterisation: grounded in relations between work already done in SE community, work already done in SW community, links between ...
<danbri> (maybe we could tweak the taskforce charter to capture whatever this consensus is...?)
Guus: hear consensus to move fwd with this doc
in this direction.
... proposes action to phil to update SETF charter accordingly.
<danbri> danbri: (said) maybe we could tweak the taskforce charter to capture whatever this consensus is...?
<em> this is a 'bridge' document between communities... the more we study the bridge in terms of concrete connections the more weight this bridge can support in bringing people over and understanding how these communities relate
Phil: thinks doc would benefit from someone
else taking charge.
... any volunteers?
jeff agrees to take the lead with the SE draft.
<danbri> (applause for jeff)
ACTION: Phil to update SETF charter in light of new focus for SETF draft note
<danbri> JeffP, I'm moblogging a photo of the room so you can visualize us ;) should show up soon in
Tom: introduces document using slides
<DanC_> (hmm... identify terms with URIs... rather use URIs for terms? URIs like rdf:type don't identify terms; they are terms)
'Bleeding edge' means where definitive answers are not yet available
<danbri> (DCMI is Dublin Core Metadata Initiative, see )
In third section each issue is treated with two paragraphs
indicating different positions and links to further reading
DCMI documents can fit into this VM note
re other vocabs that are online in a "pre-SemWeb" way...
(many are thesauri; i wonder the relationship to SKOS...)
ACTION: TomB to post URL to his VM TF slides
<DavidW> I like TomB's ideas for a 3rd party endorsement model for vocab extensions
<DanC_> simplest way for DCMI to endorse such a statement is to say it in a document they publish, seems to me
<DavidW> Endorsement is different from original assertion
Shared formalisms -last slide - particularly between foaf dc and skos communities
<DanC_> not necessarily, DavidW
LoC issue to do with endorsement is current concrete problem facing DC community
LoC = Library of Congress
<DanC_> endorsement at the document level is straightforward. Endorsement at the statement level is more tricky.
Tom finishes talk.
<Zakim> danbri, you wanted to mention from SWIF F2F on tuesday (DCMI can just say things re MARC on their site; but can explore digital
DanBri draws attention to talk by José
<Zakim> aliman, you wanted to talk about note scoping
Ralph points out that signing is new work, and hence out of scope for this paper
Alistair: Tom wants us to discuss scoping of current note
<danbri> (also I should've said, just pls take a look thru Jose's slides, if you missed his talk... was only an aside re this current agenda item)
Alistair: however title seems inappropriate e.g. "managing a vocab for SW - review of current practice"
<DanC_> +1 title should say "this document asks more questions than it answers"
Alistair: best practice may not be current practice
TomB: the middle bit of doc is good practice
Alistair: I've just changed my mind ...
Ralph: I hope this TF will propose best practice
<danbri> (re how we do stuff in FOAF scene, last thing I wrote on this was in )
Ivan: these questions come up a lot, examples of how people approach these questions would be very valuable
Mike: suggest title should be "Managing Vocabs on the SW" not "for"
Jeremy: I think "best practices" means "best current practices"
<Zakim> DanC_, you wanted to offer to fly by some TAG issues that seem VM-related, now that VM moved to today
Alistair: howabout "Managing SW Vocabs"
DanC: couple of TAG issues related: numbers 8 14 35
<danbri>
<danbri>
<danbri>
<danbri>
DanC: namespaceDocument-8
... this is vocab management related ....
... TAG has been discussing RDF Schema's XML Schema, RDDL, HTML docs
... the XML Schema validation service will follow RDDL docs
DavidW: what do you want this TF to do?
DanC: I'm just drawing your attention to these
... httpRange 14
... what is the range of http deref function?
... this is the hash versus slash issue
... RDF in XHTML 35 will be elsewhere on agenda
RalphS: all three of these block deployment of
applications and some of our TFs
... I would propose that WG find a TF that is responsible for each of these three
... I suggest we give actions to TF to develop WG position on each of these
JJC: seconded
<danbri> (re -35/xhtml, that is a part of the namespace doc issue)
<DanC_> (oops; I forgot one... "social meaning" has a home in the TAG issues list )
DavidW: xhtml35 is in rdfhtml tf
TomB: I would like to get a short note out
quick, and not get hung up on these issues
... going beyond recording current TAG position is something we should do later
DanC: acknowledging existence of issue is fine,
Danbri: http-range14 is a bigger block than the
others, since namespace docs can be changed with less disruption than
namespace URIs
... prioritizing
... as namespace owner with URI ending in /
... with limited attention we should work on this issue not namespace docs
... easier to change namespace doc than namespace URI
<DanC_> (you could be more explicit, danbri: it's easier to change a namespace document than to change all the documents that refer to it)
Guus: asking DanBri should we be taking a position and reporting that to TAG?
<danbri> (nice formulation, danc)
Ralph: I would like it to be more explicit, the WG should acknowledge its responsibility to state a position
<danbri> (I don't think VM TF first Working Draft needs to wait for a position on http-range-14)
Ralph: we should not punt this to TAG
<danbri> +1
<jjc> +1
Jeremy: ask for straw poll on httpRange 14
Guus: prefer to have discussion tomorrow
DavidW: issue httpRange14 deferred to tomorrow 12 and 1
TomB: I want to ensure we set milestones for VM
note
... March is difficult
... is it reasonable to have first draft by mid-May
Guus: mid May is a bit far away
TomB: still awaiting some input
<DavidW> We are over time for this TF now and need to deal with immediate planning issues. DanC has raised the issue and we will determine a WG consensus on it tomorrow when we have time.
<danbri> (re timing/contribs, I only have time to commit in April... march is taken for EU bids; may is also uncertain)
<DanC_> (er... it's an editor's draft now.)
TomB: if input came mid-April, then we could circulate an editors draft by end-April as 'candidate working draft'
<danbri> (new terminology, but also used in DAWG...)
<DanC_> i.e. a proposal from the editor to the WG to publish as WD
Guus: who in particular are we waiting for? (inputs to VM note)
TomB: pillars are DCMI Foaf skos and relevant TAG issues
DavidW: what is a realistic time? (asking TomB)
TomB: for foaf we need text
<DanC_> DanC: [after FYI re tag issues]. I encourage you, while working on vocabularies, either as a WG (ala SKOS) or individually (ala foaf, ...) to be aware of your approach to these issues and think about whether you'd advise others to do likewise or not.
DanBri: I can't do it in March, but can in April
<danbri> (well, maybe last week of march...)
Alistair: can we do it faster
TomB: I want input from DanBri and Libby, and they are not available in March
Alistair: but the foaf bits are only an hour's work
DanBri: have you factored in procrastination time
Guus: it would be good to have this out soon
Jeremy: let's publish without foaf just a tbd
TomB: draft is currently in Wiki
<danbri> there is
<danbri> (latest?)
<danbri> ah, Tom: I have a version that goes beyond that
<danbri> I see " TASK: DanBri or Libby - Describe W3C usage of the word "namespace""
DanC: I read through this, but I can see it taking significant time, it's not ready to go
<danbri> a big job in itself
TomB: agreed
Guus: with a midMay schedule for WG vote
... timeline is that we aim for WG vote midMay, 'candidate working draft' two weeks before
Alistair: trying to get input from the group on
technical bits
... presents SKOS Core Vocabulary Specification W3C Working Draft In Preparation
Alistair: policy statements: naming (how we form URIs), persistence (URI should stay for a time), change (how URIs change), maintenance (how vocabularies evolve)
<DanC_> (hmm... hasn't really been enacted)
danbri: vocabulary definition should end with a hash
<DanC_> <owl:Ontology rdf:...</> ...
<danbri> aside to report RDFS namespace practice: [[
<danbri> <owl:Ontology
<danbri> rdf:about=""
<danbri> dc:
<danbri> ]]
<DanC_> is that # really in there, danbri? that surprises me
<danbri> yes
Alistair: uppercase/lowercase convention for classes and properties
<danbri> alistair, is my review comments on this; quite a lot of comment re policy aspects (+ draft text)
danc: points out that the Persistence Policy is a draft
<DavidW> jjc, aliman has raised it as an issue and has an intention of covering it as part of the document review
<Zakim> jjc, you wanted to ask about RDF and OWL vocab management??
ACTION: Ralph to inform the W3C Communications Team that we intend to cite as "W3C URI Persistence Policy"
<DanC_> what change was just agreed by the editor?
ACTION: Alistair to change the wording of the link to "the persistence policy at URL http:"
<RalphS> (some change which I hope doesn't result in a URI-in-your-face)
<DanC_> ew ew ew. please don't do "policy at..." i.e. don't use in-your-face URIs, please.
<RalphS> i.e. I hope the editor takes the intent of the ACTION wording and not the precise letter of that wording
Alistair: re change, three levels of stability: unstable, testing, and stable
<danbri> tom: see 'dcmi namespace policy'
<DanC_> google nominates
<DanC_> Namespace Policy for the Dublin Core Metadata Initiative (DCMI) Date Issued: 2001-10-26
<danbri> (alistair, see 8) Policy Statements in my )
em: Is the persistence policy in prose or in machine-readable format? I'd like to see stability statements made in machine-readable form within the schema declarations
<Zakim> danbri, you wanted to try to smmarise my comments from
ACTION: Alistair to think about machine-readable change policies
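One possible shape for the machine-readable stability statements em asked about, sketched in Turtle. This is only an illustration of the idea, not an agreed design: the `vs:term_status` property is borrowed from the SW vocab-status vocabulary used in the FOAF scene, and the status values shown are examples.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix vs:   <http://www.w3.org/2003/06/sw-vocab-status/ns#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# Each term carries its own stability level inside the schema itself,
# so tools can warn when an application depends on a non-stable term.
skos:prefLabel  vs:term_status "stable" .
skos:Collection vs:term_status "testing" ;
    rdfs:comment "May change in a future revision." .
```

A processor could then filter a vocabulary by status rather than relying on prose policy statements alone.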
<Zakim> DanC_, you wanted to note and how I got stumped on TAG issue 41 and ontaria policy
danc: the porting document's focus is thesauri; various tag issues are related to the issues raised in the draft
tbaker: articulate the larger context in which maintenance of terms is embedded
<Zakim> jjc, you wanted to talk about life after WG
ralphs: javascript probably doesn't conform to pubrules; is it important to use in the SKOS Core spec?
<DanC_> (jjc, I'm not sure how OWL got out without a persistence policy in its namespace document... current pubrules prohibit that)
<danbri> [[[
<danbri> I am not personally in a position to make such pledges. Something
<danbri> milder:
<danbri> [[
<danbri> The Working Group is committed to a public, consensus-driven
<danbri> design environment for SKOS, and to this end conducts SKOS-related
<danbri> discussion in public, in particular drawing on feedback from the
alistair: presenting maintenance section that describes the procedure to change skos (consensus...)
<danbri> Semantic Web Interest Group mailing list public-esw-thes@w3.org .
<danbri> ]]
<danbri> ]]]
<tbaker> 1+ the points seem right
alistair: should we leave the four things (naming, persistence,...) in the documents?
ACTION: Alistair to change the links to examples
<DanC_> (I think the lack of established norms is the raison de etre of this BP WG)
<danbri> (+1 danc; I think we're test driving some VocabManagement ideas via this spec too)
<DanC_> (quite)
ralphs: what's the maintenance policy of a Note?
jjc: should we link to the process document (which we took parts of)?
danbri: three draft paragraphs to connect w3c processes with skos processes
guus: publish all skos core and skos guide together?
alistair: review status of guide: approved for first working draft
<DanC_> (ah! now I know why we're not talking about thesaurus porting very much.)
guus: planning on timeline for three documents?
alistair: 24th to go for all three documents
... third one reviewed once, second draft in review at the moment
... target 17 March for updated versions of all 3 docs for discussion on 24 March
guus: what is a good application as convincing
application for semantic web?
... business area and non-profit areas
libby: slides, might answer these questions
... weblog has descriptions about applications, using grddl and xslt
... doap vocabulary
... weblog difficult to use
... doap (description of a project) seems to be popular and fits well
<danbri> re DOAP, see
libby: better workflow would be that people use
doap to describe their projects themselves and TF links them
... what should be the criteria for inclusion?
... presents doap descriptions from the weblog in a faceted browser
... maybe combine DOAP submission with swig mailing list
<Zakim> DanC_, you wanted to say moving DOAP out to the projects sounds great, but let's not let the tools discussion dominate. I think the Nature RSS paper is the best answer to Guus's
mike: there should be a form to create the descriptions
<danbri> (see for a form that generates DOAP project descriptions)
danc: good strategy using doap, but tools
discussion shouldn't dominate
... rss nature paper best answering guus' question
libby: what should be the policy for inclusion?
... open source downloads and applications available online
guus: criteria from the semantic web challenge (maybe a subset)
<danbri> see
<danbri>
mike: don't exclude big-impact semantic web applications because they're not downloadable
<Zakim> danbri, you wanted to be liberal re inclusion
danbri: swc site uses frames
<Zakim> RalphS, you wanted to comment on persistence
ralphs: persistence is of varying importance to different user groups. For me, the most important group is those thinking about adopting SemWeb so listing current work, not stuff that has gone 404, is most useful.
<danbri> (re Frames, see )
<Zakim> pepper, you wanted to ask if TM apps qualify...
steve: do topic map applications qualify?
... ie. omnigator
<Zakim> em, you wanted to suggest a criteria
<DanC_> omnigator... hmm... I can imagine a convincing case for its inclusion.
<danbri> RSS/Nature paper was in D-Lib magazine, DanC. See
?: what about non-english applications?
<BalajiP> Nature RSS paper:
<pepper> well, the omnigator is a bit special, DanC_, because it *does* support RDF
<pepper> (it's also free, in case anyone was wondering)
<Zakim> FabGandon, you wanted to talk about different uses of the list and process
mike: the selection shouldn't be too restrictive
FabGandon: also concerned with the process (review in the group?)
davidw: possible to separate between open and
closed source and profit vs. non-profit
... maintained and not maintained
guus: discuss the process of inclusion tomorrow?
<DanC_> The Role of RSS in Science Publishing
<DanC_> D-Lib Magazine
<DanC_> December 2004
<DanC_> Volume 10 Number 12
<pepper> a tribe of competing 'street' standards :-)
ben: xhtml wg is working on rdf/a
... we hope
... we're getting an update today
... re GRDDL we are considering moving towards REC
... is currently a CG Note
... the only one so far existing at W3C
... a bit of a no-mans land
... discussion that having it as WG-based REC would help
guus: what's status of comments on GRDDL?
... what would REC-track take care of?
ben: we need to make sure list of usecases is complete
<danbri> HTML WG arrives
<DanC_> i.e. Steven Pemberton,
stevenp updates us on status
<danbri> stevenp: "we are discussing draft for LC WD of XHTML 2"
Steven: the xhtml2 wd has been updated with the rdf stuff and we're discussing that now
<danbri> WG:
<DanC_> (which of the links atop is the relevant one?)
Steven: the mapping to triples is in there, though not in the depth that it is in the rdfa document
<RalphS> [Editor's] draft XHTML 2.0 24 February
<RalphS> (Member restricted)
<Steven> We weren't aware
Steven: rdf/a document needs to be updated
<Steven> but no prob
cool
<ChrisW> what section should we be looking at?
<danbri> meta and rdf is.
MarkBirbeck: history - decided it would be better to separate the metadata stuff from the html stuff - this is the rdfa document. xhtml2 draft represents the final thinking
<DanC_> (ah... ok, found some examples in 22. XHTML Metainformation Attributes Module )
stevenp: the only change is that only xhtml:about is inherited, everything else has to be declared explicitly
<DanC_> (hmm... "Sorry, Forbidden" at )
MarkB: thought they were finished with rdfa and
then issues came up like making a page its own foaf page or own rss feed.
... href becomes special to solve this problem
<Zakim> danbri-laptop, you wanted to ask HTML guys about test cases and QA of RDF/A design (RDF syntaxes are hard to test) (when they arrive here...)
danbri: being sure you're right - in rdf wg first one was prose only, rdf core used testcases - do you have time to do this sort of thing?
mark: has tried to use the testcases, but many of them don't carry over
danbri: the methodology rather...
mark: right, tried to convert rdf document
testcases to rdfa and then see if we get to the same n-triples, but would
like some more tests
... using real documents eg foaf, be good to agree on some documents to use
<danbri> (yup i think having real, use-casey examples + their ntriples would be better than using obscure corner cases)
ivan: richard ishida and others have a group
called international tagset
... these might be close to what you're interested in
ralph: is thrilled to see a document updated.
his expectation was that rdfa would get merged into the working draft and
then disappear as a separate document - I think you're saying that it has
life as a separate document. priority for him is the working draft not rdf/a
... is the material from the standalone document that we reviewed going in as is or substantially different?
stevenp: not substantially different but b-nodes stuff didn't make it in to this draft
<ChrisW> can someone post the RDF-A url
<benadida>
<benadida> (rdf/a)
ralph: jeremy implemented the oct draft and
seemed to end up with many more triples than expected - was that a prose
mismatch to number of triples ... etc
... repeating jeremy's tests would be good - have you looked at that?
<danbri> (ralph's point is exactly why I'd like a test case collection...)
steven: think all you have to do is change the namespace. otherwise it should just work
mark: the audience for rdf/a and xhtml2 is
quite different
... no in-depth explanation of rdf in xhtml2 draft
... point of rdf/a was to have something that we could all discuss...
... this should not stop xhtml2 going on its way but steven and I do want to finish rdf/a
... xhtml2 document not sufficient to determine if you get 3 triples or 5
steven disagrees
guus: was your intention to make the draft less precise, or is this an error
mark: deliberate, for a different audience
steven: suggests that you look at it and feedback to them
ralph: it's important that the syntax is tied
to the level of mathematical precision that rdf has
... and that level was in the oct draft
... it needs to be there somewhere
mark: the whole reason for us producing rdf/a was to have these discussions
ralph: his expectation was that rdf/a was a vehicle for discussion but that it would be reintegrated into the mainstream of xhtml2
mark: not necessarily a problem but one motivation for taking it out was so that other languages could use it e.g. svg
Mark: svg has a 'metadata' element, which as far as i can see is completely wasted...
ralph: fully supports that, although there may be objections that its out of scope
guus: we're not trying to bring you more work...and we can probably help
danbri: compromise perhaps: link to the examples and to an exactly equivalent rdf/xml version
ralph volunteers to help
ralph: the taskforce on behalf of the wg should provide some input
ben: maybe usecases, rdfa/xhtml2, rdf/xml could form a document
danc: happy to review it when it's on the TR page
stevenp: not going to release a working draft
version
... suspect 2 last calls will be needed anyway
<DanC_> (er... who ended up with the action there?)
<benadida> (I ahve the action)
<DanC_> (tx)
stevenp: one thing that's emerged from
discussions today is that saying what meta and rel relationship is to rdf
... link/meta [somethings] are now the same things [something] (sorry
<Steven>
ericm: using rdf/a to declare this stuff would be an excellent testcase, and educational
mark offers to show a real live xhtml2 document
<danbri> (ben, I'll help if needed w/ schema stuff...)
mark: an rss reader needs multiple lists of rss feeds, like OPML, but with more information
<benadida> (dan, sounds great, taking that action)
<benadida> (danbri, that is)
<danbri> (I don't think it's an action yet, but if it's one I can do in April, I'll certainly take it...)
mark: shows an xhtml2 document, with some meta names, an 'nl' (navigable links) tag; meta statements in the body of the document (inheriting from a previous href immediately above it); has an image inline - the document is the metadata
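A rough sketch of the kind of document Mark demonstrated. This follows the general approach of the October RDF/A draft rather than its exact final syntax, and all URIs and property choices here are invented for illustration:

```xml
<html xmlns="http://www.w3.org/2002/06/xhtml2/"
      xmlns:dc="http://purl.org/dc/elements/1.1/">
  <head>
    <!-- metadata about the document itself -->
    <title property="dc:title">My feed list</title>
  </head>
  <body>
    <!-- the href on the enclosing element supplies the subject for the
         meta statement nested beneath it, per the inheritance rule
         discussed above -->
    <a href="http://example.org/feeds/planet.rss">
      Planet feed
      <meta property="dc:creator">Alice</meta>
    </a>
    <!-- intended triples (roughly):
         <this-document> dc:title "My feed list" .
         <http://example.org/feeds/planet.rss> dc:creator "Alice" . -->
  </body>
</html>
```

The point is the one Mark made: the document is the metadata, rather than metadata living in a separate RDF/XML file.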
(I didn't record an action)
<DanC_> MarkB_, would you please mail that example to public-rdf-in-xhtml-tf ?
<benadida> (the action is to provide help to XHTML WG in defining an RDF/OWL schema for the special properties defined in their doc)
[discussion of escaped / unescaped html in meta]
guus: timelines and plans need to be discussed - any more questions about rdf/a?
danbri: is it possible to validate that people haven't left out namespace declarations?
<DanC_> (hmm... dunno what relaxng does with qnames in content)
mark not sure.
<Steven> mimasa?
gavin(?): [missed the detail, sorry]...
<jjc> schema validation includes qnames in attributes
<jjc> (XML Schema)
ben: this is great, looking forward to giving you some testcases
ACTION: DanBri help write an rdf schema for the additional xhtml2 namespace elements
though he may need a telecon and be pinged with the specifics
<Gavin> gavin asked about how to distinguish between metadata on resource referred from parent element href which is inherited versus metadata on parent element
ben: our plan is to cheer you on with rdf/a and
fully endorse this for xhtml2; prior htmls endorsing GRDDL, maybe a WG note
or a rec
... need to discuss what this would need
<Gavin> gavin understands that if href is inherited as an about, that you'll be able to associate the metadata with the linking element itself (instead of the referenced resource) by doing something like about="" on the meta element
guus: also thinking of writing a very short
note about the 2 possible routes for rdf in html, with some examples
... are you planning anything similar? if so we should coordinate
stevenp: that fits
jeremy: from what I've gathered, the most
unwelcome change from the oct doc is the dropping of support for bnodes, so
we can't serialize all rdf graphs
... I think worth highlighting as a significant change
<benadida> (I had not realized the bnode change either)
stevenp: if a significant problem, review it and discuss how we can get it back in
MarkB: I hadn't realized that bnode support got dropped until Steven mentioned it just now; I will trace back what happened. I think we did have a simple solution...I think we can incorporate it fairly easily
<DanC_> (hmm... something like "pronoun" rather than bnode?)
stevenp: would like to find a way of expressing it that doesn't use rdf technical terms like 'bnodes' - should be something your grandmother would understand
<danbri> re bNodes, FOAF use case I think will need bnode support to capture common FOAF idioms;
mark: at one point we had xpointer thing, then bnode as an attribute, then object or thing or thingy, need to retrace thought process
guus: feedback on this before last call?
<DanC_> (er... hey... you can't make last call comments to yourself)
<tvraman> raman: virtually here --- rdf:role=observer
stevenp: last call is as soon as we can, so a last call comment makes sense
jeremy: we had a request from the html wg to help with rdf/a - now you need more help with the schema document - which is cool - but would like a better communication process
stevenp: weekly 30 min call?
guus: the rdf in html tf calls obvious point of contact
mark: not been clear sometimes if discussing something I need to be there for or if you are discussing GRDDL for example
ben: that mailing list rdf in html has been almost all rdf/a. we'd love to have you for those (or part of them)
ACTION: BenA set a time for the RDF-in-XHTML TF telecons
guus: we will send you the minutes of this meeting
danbri: was surprised by the bnode support
being lost, bnodes very important
... do we expect RDF/A documents to be GRDDLable? and is there anything we need to do
... jeremy and maxf have made xslt...what do we need to do with the schema?
... decide that that's the way to make these work together?
danc: qnames may make this tricky
ACTION: the rdf in html tf to discuss whether GRDDL needs to work on XHTML2 documents
<DanC_> the practicality of using GRDDL on RDF/A documents is impacted by the use of qnames in content
(was that an action?)
<danbri> (jjc, do you have an xslt that can do the qname thing? does it require xslt2?)
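The qname-in-content problem DanC raises can be sketched in Python: a generic transform sees an attribute value like "dc:creator" as an opaque string, and must track the in-scope namespace declarations itself to expand it to a URI. This is a hedged illustration only; the document and prefixes are invented and this is not the TF's XSLT.

```python
# Sketch: expanding a QName found in attribute *content* requires tracking
# the namespace bindings in scope - something plain string processing
# (and hence naive GRDDL/XSLT pipelines) cannot see. Invented example data.
import io
import xml.etree.ElementTree as ET

DOC = """<html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:dc="http://purl.org/dc/elements/1.1/">
  <head><link rel="dc:creator" href="#me"/></head>
</html>"""

def expand_qnames(source):
    """Yield (qname, full URI) pairs for rel attributes, using the
    namespace declarations seen while parsing."""
    ns = {}  # prefix -> namespace URI (flat map; real scoping needs a stack)
    out = []
    for event, payload in ET.iterparse(source, events=("start-ns", "start")):
        if event == "start-ns":
            prefix, uri = payload
            ns[prefix] = uri
        else:
            rel = payload.get("rel")
            if rel and ":" in rel:
                prefix, local = rel.split(":", 1)
                if prefix in ns:  # binding known: expand to a full URI
                    out.append((rel, ns[prefix] + local))
    return out

print(expand_qnames(io.StringIO(DOC)))
# the dc: prefix resolves against the in-scope xmlns:dc declaration
```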
<benadida> Ralph suggested naming the various documents so we know which ones we're referring to
ralph: is confused; if rdf/a were standalone, then we would just reference it from the xhtml2 document
<jjc> my rdf/a thing was xslt2, but then I've forgotten xslt1
mark: rdf/a not even a working draft so can't
reference it, timelines are wrong.
... ideally it would have been published first and referenced
<RalphS> it appears that we need two names to be able to refer to a standalone module and a module of XHTML2
Mark: there are actually 3 modules of XHTML2
[HTML WG leaving]
----
GRDDL discussion - future of GRDDL
ben: what needs to happen to the GRDDL document to
have it go in a recommendation direction [?]
... we want feedback from the WG on bringing GRDDL to rec track
danc: the process of note/rec etc is a means
to an end for danc - get people to publish rdf data. rec track makes it
easier to justify the time on testcases etc
... q: is this a 'best practice' for the sweb?
... wondering if rec might bring people out of the woodwork
... to get feedback
davidw: personally thinks GRDDL is cool, and would like to see it on rec track, but not sure if it's in the charter
ralph: charter allows for recommendation, specifically for the embedding issue
davidw: withdraws comment
jjc: I need to talk to colleagues about this, can't respond at this time
danbri: comments would be good to hear e.g. from jena developers
ben: yes re comments, but we don't have anything else for pre-xhtml2 without it
ralph: this could be addressed by a note
<DavidW> Can someone please show me where in the SWBP Charter we are allowed/expected to solve the RDF embedding question?
ralph: the TF is inclined to move forward with a new version of the document; we need to say in there what plans are
<jjc> The Working Group will, in conjunction with the HTML Working Group, provide a solution for representing RDF metadata within an XHTML document.
<jjc> 1.2.2
<DavidW> "Produce a Working Group Note on guidelines for transforming an existing representation into an RDF/OWL representation."??
ralph: does the wg share the same opinion as the TF?
guus: what does the TF think it will take in resources to go to rec produce new draft - enough recources?
danc: would like to go around the table and see if it's important to people
jeremy: thinks if GRDDL was rec track it would
make a difference to whether to implement it
... guesses not that high priority for jena team
<Zakim> danbri, you wanted to ask (if we want actual discussion of GRDDL detail) whether GRDDL works fine w/ XSLT1 and XSLT2; spec doesn't mention version currently, perhaps we consider
danbri: if it's developers vs publishers, he
would be hard on the developers
... likes the idea of a big push, e.g. for foaf data
... would like to know how hard it is for developers to do it, what it entails
gavin: makes me think of blogs...does it make sense to ask member and non-members to see if e.g. rdf comments would be useful?
danc: talked briefly about this - depends if
blogs produce xhtml or not
... worth asking them
pepper: still not sure why we can't just use RDF/A
ben: rdf/a only works for xhtml2, not xhtml
danbri: other things e.g. topicmaps could use it
danc: could just change schema documents and then harvest rdf out of them
tom: dc has an old spec for rdf in html using metatags. not sure how many people actually embed it. would need to check
ACTION: Tom Baker ask DC colleagues if many use rdf inside html
davidw: definitely a use out there - for transforming large volumes of web data
jeremy: what's the takeup of the note?
ben: feeling in the community that GRDDL is not
'official'
... maybe all we need to do is endorse it but personally think people are waiting for rec track
gavin: if we could get all the rdf out of comment blocks then it would be very valuable
danc: feedback we've had is would we need to fix all the html? if so, that's a big problem
fabian: 90% of the documents we deal with are proprietary, so don't embed it; for the educational materials we deal with it would be good added value
ralph: chicken and egg problem - we're not
seeing the demand for it because they don't know about it
... at best we can ask them to change once
... how fast do we think people who want to put metadata in documents will move to xhtml2?
... dangerous to retrofit something to existing documents - the authors of the documents didn't necessarily agree to the new contract implied, especially if a random document
gavin: what would be the second change?
ralph: if we ask them to try something experimental to see if it should become a rec, that implies another change
jeremy: worth waiting another year for RDF/A, if only change once
pepper: if GRDDL worked with HTML not XHTML then it would really take off
danc: at the moment it just uses xslt so won't
work
... not sure how it would work, need a parser for bad html
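DanC's point about needing a parser for bad HTML can be sketched in Python: XSLT requires well-formed XML, but a forgiving HTML parser can still recover metadata from tag soup. A hedged illustration; the malformed document below is invented.

```python
# Sketch: extracting <meta> data from tag soup that would break an
# XSLT/GRDDL pipeline. Uses the stdlib's tolerant HTML parser.
from html.parser import HTMLParser

SOUP = """<html><head>
<meta name=author content="Alice">
<meta name=keywords content=rdf>
<p>unclosed paragraph
</head>"""  # not well-formed XML: unquoted attributes, unclosed tags

class MetaScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # collect name/content pairs from any <meta> element we encounter
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"]] = d["content"]

scraper = MetaScraper()
scraper.feed(SOUP)
print(scraper.meta)
```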
guus: not enough convincing usecases at the moment - would the TF provide those?
ACTION: Gavin find out from his community and contacts if they have usecases
danbri: was thinking what it would be like to wait for xhtml2 in order to use rdf in html: I'd choose GRDDL if I have to choose one
jeremy: xhtml2 documents are immediately deployable because css can be read by existing browsers
Danbri: even the linking stuff? it would be nice if links between pages in xhtml2 still worked in existing browsers
guus: suggests TF waits for the input from those two actions and maybe goes to find more usecases itself
<danbri> jjc, that's v interesting; i'd like to see the detail if you've got a pointer
guus: we can use the final slot tomorrow to discuss it if we have any more information
Chris: I would like to propose to move two
documents to Note
... first is "Representing Specified Values in OWL" -- aka "value partitions"
Guus: move to Note requires that the WD has been stable and any changes are minor
Chris: current WD was published on 3 Aug
... editorial changes are cleaning up terminology
+Elisa
+Natasha
<DanC_> (who else has read it?)
<libby> deborah
<libby> (sorry)
<DanC_> Representing Specified Values in OWL: "value partitions" and "value sets"
<DanC_> W3C Editors Draft 02 March 2005
+Deb
<DanC_> (2 march... today's the 3rd)
<Natasha> I can barely hear what people are saying in the room
Natasha: the changes since 3 August are more than editorial
Chris: the content has not changed, just the way it is organized; just the structure
+Alan (earlier)
Alan: the only major change was 'feature' to
'quality'
... breaking things into bullets, revising the diagram
... put into the diagram the notion of disjoint by default
PROPOSED: to conclude work on Representing Specified Values in OWL: "value partitions" and "value sets" by publishing it as a Note, contingent on confirmation by Mike Uschold that changes since 3 Aug are editorial
<dlm> hand raising from dlm
so RESOLVED
Chris: moving on to Classes As Values
... changes in organization and wording
... Mike proposed some more editorial changes last night
... specifically, for the considerations sections, making each pattern consistent
<FabGandon>
Mike: and rephrasing approach 4 to improve clarity
<Natasha>
DanC: has the WG shopped this around and gotten feedback that these are useful?
Alan: we got a flurry of feedback in September
... we've presented this in tutorials
[[
ref earlier "value partitions" draft, the path to the current editor's draft is
-> SWBPD Home Page
--> OEP TF page "editor's draft")
--->
---->
]]
DanC: what about names for patterns?
Natasha: I thought about it, but didn't finish
[[
the path to the document currently under discussion is
-> OEP TF page
--> Representing Classes As Property Values on the Semantic Web 2 March
]]
David: there are open issues in the Status of
this Document
... e.g. a promise to develop a dictionary of terms
... do you have a schedule for this?
Chris: yes, we are working on this
... we had hoped to have a glossary by this meeting
David: the highlighted terms are a to-do item
Chris: the highlighting and the to-do list will
be removed
... when we publish the Glossary, we will make it consistent with the usage in this document
<DanC_> (ah... yes, having the TODO in there vs proposing to conclude work had me scratching my head too. moving the TODO list to the TF page would make sense to me)
David: what about the first bullet? ["identify
several OWL DL compatible approaches..."]
... for a document that will live longer than this WG, I would prefer that the to-do list be removed
ACTION: Natasha remove the 'Open issues' from the Status of this Document
ACTION: Chris move Classes as Values and Value Partitions to w3.org
Mike: Alan highlighted pros and cons, that
seems to be useful
... are people comfortable with using this approach in Classes as Values?
David: I found the pros & cons very useful
Guus: for Classes as Values I think there should be no opinion
Mike: not saying 'good' or 'bad' about the pattern overall
Guus: I feel strongly we should stay with a
neutral approach
... this Note is about DL vs. non-DL
... it is dangerous to make subjective statements here
Mike: is saying "maintenance is costly" too subjective?
<DanC_> ah... "# There is a maintenance penalty"
David: saying "this is expensive to maintain" might require further review
Natasha: I would not want to make a judgement
for everything
... some cases are obvious already
... I prefer a neutral approach
PROPOSE to accept contingent on editorial changes to be proposed by Mike and accepted by Natasha
<dlm> no objection
(vote by show of hands)
Evan: this seems to be a convoluted process
... there seem to be substantial structural changes happening
... we're making a judgement about the nature of these changes
David: specifically, we just discussed changes to the value judgements in the document and decided not to make such changes
Alistair: abstain
RESOLVED to accept contingent on editorial changes to be proposed by Mike and accepted by Natasha
<danbri-laptop> (re wordnet, i've not studied Aldo's new work)
Chris: the N-ary relations draft is still
undergoing change
... this document will have content changes
... Ralph has the action to review this when it is ready
... also a new editor's draft on simple part-whole relations
... new draft co-edited by Alan and myself ready for comments by others
-> Simple part-whole relations in OWL Ontologies 1 March
Guus: volunteer to review
Bill: volunteer to review
Chris: I hope soon after the OEP telecon 2 weeks from now that this will be ready for review
<DanC_> Last-Modified: Thu, 03 Mar 2005 16:05:53 GMT
<DanC_> (though it bears the date 1 Mar 2005)
Elisa: will be spending time with Evan discussing units and measures note
Evan: I have a new task to work on this at NIST
... I've been looking at a lot of OWL ontologies now
... there are a lot of OWL ontologies for units, well fleshed-out
... first task is to produce a set of criteria for evaluation
Evan: the WG is not supposed to pick "winners", correct?
Guus: you could propose some minimal criteria for usefulness
Deb: GEO group has a starting point for units and measures in OWL also
Evan: based on ISO work?
Deb: not sure -- will ask
Evan: my intention is first to develop some evaluation criteria
ACTION: Evan and Elisa develop criteria for evaluating units and measures ontologies
<dlm> is the working group meeting I am at . they have starting points in owl for units and measures, numerics, scaling, and comparators
Guus: I see 3 types of things that could go
into the note
... a generic schema for units and measures
... initial examples from Tom Gruber
... 2. actual units and measures themselves
... 3. patterns for using these; showing how to apply them
Guus: could concentrate on some at first
Elisa: there are hundreds of units we could
consider
... so it would be helpful to narrow the scope
Chris: still waiting for Jerry Hobbs to join
the WG [to work on time ontology]
... have pinged relevant AC Rep
<DanC_> (Chris, I'm kinda motivated to help with getting Hobbs to join the WG; I might have time to phone his AC rep)
further discussion scheduled for 12:00-1:00 EST tomorrow
David: what is the publication plan for the glossary? wikipedia?
Chris: not sure, will link somehow from WG pages; expect it to contain ~20 terms
Guus: maybe include as an appendix in future Notes
Guus: up until a few weeks ago we had little
input
... now we have a lot of input; still processing it
Chris: I read one of the documents
<DanC_> from the agenda...
<DanC_> [13] WNET: Ontowordnet
<DanC_>
<DanC_> [14] WNET: WordNet data model:
<DanC_>
<DanC_> [15] WNET: ISLE lexical entries
<DanC_>
Chris: I read the 'mapping' document
... reconciling with good ontology practice
... recognized semantic issues with the toplevel of wordnet
... all good considerations from my point of view
Guus: good action list was developed at Bristol
f2f
... would be valuable to merge this
DanBri: I am excited to see the ontologized
approach moving along
... but I'm worried that we're lurching around; feels like independent academic research being reported to the WG
... how can we better work together to avoid 2-3 month gaps
... how does Brian's work fit with Aldo's?
Guus: Aldo's email suggests he is building on Brian's work
DanBri: I would like to see more of the discussion on the mailing list
<danbri-laptop> (I wonder whether a dedicated mailing list might help provide a place for dedicated wordnet/semweb collaboration...)
ACTION: Chris to ask Alan to take over the Qualifying Cardinality Restrictions Note from Guus
<Zakim> DanC_, you wanted to ask about previewing the XML Schema discussion, and noting some DAWG stuff
DanC: SPARQL has a lot of symbolic matching
... but if a variable binds to a number you can test, e.g. greaterThan
... a set of test cases is being written for SPARQL
... if SWBPD wants to get involved in this [ref. XML Schema datatypes], this would be a good time
JJC: I met for an hour this week with Don
Chamberlin
... found no big disconnects
... the key issue appears to be at the semantic level of RDF datatype reasoning
... e.g. are "1.0"^^integer and "1.0"^^float equal? syntactically yes, real question is at the semantic level
... we're unlikely to reach an answer before SPARQL goes to Last Call
... I don't feel this open issue is a show-stopper
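JJC's syntactic-vs-semantic distinction can be sketched in Python: term equality compares lexical form plus datatype, while value equality compares the parsed values. These are minimal stand-in types invented for illustration, not any real RDF library's API.

```python
# Sketch: two notions of literal equality. "1.0"^^xsd:decimal and
# "1"^^xsd:integer are different *terms* but denote the same *value*.
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class Literal:
    lexical: str
    datatype: str

def term_equal(a, b):
    """Syntactic equality: same lexical form and same datatype URI."""
    return a == b

def value_equal(a, b, parse):
    """Semantic equality: parse each lexical form and compare the values."""
    return parse[a.datatype](a.lexical) == parse[b.datatype](b.lexical)

parse = {"xsd:decimal": Decimal, "xsd:integer": int}
a = Literal("1.0", "xsd:decimal")
b = Literal("1", "xsd:integer")
print(term_equal(a, b))          # False: different terms
print(value_equal(a, b, parse))  # True: same number
```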
DanC: big design principle was to import from XQuery
<Zakim> DanC_, you wanted to give an example: does this query win or lose? ... AND "1/1"^^my:rational != "2/2"^^my:rational
JJC: in discussing with Don Chamberlin and the SPARQL editors, when we got to a hard question the answer in XQuery was "we structured the language so you can't ask that"
DanC: my example has more to do with open-world vs. closed-world reasoning
<DanC_> "1/1"^^my:rational != "2/2"^^my:rational
<DanC_> not("1/1"^^my:rational = "2/2"^^my:rational)
DanC: one design moved the not to the outside;
inner returns False as my:rational was not recognized
... inner might instead return "don't know"
Stephen Harris: XQuery may use some variation on returning "don't know"
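DanC's open-world point about unrecognized datatypes can be sketched as three-valued logic: when my:rational is foreign, the inner equality test can return "don't know" instead of False, and negation must propagate that. All names and values below are invented for illustration.

```python
# Sketch: three-valued evaluation for "1/1"^^my:rational != "2/2"^^my:rational.
# Closed-world design: a foreign-datatype test is simply False, so its
# negation is True. Open-world design: it stays "unknown", and so does not().
UNKNOWN = "unknown"

def eq(a, b, known_datatypes):
    """(lexical, datatype) pairs; 'unknown' if either datatype is foreign."""
    if a[1] not in known_datatypes or b[1] not in known_datatypes:
        return UNKNOWN
    return a[0] == b[0]  # crude: compare lexical forms of known types

def not3(v):
    """Three-valued negation: not(unknown) stays unknown."""
    return UNKNOWN if v == UNKNOWN else (not v)

x = ("1/1", "my:rational")
y = ("2/2", "my:rational")

inner = eq(x, y, known_datatypes={"xsd:integer"})
print(inner, not3(inner))  # under the open-world design both stay unknown
```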
JJC: I also took an action to research how
OWL-DL handles unknown datatypes
... I came away from yesterday's discussions [with SPARQL editors] without a lot of anxiety
DanC: ref. yesterday's plenary discussion about
versioning, OWL has some things; might make this more visible
... could suggest to TAG to look at using OWL versioning for other things
<danbri-laptop> see
Guus: OWL versioning was simple to do
DanC: but the solution is relatively unknown; might be worth an article
TomB: versioning seems to be in scope for Vocab
Management
... would be nice to find a common mechanism that works for SKOS, FOAF, ...
DanC: TAG next meets in June
<danbri-laptop> (hmm OWL Versioning is mixed up in the Full vs Lite/DL design... Annotation properties etc)
<danbri-laptop> [[
<danbri-laptop> <owl:AnnotationProperty rdf:
<danbri-laptop> <rdfs:range rdf:
<danbri-laptop> </owl:AnnotationProperty>
<danbri-laptop> ]]
<danbri-laptop> etc
TomB: DCMI has a versioning model but we haven't yet figured out how to formally declare it
TomB: OWL would be one candidate way to express this
<DanC_> 7.4 Version information in OWL Reference
Chris: but you wouldn't be likely to use the
built-in OWL versioning mechanism
... OWL version has no semantics; it is only annotation
... I expect [DCMI] wants to write a schema for versioning
DanC: I referred Henry Thompson to Jeff Heflin as the source of the OWL versioning design
Guus: that was correct
[adjourned to 0900 Friday morning]
-> Re: XML Schema Datatypes and the Semantic Web [Dave Peterson 2005-01-31]
<MSM>
--- xml schema problems ----
<RRS> XML Schema Datatypes in RDF and OWL
jjc: document has not changed much since the
first version
... there have been editorial changes, except for 'when to use what'
... that is the only major change
... question: do we publish it?
... two comments: hayes' dawg review
... one small comment from peterson on a mistake i made
... the document was initially for reviewing the datatypes and the question: when are two values equal?
... the rdf and owl semantics do not specify that (for float and ints for example)
... xquery/xslt had a problem with the duration type
... durations were pulled out from rdf/owl, but xpath2 solves that and the text refers to that
... section 5, on the use of numeric types, is for a different set of readers
... it gives some suggestion on when to choose what
... on user-defined datatypes the problem is: xml schema gives users a mechanism to define their own datatypes
... the rdf/owl design requires datatypes to be identified with URIs; xml schema does not necessarily do that
... the two will not work together
... we decided to postpone this problem
... there are ways to address this, the issue is more that it covers too many specs
<RalphS> definition of adultAge just prior to
jjc: daml+oil uses the name of a schema type:
the URI of the document plus '#' plus the name to address an element
... that works, there are implementations, but it seriously fails to align with the recs
... the problem is what the frag id is in general; the architecture says that it is up to the document what the frag id is
... the daml+oil solution is nowhere close to what the xml or the xml schema solution might be.
... the second solution is to use xml schema component designators from the xml schema group, currently a stable working draft
msm: the binding vote should be close, close to be in last call
jjc: what we trying to do is a pretty basic use case to the xml schema component designator
<MSM> Current SCDs editors' draft:
jjc: however, because of the generality of the solution, the frag id is pretty complex (based on xpointer)
(jjc shows the example in the document)
msm: the change we have made is to make the expression a bit simpler, it will look much more like an xpath expression
jjc: this is still more complicated than the
daml+oil solution, and has difficulties when used with n3, which uses qnames
... after the ':' n3 requires n3 names, which do not include '(' and other characters
<MSM> ...#xscd(/type(adultAge)) becomes ...#xscd(/type::adultAge) -- or, if the type adultAge is assigned to a namespace bound to prefix 'p', ...#xscd(/type::p:adultAge)
jjc: so it will lead to problems with deployed
formats
... so it is ugly, but the generality is attractive
... there is an issue which is at the heart of pat's comment: according to xsd the component designators are for the simple type definitions
msm: no, the simple type definition is an abstraction
<PatrickS> Does it identify something that is a member of rdf:Datatype?
... the abstractions are what a schema is made of
msm: the reason it took so long is the
theological work to say that we are pointing at the abstraction and not the
xml
... i am not sure whether it was crucial but this is it
... specifically, the phrase "i.e. referring to the definition rather than to the type defined." is wrong
jjc: what is clear is that the way rdf/owl
talks about datatypes means that the theological debate is probably unnecessary
... the simple case should work
... it does not seem that hard
... i have looked at the xml schema solution and the daml+oil
<MSM> The "simple type definition" is an abstraction (name, base type definition, facets); we call it a "definition" to distinguish it from the value and lexical space which follow logically from the base type and facets. (Distinction between intension / extension)
jjc: rfc2396 says you take the url, you get
a document with a mime type, and that tells you how to interpret the frag id
... the xml schema documents are xml documents (application/xml); the mime type permits a bare name, there is a certain amount of deployed experience
... with bare names: a name after the '#' that is an xml id, so the frag id refers to the xml element
<RalphS>
jjc: we could modify the xml schema to put an
id on
... then the daml+oil solution is close to a solution. At least to me it does not seem so theologically unsound to do that:
... to use the same uri to address the datatype described by that portion of xml
jjc: pat makes a distinction between the xml
datatype and the note in the rdf semantics
... in my view the id solution is probably a good one. the xml schema designators one is the general solution, but if you own the xml schema file, then using id seems better
... however, there is a theological debate around this
... that is the issue
guus: your proposal is to say in our document that you stick an id into the xml schema if you own it
<RalphS> Jeremy's reply to Pat Hayes' comments
jjc: the document seeks opinions. My personal opinion is that the use of the id and the designator solution is optimal
timbl: rdf/xml says that its ID overrides the
xml things
... schema would do the same thing
<RalphS> Pat Hayes' comments
timbl: so you could not use the bare names -
you could not use them to refer to a chunk of xml
... would that be such a departure that it would horrify?
... a schema could have a MIME type definition which says that the fragids identify the type, not the definition of it (like for rdf/xml)
msm: what could help the working group: what
has thus far kept us from going is that registering a mime type is a minefield
... the procedure is now shorter, so that may be feasible, and I can take that back to the group
... but there is a concern: i can imagine wanting to talk about both the simple type definition and the xml element used to declare it
... if we make the ability to point dependent on the mime type i lose the ability to refer to the xml element itself
msm: jjc's id solution refers not to what
we have but to something that is adjacent, because you know what it means
... in the strict sense it relies on a processor
... i would suggest that, strictly speaking, all of the bits are in the scd (schema component designator): it is a problem that it is longer via xscd, but this tells you exactly via the xpointer mechanism what the exact id is
... it addresses the fact that it refers to an element adultAge.
... in any practical context, they add a prefix to avoid name collisions
... so I agree it is longer, but it is semantically simpler
Mike: i am all for having semantically meaningful names; can we handle that using a namespace?
jjc: in rdf/xml you can use entities, and that
works fine
... but n3 does not have a qname abbreviation
... if the last character is not a proper one
patrick: is there a way around that, updating n3?
timbl: ')' is used for punctuation, you could put an entire URI there, you could talk to Roy to see how that framework works
patrick: it is really important not to lose focus on how people using datatypes can do that
<MSM> One possibly relevant fact: like XPath, SCDs will have short-forms, so "/type::p:adultAge" can be abbreviated to "/~p:adultAge" and "/attribute::p:adultAge" can be abbreviated to "/@p:adultAge"
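JJC's N3 objection can be sketched in Python: N3 can only abbreviate a URI as prefix:local when the local part is a valid N3 name, so fragment ids like "xscd(/type::adultAge)" defeat qname abbreviation. A hedged illustration; the regex below is a simplification of the real N3 grammar and the namespace is invented.

```python
# Sketch: can a URI be abbreviated as an N3 qname? Only if the part after
# the namespace is a legal N3 name - '(' , '/' and ':' are not name chars.
import re

# Simplified N3 name production (the real grammar allows more characters).
N3_LOCAL = re.compile(r"^[A-Za-z_][A-Za-z0-9_-]*$")

def can_abbreviate(uri, namespace):
    """True if uri = namespace + local, with local a (simplified) N3 name."""
    if not uri.startswith(namespace):
        return False
    return bool(N3_LOCAL.match(uri[len(namespace):]))

NS = "http://example.org/types#"  # invented namespace for illustration
print(can_abbreviate(NS + "adultAge", NS))               # plain id: ok
print(can_abbreviate(NS + "xscd(/type::adultAge)", NS))  # SCD chars: not ok
```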
patrick: the best practice using xml schemas
should fit in a larger way of using datatypes in general
... you can use some other mechanism (java etc), we should also talk about abstractions and not only that particular processor
<Zakim> timbl, you wanted to suggest regarding a schema as a higher-level language than an XML document
<danbri2> (I'm v intrigued by "The use of the # name also allows content negotation among many languages")
timbl: you can look at a schema as representing an
infoset
... you can always go back to the source and make a link
... the schema defines types; it is not xml, it is a schema language
... if they want to use it as xml, then it could be served as xml
(scribe has given up...)
msm: as long as there is a way to get 'back',
you want to optimize and choose the optimization points wisely, so optimizing by
choosing the declaration rather than the xml is o.k.
... but there are some unexploded mines :-(
<RalphS> MSM: typo in 2.4 -- should be base="xsd:integer"
msm: consider your example: it points to another
type (integer) for the restriction
... any processor would have a complete understanding of adultAge
... the problem is when the base is not integer, but my:humanAge, for example
... if humanAge is declared in another schema document
... then depending how this is done, I may end up combining this with version 1 or version 2
... so the result is that the source declaration can end up with different interpretations
msm: strictly speaking the definition defines
adultAge in context of the full schema
... we do not have version solution, so we do not have a good solution for this
... if the base type is an xsd: one or is in the same document, you do not have a problem
patrick: in the example you give, I would
assert that it is not a problem as long as you use different names
... you are talking about two different abstractions
ralphs: tim's notion to use mime types to explain the semantics of the document is fine, but maybe there may be a tag issue on whether the semantics are carried through the namespaces
<MSM> Patrick, yes, in principle. But consider situations like that of the HTML namespace. The abstraction 'p element' is (according to community practice) regarded as a single abstraction. But the legal contents are specified one way in the transitional definition, and a different way in the strict definition.
IvanH: do you want to use xml:id rather than id ?
<timbl> schema:id
<danbri2> (tim's right; it's a philosophical not theological discussion)
jjc: the second issue is practically more difficult
... it is the comparison of floats and decimals
... within xml schema there are primitive types and the other simple types that are derived by restriction
<RalphS>
jjc: eg decimal is arbitrarily long
... there are around 17 primitive types
... all the relevant specs are clear that when you derive from a type the underlying semantics do not change
... rdf/xml is agnostic on the issue whether a 1.0 integer is the same as a 1.0 decimal
... xml schema is geared to a specific use case: schema processing
... in that case it is fine
msm: we are required to describe validation
precisely
... the schema position is: yes these are quantitative values, 1.0 has obvious relationships to 1
... they are not identical for schema purposes
... and applications may (eg, xpath2.0 operators) do more
... schema is a bit like an assembly language
... nothing prevents an application to define casting
... rdf can do that
jjc: solution #1: to do exactly what schema
does, ie, they are different
... solution #3: xml schema gives you a mathematical specification; use that
jjc: that would also be a very purist line, but
may not be useful, it has surprises
... solution #2: xpath has solved the problem, they have defined 'eq', so use that
... although there are surprises because 'eq' is not transitive, for example
... it might be a show stopper
... at some level a choice has to be made
... we may be lucky and get a good feedback
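The transitivity surprise jjc mentions can be sketched in Python: if mixed decimal/float comparison promotes the decimal to a float (roughly the XPath 2.0 style of solution #2), equality stops being transitive. Pure-Python stand-ins with invented values; not the actual XPath operator definitions.

```python
# Sketch: promotion-style equality is not transitive. Two distinct exact
# decimals can each compare equal to the same float after promotion.
from decimal import Decimal

def promote_eq(a, b):
    """Compare, promoting Decimal to float when the other side is a float
    (a rough stand-in for XPath-style numeric type promotion)."""
    if isinstance(a, float) or isinstance(b, float):
        return float(a) == float(b)
    return a == b  # exact decimal comparison

d1 = Decimal("0.1")  # exact decimal one-tenth
f = 0.1              # the nearest binary double to 0.1
d2 = Decimal(0.1)    # that double's exact decimal expansion

print(promote_eq(d1, f))   # True: d1 promotes to the same double
print(promote_eq(f, d2))   # True: d2 promotes back to the same double
print(promote_eq(d1, d2))  # False: the exact decimals differ
```

So eq(d1, f) and eq(f, d2) both hold while eq(d1, d2) fails, which is exactly the kind of surprise that makes option #2 awkward for semantic-web reasoning.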
guus: the schema people would prefer #1?
msm: none of the 3 solution would cause a problem for us
IvanH: DAWG has chosen the XPath solution
... it would be nice if SWBPD did not give totally different advice than DAWG
jjc: DAWG is doing something slightly different; they're specifying semantics of a query language, not of the underlying data
IvanH: they're defining a quality of the underlying resources
jjc: this may rule out the purist option
msm: strictly speaking what qt does is to
define an operator called 'eq' and you may define it as you want
... so you could define a set of operators
<Zakim> danbri, you wanted to ask which xpath (1.0 or 2.0 datamodel) DAWG use, and whether that matters here?
danbri: the dawg guys are commited to xpath 1 or 2?
jjc: 2, they are using the operators of xpath2
guus: does not that influence your choice?
jjc: it will make the xpath solution more attractive, but may not be the only one
guus: but it would be very strange if there are two different interpretations
... might be useful to talk to the dawg people on that
jjc: i have already had some discussions, but nothing conclusive yet
... maybe we can add some indication of preference saying that a direction works better on sparql than some other
... but it would be good to publish this soon
patricks: when you look at the options, it is important to get the best solution without breaking owl reasoners
... if you choose only those that are safe for owl that would be good
... this chunk of useful equivalence is safe for owl reasoner
... i would encourage those that are involved in owl reasoners to comment
jjc: i think we could comment what we got after getting the comments of today
guus: I would like to publish now
... we could get general feedback
ACTION: Jeremy to incorporate the comments + pats' comments + peterson's comments
jjc: we could slightly change the intent saying that 'currently solution this or that is best'
guus: it might be clearer for feedback if editorial preference is listed. Evan should review again
jjc: realistically I would hope to get back end of next week
guus: would be nice to make a decision on publishing on the next telco
Bijan is given floor
Bijan: Speaking about WSDL => RDF mapping
Primary thing being mapped is abstract component model of wsdl
components have component properties that relate them to sets of components or components
either a straight mapping where all the details of the wsdl component model are expressed in RDF/OWL
OR create a simpler model that glosses over some of the details of the wsdl model but expresses the key concepts adequately
using the more faithful mapping wsdl-straight requires good blank node support
in wsdl-straight property names tend to relate to plurals, whereas in wsdl-ont-nice a property name links to a single component
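[Editorial aside: the contrast between the two mapping styles can be sketched as two data shapes. The property names and component values below are hypothetical illustrations, not taken from the WSDL specification.]

```python
# "straight" style: plural property names point at sets of components,
# which in RDF typically means intermediate (blank) nodes.
description_straight = {
    "wsdl:interfaces": [              # plural property -> set of components
        {"wsdl:name": "StockQuote"},
        {"wsdl:name": "Weather"},
    ]
}

# "nice" style: a singular property links directly to a single component,
# glossing over the set structure but keeping the key concept.
service_nice = {
    "wsdl:interface": {"wsdl:name": "StockQuote"}  # direct, single-valued
}

assert isinstance(description_straight["wsdl:interfaces"], list)
assert service_nice["wsdl:interface"]["wsdl:name"] == "StockQuote"
```

The "nice" shape is what makes queries easier to write, at the cost of fidelity to the component model.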
Patrick: on cc/pp
do not underestimate impact of model on query
nice approach makes it easy to write queries
jjc: the difference is only mapping of forall contains
bijan: but then the english definition text differs from RDF model
timbl: keep it simple stupid, make mapping automatic,
... strip out all the sets
chrisw: why left hand side chosen? i.e. the wsdl-straight
there is also a Z notation for wsdl-straight
bijan: issues to do with faithful as in as close to transcription as possible
... possible way forward use nice model with straight model as informative appendix
<pepper> slides for RDFTM:
<RalphS> (Valentina is a new member of the WG)
steve: estimated size of audience for survey is 50 people
... test cases not complete nor intended to be
... overview of previous proposals for RDF TM interop
... overview of evaluation criteria
... fidelity considered important
<RalphS> snapshot of Steve & Valentina's slides for RDFTM discussion
<danbri2> (hi-fi vs low-fi slide is useful; low-fi is a lowercase-r-reification of Tm structures, rather than RDF that carries the sense of the original TM)
bijan: conclusion semantic mapping more important
... survey vs. tutorial focus
mikeU: have a few statements and references
Steve: OK
... Coverage of OWL?
... used as it can help with translation
DavidW: was OWL considered in any of the other work?
SteveP: no OWL in surveyed work
DavidW: Then not covering OWL is OK
... if OWL can address open issues then say that
... want to ensure that "constraint languages" (Topic Maps) interoperate with OWL
<Zakim> danbri, you wanted to note that introducing OWL concepts/facilities to TM community (and v-versa) would be a useful contribution
Steve: Topic Maps Constraint Language is a current work item in ISO
DavidW: feel that "Introducing OWL to TM" would be out of scope
... for guidelines that is useful, but not the survey
<Zakim> RalphS, you wanted to say this isn't a tutorial
Ralph: note that OWL might have helped somewhere, (as said above)
SteveP: OK to mention commercial implementations?
<RalphS> put a sentence in the appropriate places for each approach where it could have been improved by using something from OWL
RalphS: matter of degree. OK to say at least once that there are implementations and cite them,
... but repeated reference may be overboard
DavidW: for commercial implementations "see this reference" as opposed to mentioning it inline,
... esp. given editor is from the company mentioned
discussion of objectivity given mentioning of commercial implementations
guus: should be very clear that its an opinion section, separate from "objective" section
steve: will look at specific sections that seemed subjective and discuss
<RalphS> Natasha's mail
steve: will ask Natasha for specific sections
dan: important to be fair,
Danbri: the document could explicitly solicit pointers to other implementations via the mailing list, which would increase openness
steve: would like to consider input that has been published
Steve: I think we know about everything that's out there
steve: anyone not convinced for semantic mapping (one conclusion)
davidW: i buy the argument, but unclear on editor's position after survey
... how far could semantic mapping go in addressing problems
steve: semantic mapping is the only way to go, but don't know if 100% complete
... not sure a top priority
danbri: one style uses same namespace URI, the other shadows a similar one
steve: reusing vocabs and therefore URIs seemed preferable
<Zakim> danbri, you wanted to distinguish 2 kinds of semantic mapping
guus: on objectivity treat it in a mechanical fashion - this is our job, this is our approach - helps remove subjectivity
... can't imagine a solution approach in which OWL would not be helpful
Steve: agree
review of PFPS comments from 2001
<RalphS> RE: On the integration of Topic Maps and RDF [Peter Patel-Schneider 2001-08-01]
steve: should two test cases have identical information content
... think so, danB will write some
<RalphS> RE: On the integration of Topic Maps and RDF [Patel-Schneider 2001-08-21]
mikeU: may be forced to constrain one side based on the expressiveness of the other
guus: danger of having contrived test cases
<RalphS> RE: On the integration of Topic Maps and RDF [Martin Lacher 2001-08-23]
steve: test cases in survey intended to be informative regarding naturalness
... danb should be able to do a good job about expressing knowledge in RDF "naturally"
mikeU: could have middle ground by having same core example, and then growing the example in multiple directions depending on capabilities
steve: want to keep examples short for survey
ralph: like shortness
steve: started to develop some guidelines test cases
<RalphS> brief examples are fine to illustrate each of the approaches. for the final summary and choice of preferred approach, then it would be good to have a more complete example
danbri: even small examples can explore huge problems (from experience)
... some difficulties arise when using datatypes or URIs
ralph: some of the approaches are so obviously flawed that it doesn't make sense to beat on them with test cases that include data typing, etc.
steve: consider moving test case results to a separate document?
... prefer leaving them in
ralph, guus: up to you as editor
steve: are issues identified "requirements", if not what are they and where should they be documented?
<RalphS> for the purposes of this survey, if the audience is really 50 people, moving test results to a separate document feels like editorial busy-work to me
mikeU: nice to keep requirements separate
... but OK to start with looser ones and tighten up close to finishing
steve: starting to understand what issues will lead to requirements
... for example range/domain constraints not in topic maps, in RDF, how do we address those
... prefer to wait a bit and let the requirements arise as we work
mikeU: themes in considerations that are already there in document, could serve to capture that explicitly and introduce them in the beginning
... proposed a similar thing in classes as values note
... minor presentation thing to make easier to read
steve: OK
... importance of "naturalness" and "fidelity" (are they the same thing)?
... naturalness is "faithfulness" to the paradigm
... fidelity the correctness of the translation
mikeU: agree with idea of naturalness, but not made explicit in document. The word doesn't capture what you mean
... deeper problem is readability, when you use "semantic hacks" they don't translate well
... naturalness is not that important
guus: also much more subjective
... how do you measure that
steve: can you measure readability?
... agree w/ Mike, added a paragraph that discusses interoperability as it is impacted by "naturalness"
... e.g. using semantic hacks
danb: hard to measure a lot of these things, and good idea to focus on the terminology here
<Zakim> danbri, you wanted to prefer "natural" (or "faithful") over "fidelity", since latter appeals more strongly to concept of truth
steve: ok
... acceptable to require mapping information?
... key problem is that TM and RDF have different levels of semantics
... any triple could map to 6 different things in a TM
... can't know which unless you "understand" the predicate
... can get some information from the nodes
... acceptable to require mapping information? some believe can't be done w/o that
... sometimes can get information from a RDFS or OWL ontology
... do we need to be generic, ie applies to any rdf model, or require some semantics (ie in RDFS or OWL)
ralph: "required" is a difficult SW thing
... can discuss where this information might be if present,
... ie in the namespace document
... but requiring it doesn't seem so good
steve: take foaf:name - w/o semantic information it would map wrong using a default mapping
... but if foaf:name was a subproperty of rdfs:label, would work better
jjc: two issues I see
... wrong means (to me) contradictory, not "not the best"
... annotation on a schema may be third party, keep in mind open world about where annotations come from
danb: good point, grddl does this
... we are deciding who to make work for, vocab owners, app builders,
... prefer to focus on smaller group
... happy to require mappings, making sure they are consistent
<RalphS> when work has to be done, it's better to require it of the vocabulary owners not the (more abundant) vocabulary users
steve: would like to finish survey and move to guidelines quickly
guus: timeline for survey
... get consensus by next telecon (Mar 24)
... think its important to get TM feedback
<RalphS> release a new editor's draft a week before the telecon -- i.e. 17 March
steve: mar 17 is doable
guus: once that is in WD, OK to work on guidelines
... parallel is OK, too, can start today
... make finishing survey a top priority
steve: OK. That's our goal
... need evidence that he approves of this
valentina: will communicate this back to fabio
ralph: needs to be in mail archive
jjc: would rather have TF discussions on list, even in italian
... responds in italian w/o using colorful italian idioms
guus: important to make clear that process is open
steve: OK - partially due to under-familiarity with process
danb: skos has its own mailing list but archived at w3c
steve: thanks. Our goal to have guidelines ready for "extreme markup" conference in Aug
<danbri2> (we didn't discuss relation to ... maybe at lunch?)
<Zakim> danbri, you wanted to speak in favour of putting work onto schema authors over app developers and content consumers
<aliman> .. re subject indicators also see
<Valentina> did you follow the discussion? You could have called... :)
<Valentina> that is?
(jjc talking)
discussion of http range
everybody agrees that when you do a GET on an http URI you get a representation of a resource
jjc: dc:creator is a URI without a #
<DavidW> SUBTOPIC: HTTP range (# vs. /)
jjc: one school of thought says that, because slash URI is gettable, it necessarily ... scribe lost
timbl: http scheme is a scheme of documents ...
jjc: can slash URIs be used for abstract things?
... for this group the issue is important because dc & foaf use slash, but if http range goes with hash these things are broken
timbl: http slash uris necessarily denote documents (information resource)
jjc: interested in published subjects
... http range decision breaks pubsub
aliman: says no it doesn't
<DavidW> The TAG refers to this issues as "HTTP Range 14":
pepper: we have one class of things: resources (RDF speak)
... other class of things that have location (addresses)
timbl: information objects have information content
pepper: information resources necessarily have an address
timbl: but does the bible have an address?
... info resource is not necessarily addressable
pepper: direct and indirect identification of subjects
... but web has no mechanism for distinction
timbl: no, indirect identification we can do
example of the man whose name is fred
cf. use the URI to directly denote fred
davidw: if swbpwg has consensus on this issue, then Tim needs to come in and defend his position
... if Tim not available, who can proxy?
... actually probably me (davidw) ...
... when we looked at this in tucana we used hash uris
... but practical issue how to deal with mature vocabs that would be invalidated
jjc: also problem of large vocabs - large download problem
timbl: no reason to break up that document
... suggest you use sparql
or here's the algorithm to get a bit
jjc: so we could break up wordnet to make retrieval doable ...
but this seems to misrepresent knowledge
timbl: no keep the same namespace e.g. cyc can be broken up into chunks
david: way to subdivide the namespace?
... jjc said if you want to further divide a namespace you use a slash
jjc: if we define wordnet namespace with a slash ... (lost)
timbl: in webarch uris identify the files ...
in semweb architecture uris identify concepts
phil: we're saying there's no space for duplicity ....
e.g. MIME type, interpretation depends on context
timbl: URI identifies one thing only
pepper: shows slides how single URIref can be used to identify two different things ...
information resource is by definition network addressable ...
therefore you can use the network address as the identifier
but can also use the same URI as subject indicator ...
whatever you mandate people will use both hash and slash
timbl: wants to define a transition strategy to move foaf & dc to use a hash
... even if it involves building those two URIs into every single RDF parser
danbri: if WG writes note, would tag review it?
timbl: one of tag issues is written up as an argument tree ...
so there should be a paragraph number for your position
so tell me where you got to.
davidw: tim has a strong opinion which he has documented and which he has persuaded others in TAG ...
... all has been dealt with ad nauseum ...
... therefore we should read the existing decision tree and read all other arguments before we re-invent the argument wheel ...
... so before we take a position we should read everything !!!!!
tomb: tim's proposal would invalidate so many things for DCMI ...
lots of guidance documentation would have to be rewritten
therefore tomb says timbl's strategy would invalidate lots of DCMI
<danbri-laptop> anyone got the url for timbl's position tree diagram?
jjc: anyone else feel they are up on the issue?
Alistair: what if http mandate is not enforceable?
david: we should review the decision tree
<danbri-laptop> (background: "What do HTTP URIs Identify?" )
Alistair: I've recreated all these points over the past 6 months
... I'm worried about the social process of getting everyone to adopt a new solution
... each of the 3 philosophies feels consistent to me
... 1. Tim's
... 2. published subjects
... 3. "you can identify anything with http: but if it's not an information resource you should do a redirect'
<pepper> (tm background:)
david: philosophical issue: should w3c follow or lead?
<danbri-laptop> (SWBP might take the position that dc:title and foaf:Person terms _are_ information resources)
EricP: subject identifiers are pretty close to inverse functional properties in OWL
patrick: reiterate that all of the options are coherent, self-consistent models ...
are all consistent with current webarch also ...
question is not whether they are reasonable ...
but whether if we choose one over the other what will we break and what will we improve
bottom line is that industry has already decided - the semweb poster examples all use slash
and tim's approach to go to hash only is just far too expensive
so issue should be finally decided in favour of slash
pepper: it's a mess, too late to fix it, pragmatic issue, cannot force people to do something else ...
what happened to fragments? how to identify a fragment of a document? this is what the hash was designed for.
ralph: sympathise with tim's pain ...
conversation seven years ago, tried to persuade tim to tell us what he thought we should do ...
answer led me to encourage model & syntax WG to use whatever they wanted to use ...
but our understanding of these architectures evolves over time ...
tim has articulated a new position since seven years ago ...
things have tightened up since then ...
tag has not yet reached consensus because has representatives for lots of communities ...
perhaps strategy fwd for us is to recognise (1) there are existing applications that have made choices, and it would be unwise to try to get them to change ...
(2) but can say : from some time fwd the best practise is foo
but still don't force people to change
<danbri-laptop> note: Adobe XMP use /, see example
<danbri-laptop> xmlns:xap='" xmlns:pdf='' ...
<jjc> and
<jjc> <pdf:Producer>Acrobat Distiller 6.0.1 (Windows)</pdf:Producer>
pepper: but what about fragments?
ralph: not our problem
danbri: would we begin best practise or would we declare best practise
ralph: our responsibility to look very carefully at tag record
david: one option to start a TF?
ralph: joint TF?
phil: these are observations: ...
the point about fragments is very relevant ...
because you can conceptualise a fragment of a document to be a concept ...
(can see merit in tim's point) ...
second point relates to lead or follow ...
we are a community of leaders ... cf. community of users (follows) ...
we should look at the community of leaders ... examine their position.
david: there are times to lead and times to follow
<Zakim> aliman_scribe, you wanted to say in SKOS Core to say about skos:subjectIndicator
<Zakim> danbri-laptop, you wanted to propose exploring position that vocabulary terms are "information resources" in just the sense of timbl's
jjc: tag is divided
danbri: two things to say: ...
1. if we get this wrong we have a deployment disaster on our hands ...
lots of stuff has been written ...
cf. experience of dc namespace change and how long it took for change to propagate ..
dc dcterms foaf foaf-extension adobe xmp creative commons all use slash
2/3 - 3/4 of deployed semweb already uses slash
<RalphS> DanBri: XMP, RSS, ...
if we say: "change" without a compelling story we look stupid
david: and we slow down deployment
danbri: foaf files are interesting because they link to lots of other vocabs ...
could possibly get foaf users to change foaf, but then all the others too ... ?
jjc: but there is no compelling story
danbri: we need to appreciate the scale of the problem, several people fulltime for at least a year ...
if we get it wrong we hurt semweb ...
a lot of foaf stuff comes from perl scripts ...
but adobe have shipped applications - cost of change huge for them ...
phil: if we get it wrong and we get it late we hurt semweb
danbri: compromise position: fresh start for new namespaces
... also this topic is discussed in other fora ...
we are only concerned with uris for terms in semweb vocabs ...
could say that dc title and foaf terms are possibly information resources ...
david: summarise ... it's too late to lead ...
... live with what we have; if we force a lead then none would follow.
tom: remember when we established namespace policy ...
wanted to establish it with the 1.1 not in the URI string ...
but without that, the comment was we would compromise the integrity of dc, evidence for instability etc.
so trying to explain a change of this magnitude would be *extremely* difficult ...
it's a philosophical argument, would take a lot of resources ...
to make a change, but it would be a waste of resources, we have more important things to do ....
patrick: 1: (melodrama) if we say: thou shalt use hash, this would require significant corporate support ...
but it would receive significant corporate obstruction from not just nokia ...
its about efficient access esp for low bandwidth devices ...
2: the way you present you document makes the difference ...
WG should do a 'not bad practise note' ...
say: look, there are proven, well established practises in semweb, here are usecases and benefits for each solution, because the question is what is best when? (not either or). Nokia's position is that an http URI can be used to identify anything ... and there should not be any redirection ... so we need an efficient representation mechanism.
ralph: danbri said deployment disaster ... but we should distinguish between those changes that would require existing deployment to change ... and those that don't. the other side is ... in what ways would existing deployment break if we recommend a new model? existing apps would continue to function ... question is architectural truth and beauty vs. practical engineering. timbl is about truth & beauty & model-theoretic consistency, but it's the engineers that build the thing ...
<danbri-laptop> (my point on 'looking stupid' is not w3c group losing face, but the knock-on effect for the larger community around us who have championed the use of RDF these last 7+ years; they will feel betrayed, i fear...)
patrick: don't think it's just truth & beauty, it's about a particular truth & beauty; each one is consistent in itself and ... this WG SHOULD NOT USE 'SHOULD'
pepper: another problem with hash: server-side processing cannot be done with hash ... recent project defined 75000 terms ... went with slash because you can do server side processing; with hash you cannot resolve these things
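[Editorial aside: pepper's point rests on the fact that a URI fragment is handled entirely on the client side and is stripped before the HTTP request is sent, so a server can never dispatch on it. A minimal sketch using Python's standard library; the vocabulary URI is made up.]

```python
# The fragment ('#' part) of a URI never reaches the server: clients
# remove it before making the HTTP request, so server-side processing
# cannot distinguish .../terms#Person from .../terms#Document.
from urllib.parse import urldefrag

uri = "http://example.org/terms#Person"   # hypothetical vocabulary URI
base, frag = urldefrag(uri)

assert base == "http://example.org/terms"  # this is all the server sees
assert frag == "Person"                    # handled only by the client
```

With slash URIs, by contrast, the full term URI is part of the request path, which is what makes server-side resolution of 75000 individual terms feasible.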
Ralph: tim admits that this is a bug
<jjc> Proposed question for straw poll: can an http URI (without a hash) identify an RDF property?
<RalphS> like metadata vs data -- what is an insignificant difference to one community might be another community's important data
<danbri-laptop> +1 to pepper's point; same happens w/ wordnet-as-classes
phil: observation: lots of past good work, but here too much discussion about the past and not enough discussion about the future, so beware!!!
ralph: diff communities have different priorities
david: consider jjc's question ...
<DavidW> "MAY" to be interpreted as described in [RFC 2119].
jjc: rephrase as "Should the WG say an http URI (without a hash) MAY identify an RDF property in a conformant way"?
<danbri-laptop> "May ... identify in terms of (Berners-Lee et al) "
patrick: is it proper, given current standards & webarch, to use a slash uri to identify an RDF prop
strawpoll:
(including observers)
yes: 11
no: 0
what does conformance mean?
david: wg reconvenes, session on tag issue is adjourned
David: several points were made about server-side processing and the impact of certain URI usage
<RalphS> (TimBl left a while ago)
<danbri-laptop> draft idea: The WG believes that the practice of identifying RDF/OWL terms and vocabularies with non-# HTTP-URIs is consistent with and RFC 3986. It notes that such practice is very widespread, but that there remains some uncertainty in the W3C community on this topic and that this uncertainty is having a damaging effect on SW deployment efforts.
ACTION: Jeremy draft text for statement to TAG reflecting the opinion of the httpRange-14 breakout discussion
<danbri-laptop> ref also http-range-14
<timbl> Jeremy, the note will presumably describe an alternative architecture, and how it affects existing applications?
Guus: Have a specific problem to be addressed: session at W3Conf -> need presentation showing two applications.
Fabien: I provide presentation with recorded demos of a number of applications present in the blog.
Steven Harris: a list of projects is available at
Guus: should we go for a DOAP format?
Libby: criteria are really needed and should be clear (also opinion of Eric Miller). Example: open source only - may be too restrictive. Should we include resources for developers only? Resources for promotional exercise? Just examples in general? Is it possible to constrain the audience? Guus, you are the one to use it a lot?
Guus: Distinction like Company uses semantic web for internal systems. (close environment) versus used in open web environment.
Ivan: Fujitsu report is internal for instance?
Guus: Aerospace industry example ... no longer existing.
Bill: the internal vs. external distinction may not be relevant because an internal application may be affecting hundreds of people behind the firewall of the company.
Steven: a distinction could be "do you control the data?"
Libby: the real problem is to define the criteria for including/excluding someone in/from the repository
Gavin: it seems we are still trying to find criteria to narrow the scope. We must make the difference with finding criteria/attribute to view/navigate/sort database.
Libby: provide the list in OWL/RDF and readable format => list is unreadable => facetted browser would be much better. Not even sure we have to scale it down really.
Guus: application domain is another criterion.
Bill: yes it's natural / sensible ... for instances application is doing data mining / etc.
Guus: yes but also application domains e.g. medical domain / product selling / ...
Bill: protect privacy, etc.
Fabien: I use the application domain to answer questions for instance for the W3C communication (I am meeting with someone in Bioinformatics, what semantic web applications do you have in this domain?)
Gavin: aren't there other directories we could learn from, the way they categorize, from their schemas, etc.
Ivan: there are some e.g. Semantic Web Board, etc. - afraid of duplicating
... Dangerous path to count too much on W3C endorsement, and what happens when the TF stops?
Guus: ok let's stop the TF :-)
Ivan: who will maintain?
Libby: use DOAP and leave it to the users to maintain it.
Gavin: why do we need this repository?
Guus: a list of applications is the most frequently asked question by people!
Ivan: is it possible with DOAP to build a web site that is maintained via some community effort?
Steven: that's what we do in our project (AKT project) list of URL of descriptions scanned every night.
Bill: we want no maintenance.
Libby: we may be too ambitious. Small descriptions (a pointer and a sentence) only with a pointer and people can use it to harvest.
Ivan: but somebody has to do it? What will happen in three years?
Fabien: two different problems: get a list and find a way to maintain it after the end of the TF/WG.
Libby: may be we won't need it in three years. :-)
Gavin: for the description, how do we get it?
Fabien: just from the form to generate your DOAP file.
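[Editorial aside: a minimal DOAP description of the kind such a form might generate could look like the following. This is a hypothetical sketch; the project name, URL, and summary are made up, though the doap: namespace and properties are the standard DOAP vocabulary.]

```python
# Hypothetical sketch: turn form fields into a minimal DOAP description
# (serialized as Turtle) describing one project.
def doap_description(name, homepage, summary):
    return (
        "@prefix doap: <http://usefulinc.com/ns/doap#> .\n\n"
        "[] a doap:Project ;\n"
        f'   doap:name "{name}" ;\n'
        f"   doap:homepage <{homepage}> ;\n"
        f'   doap:shortdesc "{summary}" .\n'
    )

ttl = doap_description("ExampleApp", "http://example.org/app",
                       "A demo Semantic Web application")
assert "doap:Project" in ttl
```

Descriptions in this shape are what a nightly harvester could scan from a simple list of URLs, as Steven describes for the AKT project below.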
Gavin: simple interface to accept the submission that could be used by someone else in three years.
Ivan: is this something that could lead to a significant collection, not only for us geeks?
People are not interested in geeky stuff like FOAF, they are interested in the Photoshop example.
Gavin: distinction between projects and products. Our product line uses XMP; can I put everything inside?
Ivan: as an admin I would say please put only one item.
Gavin: pointer to a technology vs. project vs. product
Guus: is there a particular type of selection to show the added value?
Ivan: still in the phase where we have to convince people that there are a lot of applications out there.
Libby: maybe we should just focus on getting a number of them in the list. First priority.
Gavin & Ivan: only members should be able to put commercial products.
Libby: if we require the RDF description that may slow down the flow of descriptions.
Also recall that the criteria have to be precise.
Gavin: true if we could come up with the right criteria in the first place, but that won't happen. So it is not that important.
Libby: RDF and OWL applications only?
Ivan: for the time being only RDF and OWL.
Guus: I agree.
Bill: concerning the classification, right now we have not so many applications, so we may be trying to classify in the vacuum.
Fabien: an extensible flat list where users can add missing domains would be ok, if it grows then we can reorganize it later (topic ontology :-))
Libby: summarizing = we stay with web log + we try to set up some tools for task force administration tasks (accept a description) + provide support for DOAP + maintain a simple list of DOAP files in the blog.
Andreas: will the list be available in RDF?
Ivan & Libby: Yes
Libby: the RSS blog gets picked-up by Planet RDF.
Andreas, Ivan and Stephen:
RSS is not really an RDF application since there are syntaxes in just XML (not in RDF)
Guus: but it cannot be ignored as an application that uses RDF.
Bill: cannot submit just an ontology; the application submitted must actually do something.
Ivan: with Mozilla it's your private data in RDF.
Pepper: do TM apps qualify?
Ralph: I propose that any Topic Maps application that supports our translation mechanism be accepted to the ADTF index
<danbri-laptop> ...UML, LDAP, ...
<DavidW> <rathole>HERE</rathole>
<Zakim> danbri-laptop, you wanted to sympathise with the TM case, but note that plain XML, MathML, KIF, Prolog, all have a case for this
<danbri-laptop> (I might've also noted that RDF itself could grow and mature... an RDF 2 might have strong KIF/CommonLogic, CG and TM influences... but RDF remains the architectural focus)
Chris: believe the ADTF page should be limited to apps that work with RDF & OWL
<Zakim> FabGandon, you wanted to talk about CG
<jjc> how about restricting to applications have web pages that validate as (X)HTML?
<jjc> or provide correct use of language tags?
<jjc> (jjc asides above)
Fabien: if we include Topic Maps apps in this list I will not be able to maintain my position to the Conceptual Graph community that they must support RDF & OWL
David: what is "free" -- as in "speech" or as in "beer" ?
Libby: "free" meant "not costing money"
Phil: feel there is space for a soft line
... we could talk about the "Web of meaning"
... in WOM there is space for other stuff
Chris: this is exactly what I would like to avoid
<danbri-laptop> some W3C Activity statement excerpts: "The goal of the Semantic Web initiative is as broad as that of the Web: to create a universal medium for the exchange of data. It is envisaged to smoothly interconnect personal information management, enterprise application integration, and the global sharing of commercial, scientific and cultural data" ... "The principal technologies of the Semantic Web fit into a set of layered specifications. The current components are the Resource Description Framework (RDF) Core Model, the RDF Schema language and the Web Ontology language (OWL)." "The Topic Map (XTM) and UML communities have been finding increasing synergy with the RDF family of technologies." --
<jjc> (draft MSG)
Chris: a lot of people in this WG are working on other technologies that are not specifically RDF&OWL but that are related; e.g. KIF
... you expand the charter of the applications page significantly if you force them to deal with these issues
Mike: could avoid the issue by titling the page "RDF And OWL Applications"
... as a theoretical point of view, there is no reasonable grounds for saying something that is not RDF & OWL is not the Semantic Web
Gavin: will there be a discussion 6 months from now on what goes on a "Semantic Web Applications" page?
... have we done anyone any service by taking off the label "Semantic Web"?
Steve: Can the line be drawn at semantic technologies based on XML and URIs? I joined the WG specifically because the W3C Activity Lead wanted my community to be part of the Semantic Web
Mike: it's not practical to say we can draw the
line to include Topic Maps [and exclude others]
... RDFTM is a Task Force in this WG so therefore this is being considered
... we can choose to catalog just RDF/OWL applications as a way of bounding our work
Gavin: this would solve the immediate problem but not solve the larger problem of what the Semantic Web is and isn't
DanBri: this is not a static situation
... EricM is a very inclusive fellow; he and others go around trying to connect communities
... considerations from Topic Maps and others influence what the Semantic Web is
... the context we are chartered in is RDF, RDF Schema, and OWL
... the future of the Semantic Web will be more "Topic Mappy"
... you're here to do the mapping
Steve: does every Topic Map application have to support RDF directly?
DanBri: once we have a WD out we can say the entire universe of Topic Map activity joins RDF
Gavin: when talking about mapping into Semantic Web, what does this mean?
DanBri: being able to run SPARQL queries against data that was published as a Topic Map
Gavin: does this make Topic Map part of the
Semantic Web or the transform part of the Semantic Web?
... if GRDDL is adopted, does HTML become part of the Semantic Web ?
... I'm trying to understand whether it is the transformation that becomes part of the Semantic Web or the technology that was formerly outside?
Guus: I prefer to stick with RDF/RDFS/OWL defining the scope of ADTF
<danbri-laptop> aside: the phrase 'lowercase semantic web' is being used by some for XSLT-able xhtml markup that carries semantics, eg see
<danbri-laptop> (Tantek was here yesterday)
Guus: PROPOSE that ADTF registry be limited to
RDF/RDFS/OWL applications
... and we discuss on a future telecon what is "part of" the Semantic Web
<dom> I just wanted to note that the log of applications is also limited to "free" applications
<dom> ... which isn't saying that non-free applications are not part of the SW
Chris: my goal was to scope the work of the ADTF
<dom> ... that's only called scoping a problem, AFAICT
Possible DRAFT msg to TAG [Jeremy]
<jjc> s /primary concerns are/primary concern is/ in DRAFT msg
<Zakim> pepper, you wanted to add the rape of the concept of "fragment" to the list of other important concerns
JJC: I would object to adding fragments to this as it gets more philosophical
Steve: inability to identify document fragments is an issue
Dave: it's less important for us to address Web Architecture issues than to address Semantic Web issues
<dom> [I don't see why this makes it impossible to identify document fragments]
Dave: I don't want to cloud the issue with things that are not crystal clear
DanBri: the more we can do to narrow down to a
closeable part of the problem space, the better
... we don't care whether cars and airplanes can be identified with http: URIs; this is about RDF properties
Alistair: the tone of this message will make
people on the other side of the debate dig in their heels
... the issue needs to transcend the opposition
Guus: this is a very factual message
Chris: suggest dropping "failure to resolve" and just leave "This issue is impacting..."
<jjc> Suggestion to drop final para
DanBri: this debate should not be allowed to go on for another 2-3 years
Guus: we may not be able to reach consensus on
this today
... may need to postpone to a future telecon
BenA: my impression is that any argument we
give that is based on "it would be hard to redeploy" would not help -- Tim
would not be receptive
... technical arguments would make a stronger point
Guus: my assignment is to chair a deployment
group
... if deployment issues don't count, why are we here?
David: this message is going to the TAG, not specifically to TimBL
Patrick: this is a request to the TAG for
closure, not consensus
... the TAG can close on the issue with dissent (if necessary)
Mike: prefer Seattle area or Galway (concurrent w/Sem Web conf)
Jeremy: prefer to have it outside the US
... due to one member not being able to attend this meeting due to visa issues
Steve: Montreal in August is one possibility
<danbri-laptop> +1 w/ jjc's concern
Steve: w/Extreme Markup conf
Gavin: Ottawa is another option
<libby> +1 on extreeeeme
Gavin: my office is in Ottawa
Mike: I can look into hosting in Vancouver
<danbri-laptop> (I abstain re whether i can attend any non-Europe travel: funding uncertainties... I can pay my own way to Galway happily enough I think)
Andreas: DERI would likely be willing to host in Galway
<danbri-laptop> see
<danbri-laptop> (for bar discussion: if uses Javascript for hyperlinks, is it a Semantic website?)
<dom> (I would argue that they don't use enough declarative sematics to qualify as such)
Mike: is November too late considering the WG charter ends 31 Jan ?
Guus: October would be nice
<jjc> I will find it easier in November,
Guus: I'll do Web poll on Vancouver & Galway
---- closing summaries ---
Guus: there were 17 documents on the reading
list for this f2f
... we should all be glad with the progress over the last 4 months
... we should look forward to closing some of our Task Forces; this would be a positive
... glad to see nice collaboration on UML and Topic Maps
PROPOSE to cancel 10 March telecon, next telecon 24 March
scribe: make 24 March a 2-hour telecon?
ADJOURNED
*quick note: I don't have much formal training in PLs, so if I'm wrong, please correct me. For reference, I know Python, C++, C, Java, OCaml, Lisp, and Prolog (in that order of familiarity).
The quick one liner:
Why are objects that we use in programming so vastly different from real-world objects?
The slow multi-liner:
Human languages can talk about the state of the world, and all of them contain nouns, which, rather than describing a particular thing, indicate a set of things that share a certain property. Cars can be driven; boxes are containers; speakers produce sound. People from different cultures with different languages, despite differences in nomenclature, typically have common words that carve up the world the same way. In addition, modern science has found precise mathematical relationships between classes of things. All this is to say that there is such a thing as "real-world objects"; the notion is not arbitrary (and it suggests there is a right way and a wrong way of describing things).
Most programming languages have a concept of objects, and allow programmers to define them and describe relationships between objects. The idea of objects helped programming with type-checking, encapsulation, code reduction, etc. In fact, an argument can be made that the job of the programmer is to figure out a representation (real-world object -> code -> binary data) of a problem and a process to solve the said problem.
Yet even with the number of programs being written and problems about objects being solved, rather than converging on a complete representation of real-world objects, objects in programs seem to diverge: an object from one project differs from an object from another project, and both typically differ from how a normal person thinks about the real-world object.
When the representation in code is the same as a person's understanding of a real-world object ("common sense" or common understanding), the person can process, reason about, and be productive with the code with ease (like integer object types). On the other hand, if the representation is not the same (unintuitive), then the person has to go through the documentation or look over each line to match up their internal representation with the representation in code, making programming difficult (as in any large-scale programming project).
Working Definitions:
Class of Real World Objects: A set of things that has the same property. ex: a cardboard box (made from paper, rectangular, contains stuff)
Relation: A relationship between objects. A relation between object A and object B describes a constraint on some properties of A and B. ex. a ball is "in a" box (constraint on the location of the ball)
Inheritance: A particular relation between object A and object B where object A has a subset of the properties of object B. (an ISA relation) ex. a paper box "is a" paper object. a paper object "is a" physical object
Functions: a relationship between the parameter and the return value
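These working definitions could be sketched in code. Here is a minimal Python illustration (all class names are invented for the example, not taken from the post):

```python
# Hypothetical sketch of the working definitions above.
# Inheritance is an "is-a" chain: each subclass keeps all
# properties of its base and adds more specific ones.

class PhysicalObject:
    def __init__(self, location):
        self.location = location  # every physical object has a location

class PaperObject(PhysicalObject):
    material = "paper"  # a paper object is a physical object made of paper

class PaperBox(PaperObject):
    shape = "rectangular"  # rectangular, contains stuff

    def __init__(self, location):
        super().__init__(location)
        self.contents = []  # a box is a container

    def put(self, thing):
        # A relation: "thing is in box" constrains thing's location
        # to coincide with the box's location.
        thing.location = self.location
        self.contents.append(thing)

box = PaperBox(location="shelf")
ball = PhysicalObject(location="floor")
box.put(ball)  # the "in a" relation updates the ball's location
```

Note how the "in a" relation shows up only as a method on one of the two participants; the asymmetry this forces is part of what the rest of the post criticizes.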
Objects in OOPL:
Objects in object-oriented programming languages (the most straightforward implementation of objects) are defined by the name of the class, with a list of attributes and functions. Relationships between objects are implemented through inheritance, templates, functions, or attributes. Many problems come from the use of inheritance, where the base class has to contain a subset of the derived class's properties, so there is no implicit way of sharing attributes between derived classes if they aren't in the parent class -- unless you restructure the inheritance tree so that it no longer resembles any real objects (which requires constant refactoring with each additional class). Note that functions in the class definitions are relationships between the parameters, the self object, and the return value.
Objects in logic programming:
Logic programming (the most straightforward implementation of relations) defines objects in terms of relations. Atoms can be objects, and classes may be declared with single-term predicates.
brother(A,B) :- isHuman(A),isHuman(B),father(C,A), father(C,B).
Defines the relationship "brother" for objects of class human
ownsPet(A,B) :- isHuman(A),isAnimal(B),has(A,B).
Another example of a relationship: you can find who owns a pet named kitty (?- ownsPet(X, kitty).) and you can find the pets that a person John owns (?- ownsPet(john, X).). (Think about how to program this simple relationship in an OOPL -- adding an owner attribute to class Animal and a pet attribute to class Human would require two functions and messed-up classes.)
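One hedged way to approximate this in an OOPL, without adding owner/pet attributes to the Human and Animal classes, is to reify the relation as its own object. A Python sketch (the names are invented for illustration):

```python
# Hypothetical sketch: modeling the ownsPet relation as a
# first-class object (a registry of pairs) instead of attributes
# on Human and Animal, so both query directions stay symmetric.

class Human:
    def __init__(self, name):
        self.name = name

class Animal:
    def __init__(self, name):
        self.name = name

class OwnsPet:
    def __init__(self):
        self.pairs = set()

    def add(self, human, animal):
        self.pairs.add((human, animal))

    def owner_of(self, animal):   # analogous to ?- ownsPet(X, kitty).
        return [h for (h, a) in self.pairs if a is animal]

    def pets_of(self, human):     # analogous to ?- ownsPet(john, X).
        return [a for (h, a) in self.pairs if h is human]

john, kitty = Human("john"), Animal("kitty")
rel = OwnsPet()
rel.add(john, kitty)
```

This keeps both classes clean, but unlike the Prolog version the relation is no longer derivable from other facts; it has to be maintained by hand.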
Some logic programming languages are object-oriented, and with clear definitions of objects and relations they come the closest to the way we think about objects. However, logic programming is (typically) done on Horn clauses and uses backtracking to solve the query (run the program). Therefore, you can't put all human knowledge into a Prolog program and still expect it to solve your query in a reasonable amount of time.
Conclusion
Object-oriented programming gives very good benefits to the programming environment (encapsulation, abstraction, code reduction, code reusability, type correctness), but common approaches limit the definition of objects, creating objects that no longer describe real-world objects and sometimes decreasing productivity in the process. Rather than thinking "what objects are there, and how do I use the relations between them to solve the problem," a programmer has to think "what objects can I implement that will give me all the benefits without any of the problems."
Logic programming approaches, on the other hand, have a particular control process, which makes intuitive implementations of real-world objects infeasible to execute efficiently.
So here's the discussion: How do you improve object and relation definitions in a way that is "natural" and useful to programmers? Is there a provable reason why objects in programming languages are so unintuitive? will they ever be?
[and bonus question: benefits/pitfalls of object-oriented functional langages?]
One of the best leaps a programmer makes on the path from newb to godhood is realizing that 'objects' don't map well to real-world objects. Programs work better, designs are cleaner... code sucks less when an object in code represents the embodiment of a singular concept, a singular set of invariants, rather than a real-world object. Is that a failing of language design? A quirk of calling it object-oriented programming rather than state-oriented programming?
Probably.
But back on point, since you seem to want a language where 'objects' do map to objects.
How do you improve object and relation definitions in a way that is "natural" and useful to programmers?
How do I personally improve them?
I start with changing the type system to a structural one. As you said, many of the problems come from the use of inheritance. The real world does not fit into a nice tree, and disparate libraries certainly don't integrate well enough together to agree on what the tree should look like. This is one of the key strengths of dynamic languages, and something that languages like Scala are moving towards.
It allows objects to be a subtype of another concept more easily/more naturally. It allows object composition/multiple inheritance more easily.
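As a rough illustration of what a structural type system buys you, in Python terms (using `typing.Protocol`; the classes here are invented for the example):

```python
# Hypothetical sketch of structural subtyping: a class counts as
# a Quacker because of its shape (it has the right method), not
# because it inherits from a common base class in some tree.

from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Robot:  # no relation to Duck in any inheritance tree
    def quack(self) -> str:
        return "beep-quack"

def make_noise(q: Quacker) -> str:
    return q.quack()

# Both satisfy the protocol structurally, with no shared ancestor:
print(make_noise(Duck()), make_noise(Robot()))
```

Two libraries that never heard of each other can both produce `Quacker`s here, which is exactly the "disparate libraries don't agree on what the tree should look like" problem the comment raises.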
I've a few other ideas, but are not really as sure that they're good ideas. Moving away from the inheritance tree is, in my current estimation, a good idea.
Agreed, and not having multiple inheritance, mixins, or traits in the mainstream languages seems to be problematic. As you stated, the real world doesn't fit into a nice tree.
As you stated, the real world doesn't fit into a nice tree.
The idea behind objects is that they are *peers*. You do *not* model object-oriented programs as a tree!!!!!!!!!!
Never.
Ever.
Unfortunately, many people get this aspect of OO completely wrong. Ask any seasoned developer what they think of Model-View-Controller and they'll say it's good. However, quite often their code has an emblematic OO code smell: "Controller Trees".
There is a very, very, very important reason why we try to normalize our behavior substitutions to fit a tree structure, however. Provided that the objects in the problem domain are peers, then we can dynamically substitute the behavior of peers to get rich, run-time behaviors.
Compare this to frame-oriented programming, the antithesis of OO. Frames model programs as lattices. As a result, there are no constraints on what structure your object hierarchies can be in. How do you intend to typecheck something like that? Moreover, how do you assign responsibilities to subsystems and divide out tasks among team members, if your code is a Spaghetti-modeled lattice?
How would you move away from the inheritance tree, if inheritance is where most of the strength of OO comes from? The good thing about OO is that you can treat derived classes the way you do the base class, while adding and modifying some aspects. Multiple inheritance helps, but are there alternatives?
I've a few other ideas, but are not really as sure that they're good ideas.
care to share? I'd love to hear any + all suggestions.
Also, I realized I'm making an assumption that if a person's idea of an object = real-world object = object in code, then it will be easier for the programmer to code, code will be reusable, things will go smoothly, and the sun will shine. Which may or may not be true... and is a bit related to your first point.
There is a difference between inheritance (including the contents of a base class) and subsumption (being able to use a subclass where a base class is expected).
They often coincide, but need not necessarily be 1:1. You said you use python. Objects there need not inherit from some interface to be used with a method, even though the method presents a sort of interface (supply these members or else!). Some type systems' traits or mix-ins do stuff like that statically.
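A minimal Python sketch of that "supply these members or else!" point (the names are made up for illustration):

```python
# Hypothetical sketch: Python's implicit interface. total_size
# demands a .size member; any object that supplies it works,
# with no shared base class -- subsumption without inheritance.

class Crate:
    size = 4

class Herd:
    size = 7

def total_size(*things):
    # "Supply these members or else!": an AttributeError is
    # raised at call time if any argument lacks .size.
    return sum(t.size for t in things)

print(total_size(Crate(), Herd()))  # Crate and Herd are unrelated types
```

The method's expectations form an interface even though nothing in the code declares one; traits and mix-ins in other type systems make the same idea checkable statically.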
care to share?
Eh, perhaps after a night's rest. For now:
I think units of measure are good (though I'm so-so on this implementation).
Real world objects are not PL objects
That's crucial. I think that the most important task when teaching OOP is disabusing people of the idea that objects should be based on real world objects. This myth, coming from OOD, is one of the things that makes OOP as terrible as it is.
Really, they're not. Before even getting into a discussion about OOD, I would take issue with the assumption that the world naturally divides itself into discrete "objects" with natural properties. Not that these distinctions are entirely arbitrary, either.
For a computing-informed perspective on this, I recommend Brian Cantwell Smith's On the Origin of Objects. I know that this is not to everyone's taste, however...
Like "God Approximately" -
"The world simply doesn't come all chopped up into nice neat categories, to be selected among by peripatetic critters - as if objects were potted plants in God's nursery, with the categories conveniently inscribed on white plastic labels."
previously on LtU
Edit - found them
Ehud, I think you miss the point or are simply using your experience as allowance for hand-waving. Could you please share real world examples where you failed to use OO effectively?
Sure, there are edge cases where OO poorly describes a problem domain, but for systems integration it is extremely valuable.
In my experience, OO is really good when there is a stable description of the problem domain.
Ehud: if you feel it will take less time, you can instead enumerate all of the circumstances you know of where it is possible in principle to use objects effectively. :-)
I'm not just being snarky..
That's not a bad place to be, and it's a fine reason to use OO programming, but it shouldn't be confused for an endorsement that OO is in any sense the "right" model for programming.
OO "works" because it is the least-bad and least-incomprehensible mapping that has been identified to date.
Hmm, it looks to me that it took off because it had plenty of trivial and concise examples giving the illusion of simplicity and plenty of code reuse and extensibility.
Contrast with examples discussing ADTs, where extensibility is often not even discussed, and any person not steeped in all the theory could easily reach the conclusion that OO is superior.
OO "works" because there is no consensus or objective definition regarding what it means for a paradigm to "not work" (or, for that matter, to "work"). It is a word without meaning.
That, and effort justification.
Seriously, most OO programs I've seen spend most of their code in design patterns to escape their OO shell, and the rest in non-OO algorithms. Somehow, people point at this immense struggle to escape the turing tarpit, observe marginal success, and call it 'working'. I call it 'overhead'.
(To be fair, this isn't unique to OO. People do the same thing for FP, i.e. pointing at monads as though to prove FP 'works'. I've never understood this phenomenon; I cannot distinguish type-0 computation models by ability to model other paradigms. Only local reasoning and expressiveness distinguish paradigms.)
James Gosling said something to the same effect a while back. But having a "stable description of the problem domain" seems to be more of a throwback to the days of closed, monolithic systems and waterfall designs.
Actually, I'm not bashing on OO, but Java-style OO (single inheritance, single dispatch, no extension methods). A language like Scala seems to more "easily" map from a domain to a model.
I always found PSYCHOLOGICAL CRITICISM OF THE PROTOTYPE-BASED OBJECT-ORIENTED LANGUAGES to be an interesting paper.
Subtext wants to be a programming system that mimics the way that humans "naturally" think about putting together code - an automated copy-n-paste system.
For me, the bottom line is that modern, mainstream OO languages have been sold to us as a bill of goods. There's really nothing "real-world" about mainstream OO. I believe Alan Kay lamented the fact that he didn't call OO message-oriented programming.
I believe Alan Kay lamented the fact that he didn't call OO message-oriented programming.
I may have heard Alan Kay say that once, but I'm not sure. What he has lamented is that he coined the phrase "object-oriented programming", because he said that got everyone focused on the objects, which wasn't his intent. What he meant to get across was the power of messaging. He said what's important is the relationships between objects, and that the real abstraction is in the messages, not the objects.
In a couple speeches he kept using the Japanese word "ma", which roughly translates to mean "what goes on between things". He said the best English translation (though it's still rough) is "interstitial". His original vision for OO was that each object was in effect a computer. If you layer on the idea that any computer can emulate any other computer, that gives you an idea of what he was driving at, but the "computers" were supposed to be incidental. Messages are a representation of interaction. That's what's important. The objects are mere endpoints for that interaction. I use the analogy of the internet. Computers on the internet are just "boxes". What's important is what goes on between them, the interaction between the endpoints. The internet and OOP were formed in the same environment. The objective of each was to decompose what had been monolithically designed systems into finer grained models that would scale into massive systems without breaking.
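A toy sketch (invented names, not Kay's design) of what it looks like to treat the message send, rather than the object, as the unit of interest: every send goes through one dispatch point that can be intercepted, queued, forwarded, or answered with "message not understood":

```python
# Hypothetical sketch: objects as mere endpoints for messages.
# All interaction funnels through receive(), which dispatches by
# selector name -- the interstitial "send" is where the
# interesting behavior (interception, forwarding, etc.) can live.

class MessageEndpoint:
    def receive(self, selector, *args):
        handler = getattr(self, "msg_" + selector, None)
        if handler is None:
            return self.message_not_understood(selector, args)
        return handler(*args)

    def message_not_understood(self, selector, args):
        # A well-defined failure response, rather than a crash;
        # subclasses could forward the message elsewhere instead.
        return ("doesNotUnderstand", selector)

class Account(MessageEndpoint):
    def __init__(self):
        self.balance = 0

    def msg_deposit(self, amount):
        self.balance += amount
        return self.balance

acct = Account()
acct.receive("deposit", 10)   # understood: dispatched to msg_deposit
acct.receive("withdraw", 5)   # not understood: uniform failure value
```

The point of the sketch is that nothing outside `receive` ever touches an object directly, so the "between things" layer is a real place where policy can be added.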
To expand the point even further I saw a message he posted on the Squeak developer list which basically said, "You don't have to follow my model for objects (member variables and associated methods). Create your own. What's important is messaging."
If you really want to see what he was driving at you need to look at Smalltalk (or Squeak, its modern version). Some of his ideas were adopted in modern OOP dev. systems, but the important ideas were lost.
My sense of what he was getting at with OOP was ultimately to create a system where people could create their own languages to model their own systems. If you look at Smalltalk it's very easy to create your own DSL. Objects and messages were a linguistic infrastructure for achieving that. That's just my read of it. I could be wrong.
Another important thing to note is that Kay did not consider Smalltalk and his ideas of OOP to be "the final answer", not by a long shot. He's used some historical analogies for it, like "a minor Greek play", or "Gothic architecture". These are very old by modern standards. The problem he says is that just about everything else that's used in what we call modern programming (even OOP as it's popularly known) is analogous to things which are much older and archaic. He uses analogies to ancient Egypt and pyramid building for them. What he's long hoped is that someone or some group will come along and "obsolete" what he's created, in other words create a new architecture that is so good it eclipses what he helped create almost 40 years ago. He's said with the exception of Lisp (which he calls the most beautiful language yet created), which predates Smalltalk, he hasn't seen that yet.
reference to Kay's comment on messaging
Thanks for sharing this Kay gem, I've not seen it before.
In particular, I've never heard him say the following:
The fact Kay realized this fine point of design in the late '60s (according to him) is why he is a Turing Award Winner.
I know Lisp programmers who even today don't understand this point - their code is succinct but the API has a massively unnecessary learning curve due to unclear boundaries.
Sometimes my coworkers object to me paying extraordinary attention to detail about what the boundaries are. However, if we don't pay attention to boundaries, we may as well all be Netron Fusion COBOL programmers munging VSAM records and EDI data formats.
anybody know of things which could reasonably be said to be excursions along the path of the 'ma'?
Some things of 'ma':
Exemplified by projects such as Orc and CeeOmega, support for 'eventual send' in E language, research into the 'kell calculus' on Oz/Mozart, at least one implementation of transactions for process calculus, and more.
My own language design is very heavily focused upon achieving useful properties in the interstitial space, involving distributed transactions, automatic distribution, regeneration after loss of a node, support for tunneling of 'context' information to support cross-object concerns like process-accounting, logging, and security... and so on.
When I first read Kay's words (years after he spoke them) I saw them as a confirmation of a conclusion I had already reached a couple years before. I favor the word 'interstitial', which I learned after reading Kay's comment; prior to reading it, I simply didn't have any single word to describe the challenges objects/actors/processes face across space and over time.
Regardless of whether Kay's words fall upon deaf ears or whether he's preaching to the choir, I would say there is still plenty going on in this area.
thanks!
It is a matter of investing extra time up front for greater productivity later on.
The trick is to "make stone soup. not boiled frogs".
Today, one of my coworkers commented that my adding two new classes to the system to increase cohesion and decrease coupling was probably unnecessary, and he disliked the fact it added an additional step he had to go through to complete a task. I basically responded that we should follow the Google Search Principle: hiding the additional steps through administrative tools that step us over the additional steps (and also allow us to step into the complexity when necessary). We can also provide reasonable defaults for dependency injection. For instance, either whitelist or blacklist security policies could be the default, depending on what we're doing. These defaults would be hoisted into some sort of schema authoring tool that defines the injections a framework defaults to.
Most tools simply do a poor job adapting to complexity, but these tools are a direct reflection of what programmer's ask the tools to do; I don't just want a tool that does rote tasks, I also want a tool that is self-forming, self-describing, self-reasoning, self-healing, self-*. I want a tool that programs with me, and yet also a tool that wants me to control it. Most tools control the programmer. Such goals are not easy and are a labor of love, require lots of iteration, and I cheat in many ways to avoid big up front design - "make stone soup. not boiled frogs".
As an aside, sometimes you just look at APIs done by big corporations and wonder what the hell they were thinking. For instance, .NET 3.0 is a train wreck from an OO perspective - I refer to it as "a mix of OO and architect's-kitchen-sink". All the metathings they introduced for dependency analysis should've been rolled up into a MetaObjectProtocol.dll. Instead, there are two duplicates of DependencyObject and DependencyProperty, each in their own assemblies (one for WCF and another for WPF). On top of that, the actual API has holes in it that allow the dependency analysis process model to be subverted (i.e., there is no DependencyPropertyUnsetValue singleton denoting "nil" in the dependency analysis subsystem, instead there is a DependencyProperty.UnsetValue() method that calls the object() constructor, and so an object instance is used to denote "nil"). And I still feel much of the guts could've been folded into the VM as a VM innovation. They also should've realized a Uri is the ultimate metathing, since it is what identifies every object in your system. Had they realized this, they would've completely re-written the Uri class and made a new one for .NET 3.0. Just my humble opinion, of course.
Re: comment-48119…
Cross-reference: comment-62754.
so wait a second. if it isn't the objects so much as it is the messaging... but we've got this whole thing of OO where people are all like, "hey, encapsulation is a good thing!" and yet we end up with problems because of it (e.g. the OO side of the expression problem coin) vs. just having "data is the core, then algorithms around/on top of that." so while there might be some things to be said for OO (as long as we make sure to disallow inheritance :-) is it really just sort of a cosmic bad wild goose chase?
and, so, furthermore, uh, what actually is it that is so magic about messaging? is it something about abstraction and loose coupling (and possibly separate processes)? like, if it is all statically checked and doesn't allow for loosey-goosey "message not understood but lets just keep it between the two of us and not blow up the whole program" of smalltalk and ruby, then does it not have the key things Dr. Kay would advocate?
Encapsulation is a good thing for subsystems, ie. large systems of aggregate components. I think encapsulation in the small is not so good, as it impedes equational reasoning.
Details that are hidden can be changed. Details that are hidden can be secured. Encapsulation serves a definite purpose in modularity, composition, and security.
The mistake is not encapsulation.
A mistake is the belief that everything-is-an-object is a good thing. A mistake is the use of objects for data-types. A mistake is the use of objects for domain modeling.
what actually is it that is so magic about messaging?
The value in messaging is that it is not magical at all. It corresponds well to physical processes (albeit not quite so well as conservative logic), and yet it still provides an efficient model for computation and natural approaches to both synchronization and concurrency.
But I don't think that's what you're asking.
The direction OO took has been largely focused on modifying the objects (mixins, classes, inheritance, etc.) and treating objects as containers. A focus on messaging would have resulted in a different path. Instead of objects being given all those gaudy wrappings, the messages, queues, the relationships and links between objects would have received the features. OO would have gone a different direction - configuration of communications, dependency injection, dataflow and orchestration, automatic fallbacks, synchronization patterns, distributed transactions, and so on.
I believe that OO would have turned out much better with a focus on messaging rather than gaudy trappings for the objects. I'm not certain Alan Kay feels the same, but his statement above suggests he at the very least feels considerably more attention should be paid to the messaging and providing 'features' in the interstitial space between objects. I described several such features in the list above.
Any time you introduce a symbol as an abstraction for some value, the symbol should only represent the value and not its internal details.
Also, I would never, ever suppress message not understood. It is part of protocol design and therefore boundaries to have a well-defined way to respond to failure. Failure is a critical event notice, and shouldn't be responded to in ad-hoc ways such as adding properties to a class, imho. If you have to add properties at run-time, chances are your design is not very stable.
If you want a decent introduction to messaging, check out "Advances in Object-Oriented Metalevel Architectures and Reflection", edited by Chris Zimmerman.
That being said, encapsulation is not always a good thing, and in some contexts premature encapsulation is possible. Continuation-Passing-Style optimizing compilers are a good example of how to break syntactic encapsulation, while preserving the semantics of the subexpression. Tail call optimization is fundamentally an encapsulation breaking procedure.
Note, this describes both parametricity and encapsulation. I think the former is essential, but the latter not so much.
At work, we use the phrase "Stick-built" to describe a solution we've not yet parameterized, because we've yet to understand the permutations and are still solving the problem in a mostly ad-hoc way. However, once parameterized, we don't junk the phrase encapsulation, because it still means a lot.
Encapsulation is what makes objects peers; a network of inter cooperating parts formed on the fly. The Law of Demeter captures this fairly well, and it effectively specifies **the effect of** constraining system parameters in various ways, some better than others for encapsulation purposes. [Edit: For a related discussion on this and trade-offs in class modeling, see Walter L. Hürsch's Should Superclasses be Abstract?]
Parametricity helps only in the case of inheritance, by forming type-based compiler-checkable contracts for awkward scenarios such as Template Methods. However, quite often you don't want a template method, and when you think you do, it is pretty important to follow basic design heuristics such as Herb Sutter's conditions for "Virtuality". To me, template method implies some ownership of a protected resource, and ownership implies control of the lifetime of that resource.
There are four levels of specification: defined, implementation defined, unspecified and undefined.
Visibility modifiers and security managers define what capabilities clients have access to.
However, encapsulation is not nearly as important a principle as data independence.
I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace...
- Donald Knuth, Interview by Andrew Binstock
The larger problem here is ABI compatibility and interoperability with old languages - it is what forces most premature encapsulation.
the idea that we want to extend our code w/out ever changing the source files bugs me a lot.
... the idea that we'd want to extend other people's code and services without changing their source files seems pretty straightforward.
There certainly needs to be a balance of the two - ability to modify source-code and modify a service you "own" on one hand, and the ability to extend systems without such modifications on the other.
The point of OO is not merely encapsulation.
Neither is it abstraction (which you can do with procedural and functional programming just as well).
Neither is it 'simply' about messaging.
So what is 'so magic about messaging'? As Alan Kay put it in The Early History of Smalltalk: doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming.
Regards,
Lance
apologies for probably being dense, but for me at least, that could stand further explication :-) i mean e.g. where does that fall on the spectrum of say monads to dependency injection to getter/setters? (i shall go try to follow the lead of that quote online and see what i can learn.)
This seems more like a data modelling/ knowledge representation pondering... The best book I've read on these kinds of things is Data and Reality. Very expensive on that link though!
Any program is only going to model the behaviour of an object that it cares about. If you think about where "real" behaviour comes from, it is the physical structure of a real-world object. A real-world object can sometimes be indistinct, and it may be that lots of people have dissimilar mental models, or even different mental models for the "same" object in different contexts. So, do we model individual quarks, leptons and quantum gravity (hm...) and wait for emergent behaviour? No. We model observed behaviours that are relevant to the enquiry at hand. Anyway...
Logical programming languages do not make this problem go away. Who decides on the terms? The granularity? It's still just the guy writing a program to model a situation. This is what leads the "Big Knowledge Representation" guys to have massive fights over ontologies, and you can look at the progress of the semantic web to see how that's coming along.
Anyway... back down to earth... I found this "DCI" article quite interesting in at least trying to formulate a way to express the different roles the "same" object may play. Worth a gander.
that book is one of the best books on information science ever written, and as such a wealth of information on how to design programmable systems in-the-large
If you note, I am the big heckler in response to the DCI article thread. That article was poorly written, and hopefully my feedback will sink into the authors' brains.
If you really think anything in that article is a new idea, then you need to crack open a good model-driven architecture object-oriented analysis text.
.. that's the only one they have at my library =/.
Yes, I have the 1978 version and am unaware of a different version.
I've heard Brian Cantwell Smith's book exhorted in various circles, and let me just say it is a mistake. It was not only a painful read, but filled with logical fallacies. One of these days I'll post an Amazon.com review ripping it to shreds.
It is amazing how many APIs are poorly designed. Just look at .NET Framework 3.0 and WPF and WCF.
- You need a Ph.D. in distributed systems research just to understand some parts of WCF. (There are some good parts.)
- WPF has so many complex interdependencies that it basically violates every law known to man; how MSFT ever finished building it is a mystery.
Also look at J2EE. Poor "object-based" specs like these are the darlings of component vendors, because the spec always requires you to pay for some third-party library to accomplish anything. Because vendors love them so much, marketing dollars get pumped into promoting these specs.
As a result of examples such as these three big failures, practitioners equate OO with these awful APIs/projects/specs.
However, OBJECTS *ARE* INTUITIVE. The hard part is understanding the problem domain, but that is what Brooks warned us about in No Silver Bullet, anyway!
However, OBJECTS *ARE* INTUITIVE.
I agree wholeheartedly. The problem is that while objects are intuitive, they so often promote bad intuitions. It is very hard to assess how much this is a consequence of the model and how much a consequence of the developers. And of course, the "best" solution is the one that chooses the right impedance match between the two.
So you have the magnitude right, but the sign still worries me...
I see two bad consequences of objects being intuitive, one cognitive and one social. The first seems ironic: given a tool easing effort to reason, many developers reason so much less they manage to retain flawed results. The same way work expands to fill available time, we also see effort contract to suit ease of tools.
People tend to excess optimism. When a problem is almost solved, folks decide they're close enough and stop trying. A bad social consequence I mentioned seems related: many folks aim to solve problems in plausibly superficial ways, so as long as it looks nearly right, this counts as "showing up" to garner social credit; failures are just bugs.
Sometimes I seem productive only because I try hard to think of all ways code can go wrong, and there are so many. Objects can simplify, with fewer ways things can go wrong. But we still have an obligation to aggressively look for remaining failure modes due to complex semantic interaction, including failures in an object model's runtime.
Perhaps elegance due to object clarity can mislead developers to ignore subtle issues, such as edge conditions in contracts.
I agree wholeheartedly.
Too many OO applications start their lives as domain simulators. This 'false start' is implicitly encouraged by such textbook examples as Person (Employee, Manager), Vehicle (Car, Tank, Helicopter), Animal (Cow, Duck). This 'intuition' is at the same level as wondering whether the little men living inside your television would like tacos and ice-cream as a reward for all their hard work.
It doesn't help that OO is simply awful for domain modeling. You'd be better off with ye'old relational and procedural. OO makes it difficult to describe ad-hoc queries, to tweak rules, to introduce complex relationships, to support many simultaneous 'views' of a system, and to integrate external data resources.
Developers eventually learn that OO is for modeling program elements, not the domain. But this only happens after a lot of black eyes, bruised knuckles, bitten arses, and learning to walk around in a clunky suit of boiler-plate mail. Developers battle with concurrency, IO, persistence, observer patterns and reactivity, reentrant callbacks, cache maintenance, etc. Boiler-plate is all those design patterns they end up using: functors, queues, futures, pipes, DI, visitors, exception tunneling, messages, et cetera.
It is unclear to me how developers come through imagining that OO actually helped. Perhaps it is some mix of cognitive dissonance, effort justification, selective memory, and blub paradox.
I'd prefer we find some good physical intuitions. My RDP intuits a sort of 'people watching each other' model, aware of the eyes upon them, capable of only local actions (like waving). But, if we must make a choice, I think it would be better to favor an initially unintuitive language than one with known problems. (But we certainly don't want counter-intuitive! By unintuitive, I mean that the language is mostly neutral for intuition.)
I'd even suggest the possibly unintuitive philosophy of making the difficult things easy and easy things possible. For example, with OO we could reject ambient authority, embrace the object capability model, and this makes a 'hello world' program and FFI integration more difficult but better supports various in-the-large robust composition properties. There are many things we can do along those same lines to support concurrency, coordination, persistence, partial-failure, graceful degradation, resilience, robustness, reactivity, real-time, modularity, distribution, security, upgrade, debugging, testing, integration.
Besides, we can build new intuitions in our languages. Invariants and local reasoning offer a justifiable basis for intuition. Ocaps quickly become intuitive, since it is easy to understand 'nobody can interact with this new element until I give them a reference'. By comparison, the OO modeling of programs is based on the unjustified claim that programs should be a concrete representation of a bundle/substance model that is volatile in our minds.
OO makes it difficult to describe ad-hoc queries, to tweak rules, to describe complex relationships, to support many different 'views' of the system, to integrate external data resources.
That's o.k., but most of the time you don't need it, just as there are no industry jobs for writing software that controls and coordinates swarms of robots, evolves the global brain, and other sophisticated SciFi and MIT/DOD technologies. General-purpose languages are used for general purposes, i.e. they shall facilitate the 99% of common tasks which are fairly unsophisticated.
In my day job I work together with embedded systems developers who often lack software engineering 101 knowledge and practice (modularity, separation of interface and implementation, etc.). OO was the vehicle to stuff this down programmers' throats, like structured programming was before. Since this succeeded only halfway, the same is now being iterated with FP, with great promises, much snake oil, but only mild success, it seems.
I'd even suggest the possibly unintuitive philosophy of making the difficult things easy and easy things possible.
As Nietzsche said he hadn't found yet a single philosopher who lived at the heights of their own claims and demands. We have yet to see the very first non-trivial algorithm / design pattern of yours that solves an actual problem in language implementations, rather than a huge bag of ideas and an even greater bag of requirements which are tweaked and stacked up faster than anyone can follow, and which become dead expertise that finally fades into oblivion. This is not good philosophy, and it is no science, no art, and no engineering.
I agree that you don't need domain modeling for most applications. I had thought that clear from the paragraph immediately before the one you quoted, but I guess it isn't because simulation is just one form of modeling. My point was that OO manages a double whammy - it encourages you to do the wrong thing and it is relatively poor at what it encourages you to do.
Anyhow, I'm not denigrating OO just because I want to replace it with something that helps solve problems in my day-job (which is, nominally, developing software for controlling and coordinating swarms of robots). I admit that's part of it. But I also believe that OO is worse even than its predecessor - the old mix of procedural programming and multi-process architectures where whole processes might be considered objects interacting on a shared bus or blackboard or database, or configured externally. I'd favor actors or Erlang processes or FBP to objects. At least with process per object, it's rather obvious you shouldn't be describing composite patterns and visitor patterns and so on with 'objects'.
As Nietzsche said he hadn't found yet a single philosopher who lived at the heights of their own claims and demands.
Nor will I, I am sure.
(It would be sad, indeed, if philosophers only ranted about maintaining the status quo. Those who demand systemic change will always face a barrier to living at the heights of those demands.)
But I'll raise you another quote. Ralph Waldo Emerson advises: Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day.
That is the advice I follow.
We have yet to see [...]
Are you feeling impatient?
Skepticism is, of course, warranted. OO and FP and CORBA and COM and many other paradigms and architectures have made promises then failed to deliver. You have nothing but my enthusiasm to suggest RDP can do better. People have a difficult enough time grokking FRP without having used it, so I don't expect you to grok temporal RDP just by looking at a description.
You seem frustrated that I provide a huge "bag of requirements". But I think it important to develop and share such lists for three reasons. First, it is difficult to improve languages after they are in use, and incremental design is a lot less flexible than up-front design, so it's very important to have the huge bag of requirements up front. Second, I'd like more eyes on the same design problems I'm trying to solve because, honestly, the problem is a lot bigger than I am. Third, I see a lot of people trying to improve things without any real direction. Can you tell me what it means for a paradigm to "work" in a manner that will usefully distinguish paradigms? Did you remember the corner-cases and scalability issues and partial-failure conditions and concurrency properties of real applications? A list of desirable properties, even described informally in terms of user stories, is one way to judge the worth of a paradigm. (Greenspunning, frameworks, and non-localized design patterns or self-discipline are symptoms of paradigm failure, IMO, but that's not a popular view.)
I'm quite happy to move slowly in the right direction. It would certainly be nice, IMO, if more people used compasses based on pervasive language properties, even if they don't precisely agree with me in terms of priority.
I see OO as a paradigm that moved with the speed of fad in the wrong direction. Developers spend too much time fighting the paradigm (observer pattern, visitor pattern, ORM), or subverting it (with global data, singleton pattern, stack inspection, multi-methods, etc.), or working around bad implementations (dependency injection, abstract constructor patterns, troubles with synchronous and asynchronous IO). E is as close to OO 'done right' as I've ever seen, and Gilad Bracha's Newspeak is also excellent, but even OO 'done right' scores poorly on the gamut of requirements and user stories I consider important.
Zenotic inertia is the claim that it is impossible to begin because there is an infinite list of tasks which have to be performed before anything can be performed. It's not quite that I'm in a hurry, but I am surely too impatient for analysis paralysis and bad infinity.
It is difficult to improve languages after they are in use, and incremental design is a lot less flexible than up-front design, so it's very important to have the huge bag of requirements up front.
It is easy to improve a small and dense code base even for a single author. In my own case I'm reworking a code base of 5-7 KLOC for a couple of years ( apart from professional activities ). I wouldn't even call the design "incremental" and progressive. It is cyclic, critical and revolutionary: big step changes caused by discoveries made while working with the material. No matter how hard you think about requirements, you never get there by musing about how it should be and whether the list is complete. It's like working out a proof for a theorem which drives the invention of techniques which are impossible to conceive without doing this particular proof. It doesn't matter if it takes 5 or 10 years or lasts forever when the stream of requirements doesn't end - it usually does and being subjective = opinionated about what goes in and what is left out is sound.
In some sense I also mistrust "language design" when it is decoupled from implementation and the invention of algorithms. Just look at the current trend to make all data-structures immutable, based solely on preconceptions about what is hard and what is not so hard in code analysis. Without getting a feeling for what is actually hard to do, and where hints / decisions are absolutely unavoidable, you just stick with your prejudices. Same with the relative importance of problems. You can spend a huge amount of time advancing a formalism which does not solve a problem which is complicated or even tedious to solve without it.
easy to improve a small and dense code base even for a single author [...] cyclic, critical and revolutionary
That may well be the case while nobody else is sitting on top of that code base. Platform technologies - such as languages and protocols - are different, due to the inability to fix a massive codebase built atop them while attempting critical and revolutionary changes. Alan Kay has expressed regret about exactly this.
You can easily become a victim of your own success if you 'release early release often' with a platform technology. Critical, revolutionary changes are no longer feasible after a language "hits the larger world".
I've been writing code, prototyping, sketching implementations, running through a long list of user-stories. I've decided to cut several paragraphs discussing my progress (too big, off topic), and just say that I'm a little irritated at your implications that I'm a pretentious, prejudiced idealist that has spent all his time stuck in requirements analysis rather than trying and improving upon solutions. Besides, the process of trying things out has been my greatest driver for new ideas.
Prior to RDP (a new paradigm I developed earlier this year), my efforts were focused around layering and organizing aspects of existing paradigms (actors, dataflow, functional, relational). I had no plans to release an implementation until design had stabilized for a few years. But, since RDP is an original paradigm, I plan to release RDP (via Haskell library) even if I later find something better.
In some sense I also mistrust "language design" when it is decoupled from implementation and the invention of algorithms.
Language design should stretch beyond state-of-the-art (else, what's the point), but must also be grounded in implementation concerns. Our approaches to this dichotomy seem a little different. I'm happy to throw a bunch of ideas into the air then try to land them, whereas - if I'm understanding you - you'd prefer to keep at least one foot on the ground even for your revolutionary changes. Following this metaphor I think, perhaps, that my approach is more likely to have a ton of language designs crash, evaporate, or simply float away. But it also has a better chance of surpassing the mountains before me.
Just look at the current trend to make all data-structures immutable, based solely on preconceptions about what is hard and what is not so hard in code analysis.
I disagree: there is no sole reason, and those reasons are not preconceived. We developers have been bitten aplenty by mutable state.
Any given datum is generally useful for influencing more than one behavior, and thus often is observed concurrently. The single-observer scenario, where linear programming applies, is a rare specialization. It has been well observed that reasoning about structure that is mutating is 'hard'. And that's even before you concern yourself with 'corruption' by buggy or malignant observers. Languages with mutable data structures often force developers to explicitly copy large data structures many times in order to process it in a sane manner and resist data corruption.
Beyond that, mutable data-structure semantics do not scale well: for applications sharded across hosts, or split between GPU and CPU, the mutation semantics introduce high synchronization costs.
Mutable state should be regulated, avoided when practical, and well distinguished. (I'm not anti-mutation. I just want all the mutants to register!)
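As a minimal Java sketch of "registering the mutants" (the Registry class is purely illustrative): confine mutation to one clearly marked owner, and hand observers only immutable snapshots, so no defensive copying is needed at the call sites.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// "Register the mutants": mutation is confined to one clearly marked
// owner; everything handed out to observers is an immutable snapshot.
public class Registry {
    private final List<String> entries = new ArrayList<>(); // the only mutable cell

    public void add(String entry) {          // the registered mutation point
        entries.add(entry);
    }

    public List<String> snapshot() {         // observers cannot corrupt state
        return Collections.unmodifiableList(new ArrayList<>(entries));
    }

    public static void main(String[] args) {
        Registry r = new Registry();
        r.add("a");
        List<String> view = r.snapshot();
        r.add("b");
        System.out.println(view);         // earlier snapshot is unaffected
        System.out.println(r.snapshot()); // current state
    }
}
```

The snapshot is both a copy (so later mutation does not disturb it) and unmodifiable (so buggy or malignant observers cannot write through it).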
Without getting a feeling for what is actually hard to do, and where hints / decisions are absolutely unavoidable, you just stick with your prejudices.
It is a little unclear, but I think you're arguing against a sufficiently smart compiler. With that, I'd agree; simple and naive implementations should be feasible, and depending on 'sufficiently smart' compilers is a stupid idea.
However, languages can do much to make us more productive and intelligent by increasing expressiveness and local reasoning. To a very large extent, the same properties that help humans reason about code will also help machines. The converse is also true: the properties that an Integrated Development Environment can reason about can help the humans. Thus, a good language will tend to support a great deal of code analysis by machines by simple virtue of becoming better for humans.
I have no objection to using hints and annotations to support the implementation in achieving performance. But I would disfavor pragmas that affect language semantics, or at least consider them a bad language smell.
You can spend a huge amount of time advancing a formalism which does not solve a problem which is complicated or even tedious to solve without it.
I agree. And it's even worse if that sort of formalism gets implemented and becomes popular. OO would be a fine example of a huge amount of time and resources advancing a language design that does not solve any of the complicated, tedious problems it initially promised to solve (such as code reuse).
A while ago I read an article "How intuitive is object-oriented design?" (ACM link, free preprint available here). The article unintentionally displays some of the things I find wrong with OOD teaching. Quote:
For example, in a hotel reservation system, the class fax could inherit from the class email, since a fax object requires more handling (such as scanning and digitizing), hence has more functionality, than an email object.
When OOD is taught like that, I'm not surprised to see people mixing up real-world objects with formal ones and creating awkward inheritance hierarchies.
How do you improve object and relation definitions in a way that is "natural" and useful to programmers?
Stop calling them objects. Give them a scary name so that they don't seem intuitive anymore. ;)
There was a discussion on the Haskell mailing list recently whether 'monoid' shouldn't be called 'appendable' as that's "less scary and more intuitive," and many felt that this is misleading and misses many of the applications, as a monoid is a more general and precise concept than 'appendable'. I agree to this. Programs are formal entities, and they do contain abstractions that do not directly map to any real-world intuitions. The less the programmer is tempted to draw on intuition, the more it will help him.
So, ultimately, I'd say "don't adjust objects to match intuition, adjust programmer's attitude to not rely on intuition." Unfortunately, that's the exact opposite of what you are looking for.
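For what it's worth, the 'monoid vs. appendable' point can be made in any language. A hedged Java sketch (the Monoid interface here is illustrative, not any standard API): any associative operation with an identity qualifies, which is exactly what lets one generic fold serve both numbers and strings.

```java
import java.util.List;

// A monoid is more general than "appendable": any associative operation
// with an identity element qualifies, which is what makes generic folds possible.
interface Monoid<T> {
    T empty();               // identity element
    T combine(T a, T b);     // associative operation
}

public class MonoidDemo {
    static final Monoid<Integer> SUM = new Monoid<>() {
        public Integer empty() { return 0; }
        public Integer combine(Integer a, Integer b) { return a + b; }
    };
    static final Monoid<String> CONCAT = new Monoid<>() {
        public String empty() { return ""; }
        public String combine(String a, String b) { return a + b; }
    };

    // One fold works for every monoid, "appendable" or not.
    static <T> T fold(Monoid<T> m, List<T> xs) {
        T acc = m.empty();
        for (T x : xs) acc = m.combine(acc, x);
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(fold(SUM, List.of(1, 2, 3)));
        System.out.println(fold(CONCAT, List.of("a", "b", "c")));
    }
}
```

Calling integer addition "appendable" would be the misleading part; "monoid" names the actual, more general structure.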
There are copious semi-famous examples of college textbooks on programming or general OOA/D with horrible examples. The best one is:
public class Circle : Point {
//...
}
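For contrast, here is a sketch in Java of the relationship the textbook example gets backwards: a circle has a center point, it is not a kind of point, so composition rather than inheritance expresses the domain.

```java
// The textbook mistake claims a circle *is* a point, so a Circle could be
// passed anywhere a Point is expected, which is nonsense for, e.g.,
// distance calculations. The actual relationship is "has-a".
class Point {
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
    double distanceTo(Point other) {
        return Math.hypot(x - other.x, y - other.y);
    }
}

public class Circle {
    final Point center;   // composition: has-a, not is-a
    final double radius;

    Circle(Point center, double radius) {
        this.center = center;
        this.radius = radius;
    }

    boolean contains(Point p) {
        return center.distanceTo(p) <= radius;
    }

    public static void main(String[] args) {
        Circle c = new Circle(new Point(0, 0), 5.0);
        System.out.println(c.contains(new Point(3, 4)));
    }
}
```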
I agree to this. Programs are formal entities, and they do contain abstractions that do not directly map to any real-world intuitions. The less the programmer is tempted to draw on intuition, the more it will help him.
I think you are missing the point those of us in the minority tend to proselytize: objects in the large, functional programming in the small.
"Monoid" says nothing about what portion of your development team owns a given component. Objects do.
"objects in the large, functional programming in the small."
Functional programming in the large, objects in the small.
Programming in the large would be geared towards specifying interfaces between separate parts of a system that are developed in isolation. I can't see that functional programming is geared towards modularity and isolation.
But I'll admit that I may be missing your point.
The reason for my opposite view is due to concurrency issues. When you want a large system to work well with multiple independent threads, it is quite important to make sure that you avoid sync-points. A very neat way of achieving this is by requiring that objects avoid mutating global data, hence in effect working in a functional manner. This makes it much easier to reason about large systems as a whole.
In the small, each thread/job may be functional or object-oriented, from a whole system point of view - it doesn't really matter as long as inputs to the system are read-only.
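A minimal Java sketch of that division of labor, assuming nothing beyond the standard library: the threads behind a parallel stream share only read-only input and mutate nothing global, so there are no sync points to manage.

```java
import java.util.List;

// In the large: tasks share only read-only input and mutate no global
// data, so they need no synchronization. In the small, each task can be
// written however is convenient.
public class ParallelSum {
    static long sumOfSquares(List<Integer> input) {
        // `input` is treated as read-only; each parallel task is a pure
        // function of its element, so there are no sync points.
        return input.parallelStream()
                    .mapToLong(x -> (long) x * x)
                    .sum();
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4); // immutable input
        System.out.println(sumOfSquares(xs));
    }
}
```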
To return to the topic of the OP: Aside from a subset of man made devices, it has not been my experience that "real world objects" have well defined interfaces...
I think the point David Barbour mentions earlier comes in here. If you're trying to model a situation, then relying on a well-defined interface is folly, since as the situation changes, so may the structure of the problem domain. Centering a model around situations is generally not a good idea, unless you are building a fuzzy controller that uses heuristics to attack a problem that is intractable under hard real-time constraints, such as a self-balancing robot that must compensate in hard real-time.
The ultimate form of modeling a situation, in my humble opinion, is Presentation-Abstraction-Control, an architectural pattern that for most applications takes Model-View-Controller to its logical and disastrous conclusion.
Typically the code smell is obvious to me: permutations are treated too specifically at the highest level of abstraction, requiring case statements to be written on an ad-hoc basis. Often the programmer will try to conceal the problem by pushing complexity into factories. Encapsulating the ugliness into factories actually makes the problem worse, because ultimately nothing meaningful to the problem domain ends up being encapsulated and permutations are still being treated too specifically at the highest level of abstraction! This is where the worst form of OO coupling takes place: instantiating relationships by passing concrete object references. As a result of such a coding maneuver, the constructor needs to consider several layers of STATIC configuration, as well as the context in which the factory was called, and data still on the heap or stack from prior stages. As a consequence, superclasses control which subclass leaves are called, as opposed to modeling subclasses based on specialized properties of the superclass.
It is no coincidence that the same well-known architect who pushed Presentation-Abstraction-Control as a good idea also pushed for frame-oriented programming.
Disagreed! Functional programming in the small, message-passing in the large (which is what I think a lot of people here mean by "objects").
Yes.
However, I'm not sure you *always* need fire-and-forget semantics. Synchronous reactive semantics possess as much clarity, if not more, but would require an "automated truth maintenance system" to make possible as an in-the-large alternative to message-passing.
but isn't occam/csp synchronous by default, and claims to be composable and provable (e.g. fdr tool)?
yup. part of all of this depends on what people really mean when they say objects or object-oriented. which apparently hasn't been resolved ever much at all or at least there continue to be enough different interpretations to cause trouble.
don't adjust objects to match intuition, adjust programmer's attitude to not rely on intuition
People have been saying in the comments that the OOD principles don't work in the real world, and that it's best to differentiate objects in the real world from objects in code. Which I totally agree with (and is the motivation for this post). If we were talking about using C++ to make a large-scale project, I would also think in terms of how best to organize code, rather than trying to represent real things.
But is this a problem with the principle or the implementation? If you were to make a new language that reinvents OOPL, what would you do differently than C++/Java?
btw, the links are great =D.
But is this a problem with the principle or the implementation?
Perhaps it's a problem with particular approaches to programming. I'm guilty of some bad class design myself that came from too much 'noun extraction'. The best designs come after a first look at the core problem to be solved, independent of any object-oriented/functional/logical perspective.
Next, I try to work out the concepts/abstractions and find some general properties which I would like users of my API to rely on and try to enforce as many in the type system as possible.
Ideally, that would dictate most of the design.
If you were to make a new language that reinvents OOPL, what would you do differently than C++/Java?
This won't nearly reinvent OOPL, but one thing that would immediately come to mind is better support for delegation. In many cases, I've found delegation to be a better choice than inheritance.
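A small Java sketch of what that can look like (the Logger and Writer names are illustrative): the object forwards to a delegate it holds rather than inheriting from one, so the delegate can be swapped at runtime and only the intended protocol is exposed.

```java
// Delegation instead of inheritance: the Logger *forwards* to a writer
// it holds, rather than inheriting from one. The delegate is replaceable
// at runtime, and the Logger exposes only the protocol it means to.
interface Writer {
    void write(String s);
}

public class Logger {
    private Writer delegate;                         // delegation target

    Logger(Writer delegate) { this.delegate = delegate; }

    void setWriter(Writer w) { this.delegate = w; }  // re-wire at runtime

    void log(String msg) {
        delegate.write("[log] " + msg);              // forward, with added behavior
    }

    public static void main(String[] args) {
        StringBuilder sink = new StringBuilder();
        Logger log = new Logger(s -> sink.append(s));
        log.log("hello");
        System.out.println(sink);
    }
}
```

With inheritance, Logger would be permanently wired to one writer class and would expose its entire inherited surface; with delegation, neither problem arises.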
I'm a bit in a hurry now. So just to ask once more: did I understand you correctly that you wish to figure out if one could build an OOPL in which objects behave more like real-world objects?
I thought the link was pretty lame as it almost entirely focused on newbie and knee-jerk mistakes. The good idea behind OOP is not this kind of jumps-out-at-you intuition, but rather the intuition that comes from trying to model things and relationships exactly as they are.
IMO, that OOP tenet is good. You can't model everything, but that should be the goal, the default. Not modeling some aspect of the real world so that a simplifying assumption can be made is where the real design decisions are. IMO the main failings of OOP languages occur when they don't allow a faithful model of the situation.
IMO the failing is when people attempt to model a situation.
... except if they're writing a simulator. You can sometimes get away with that, even though it doesn't allow for backtracking and trying multiple paths and all the other niceties of constructing a dataflow-logic approach.
OOP is best used to model the program, not the domain.
...even though it doesn't allow for backtracking and trying multiple paths at once and all the other niceties of constructing a dataflow-logic approach.
Why would modeling the situation prevent this? I don't have much experience with constraint systems (just unification type solvers), but my feeling is that explicit modeling of what's being governed by the constraint solver is typically better than having constraint resolution as the general execution model. You can certainly model explicit dataflow and do back-tracking search though. The fact that OOP tends to not model time/mutation explicitly is one of its failings.
OOP is best used to model the program, not the domain.
In the beginning, you have a domain and no program, so this seems question-begging to me.
Why would modeling the situation prevent this?
[...] my feeling is that explicit modeling of what's being governed by the constraint solver is typically better than having constraint resolution as the general execution model
You misunderstood; in no way did I intend to suggest you shouldn't model the domain/situation for the purpose of simulation. What I intended to suggest is that you can do better than OOP for modeling the domain/situation for that purpose.
If the domain-objects are modeled in OOP, it is difficult to subject them arbitrarily to different views, to copy them for divergence and backtracking, etc. Such operations as 'copying' get replaced by sending a message to the object and asking it to clone itself, which is significant semantic noise and doesn't readily support automatic optimizations.
More relevantly, the events being simulated in OOP are generally not reversible, and the issues such as what happens for a given event is encapsulated. The former prevents backward chaining when asking questions of a model, and the latter makes it difficult to tweak the model as a whole in order to modify the simulation behavior (e.g. a tweak to an axiom of the model, or adjust one's confidence in an observation).
In summary: OOP does far better with modeling programs than the domain because it is far easier to work with invariants introduced by programmatic construction than it is to distill invariants out of real-world observations. Further, "domain objects" are just one way of grouping and viewing domain data, and OOP encapsulation tends to resist logical transformations for viewing/grouping data. Finally, all modeling techniques are ultimately related to prediction and analysis, and most of these techniques require branching; the encapsulation and 'command/message' orientation of objects is contradictory to this purpose and requires one to work painfully around the OOP language rather than with it. These forces greatly resist effective use of OOP in domain/situation modeling.
If OOP is used instead to model the modeling system - i.e. to model a system of axioms and confidence rules, forward and backwards chaining, etc. (or, as I said above, to model the program) - the end product will (in general) be more flexible and more modular, have more reusable components, and expose more invariants for its classes. The end result of doing so is essentially a new language for constructing programs on-the-fly. That - configurable modularity - is the strength of OOP. (Admittedly, OOP could do even better with a dependency-injection / object-graph configuration language component.)
You can certainly model explicit dataflow and do back-tracking search though. The fact that OOP tends to not model time/mutation explicitly is one of its failings.
Modeling dataflow and backtracking is modeling the program. That's something OOP does (moderately) well.
OOP is best used to model the program, not the domain.
In the beginning, you have a domain and no program, so this seems question-begging to me.
OOP is best used to model the program, not the domain.
In the beginning, you have a domain, a set of user-stories, a (presumably OOP) programming language, and no program. E.g. the domain is tax reporting, and the user-stories include the automated detection of alarming reports. Modeling the domain - i.e. creating 'tax report' objects - is unnecessary to accomplish this user-story. Modeling the user-story also isn't necessary.
Using OOP, I would suggest modeling only the dataflow (data access and processing) for the tax reports, the rules/heuristics that raise the alarms, and the data-management and data-fusion necessary to support those rules/heuristics.
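As a rough, hypothetical sketch of what this approach might look like (all names and rules are invented for illustration): the tax reports stay plain data rather than a `TaxReport` class hierarchy, and the program instead models the alarm rules and the dataflow that streams reports through them.

```python
# Tax reports are plain records; the program models rules and dataflow,
# not a domain-object hierarchy. All names/rules here are illustrative.

def income_deduction_mismatch(report):
    # Illustrative heuristic: deductions exceed half of a positive income.
    return report["income"] > 0 and report["deductions"] > 0.5 * report["income"]

def negative_income(report):
    return report["income"] < 0

RULES = [income_deduction_mismatch, negative_income]

def alarms(reports, rules=RULES):
    """Dataflow step: stream reports through the rules, yield alarms."""
    for r in reports:
        for rule in rules:
            if rule(r):
                yield (r["id"], rule.__name__)

reports = [
    {"id": 1, "income": 50000, "deductions": 30000},
    {"id": 2, "income": -10, "deductions": 0},
]
print(list(alarms(reports)))
# prints [(1, 'income_deduction_mismatch'), (2, 'negative_income')]
```

New heuristics are added by appending a function to the rule list; nothing about the report data itself needs to change.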
I agree with much of what you've written. Also, there is a difference between what idioms current OOP languages support and the idioms they encourage. You can certainly model almost anything with a sufficiently heavyweight encoding, and some of my comments were directed at what's encouraged, not at what's possible.
Modeling the domain - i.e. creating 'tax report' objects - is unnecessary to accomplish this user-story.
Hmmm, you haven't described the problem in much detail, but from what you've said I don't see why you wouldn't want to model 'tax reports' explicitly except to work around poor modeling faculties of whatever OOP system you're using.
You seem to be imagining that merely having an OOP system with better 'modeling' facilities would address the problem.
I think the contrary. The poor modeling facilities of OOP are caused by essential properties of OOP. These essential properties include encapsulation and message-passing polymorphism. These are the same properties that make OOP powerful for configurable modularity, scalability, and object capability security. Attempting to blend modeling support into an OOP system will necessarily result in a language that lacks the essential properties of OOP.
Since I believe this, I suspect that any blended hybrid aiming to improve modeling in OOP will have the worst of both the modularity and the modeling, and very few of the advantages of either. E.g. adding multi-methods gives one an alternative to message-passing polymorphism that is better for specializing operations and modeling rules, but destroys the distributed-scalability and damages the 'runtime' fraction of modularity in OOP, and is still inflexible for ad-hoc queries and cross-cutting relationships when compared to dedicated modeling paradigms.
In PL papers, I see people coming repeatedly to the same conclusion: that layering, not blending, is the answer that achieves combined features in both functional and non-functional properties, and that the main cost of achieving this without loss of performance is in taking advantage of layer invariants for cross-layer optimizations in the compiler. I hypothesize from this that, rather than improving the modeling facilities in OOP, we would do slightly better to sandwich OOP between higher-layer configuration languages for interactive modeling (e.g. using objects for data-fusion, subscriptions, rules) and lower layers for the non-interactive data-processing (e.g. pure functions and values, maybe support for relation-values and micro-queries).
But layering has its own cost: greater complexity.
There is a difference between what idioms current OOP languages support and the idioms they encourage. You can certainly model almost anything with a sufficiently heavyweight encoding, and some of my comments were directed at what's encouraged, not at what's possible.
True, but I believe that what's encouraged, and especially the focus on simulations (animals, vehicles, etc.) in texts teaching OO, is often something that should be discouraged. As noted in the first comment, the path from 'newb to godhood' is to learn that one shouldn't even try modeling objects from the real world as objects in OO. They'll eventually get where they want to be, but the process will be filled with pitfalls, discontinuity spikes, and revolutionary changes to their program architecture.
OOP was originally designed for simulations (thus "Simula"). It's bad at them (compared to logic languages), but simulations still greatly influence the education of OOP. This should be undone; we should teach OOP simulations last, if at all. No ducks-go-quack, no cows-go-moo, no animals that 'speak()'. Functor objects, dataflows, signals, events, dependency-injection, data collection management and database integration, and other components of programs should be taught. One doesn't even need the 'visitor pattern' or multi-methods for these things.
People should walk up to a problem dealing with tax reports and think: 'well, "tax report" is a domain object. Thus, I almost certainly don't want a 'virtual' tax report object. If I have one at all, it will be a concrete value-object class used for input or output purposes.'
What they shouldn't do is think: "hey, well I have tax reports so first thing I need to do is add a virtual root for reports, then subclass that for tax reports, and so on." A person who can't see any reason they wouldn't want to do this is likely a person who hasn't learned the hard way that OOP is bad for domain modeling.
I have a feeling, however, that no significant changes in education will occur before we move away from the word 'objects' entirely, possibly in favor of 'actors' or 'first-class processes'.
I'm not defending OOP. I don't like OOP. I don't think message passing everywhere is natural. While it gives lots of freedom to replace methods, it scatters related code and makes it hard to know what the semantic requirements of a given method are, and thus makes it hard to replace methods correctly. If the messages are mutating objects as they get passed around, things are even worse. I think the natural unit for encapsulation is something like a module, not an object. Whether or not there should be code dedicated to reports in general depends on the situation, but you shouldn't be able to code yourself into a corner either way.
I suspect that any blended hybrid aiming to improve modeling in OOP will have the worst of both the modularity and modeling, and very few advantages of both.
The hybrid I prefer looks much more like functional programming than OOP, and to my thinking has better modularity and modeling properties than OOP, which isn't saying too much.
The idea of layering is interesting, but I don't think it needs explicit language support. Using modules, you should be able to plug things together at an appropriate level of abstraction without changing languages.
I vehemently disagree! To say a language has a problem supporting top-down design merely by virtue of it not modeling the world is to ignore the vast multitude of top-down designs that do not model the world. Among these are top-down designs based on modeling the program or service in terms of processing requirements, observations, dataflow, data fusion, rules, hooks, command distribution, etc.
Further, since the vast majority of programmatic tasks are not world-simulators, I say it should be a false start to model the world in the vast majority of cases. Ideally, it should also be obviously a false start, so that people don't bother with it.
The idea of layering is interesting, but I don't think it needs explicit language support. Using modules, you should be able to plug things together at an appropriate level of abstraction without changing languages.
Layering of tasks can be performed through careful library design. Self-discipline and some care allow one to write 'pure' functions even in languages with meta-object protocols, like Ruby.
What language support gives you is more invariants, with the lower layers having more invariants than the higher layers. Nice properties to make invariant in lower layers include: guaranteed termination, determinism, confinement, capability, evaluation-order independence, freedom from certain side-effects, freedom from certain errors, freedom from deadlocks, etc.
These invariants in turn make the language easier to reason about, especially with regards to black-box composition. This allows programmers greater confidence in the result of composition (e.g. in terms of security, deadlock, partial failure and recovery) without requiring they become informed of implementation details of the modular components. Invariants also make a language easier for an automated optimizer to reason about, with the obvious (potential) benefits.
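To make the layering idea concrete, here is a tiny illustrative sketch (invented for this comment, and enforced only by convention since Python has no layer support; the point of language support is that such invariants could be checked rather than merely promised): a lower layer of pure, deterministic functions, composed by an upper layer that owns all side effects.

```python
# Layering by convention: Python cannot enforce these invariants;
# language-level layering would. All names here are illustrative.

# --- Lower layer: pure, deterministic, terminating functions only. ---
def normalize(record):
    # Pure: builds a new dict with lower-cased keys, no mutation or I/O.
    return {k.lower(): v for k, v in record.items()}

def total(records, field):
    return sum(r[field] for r in records)

# --- Upper layer: coordination and side effects live here. ---
def report_totals(raw_records, field, emit=print):
    rows = [normalize(r) for r in raw_records]
    emit(f"{field} total: {total(rows, field)}")

report_totals([{"Amount": 3}, {"AMOUNT": 4}], "amount")
# prints "amount total: 7"
```

Because the lower layer is pure, it can be tested, memoized, reordered, or parallelized without consulting the upper layer - exactly the kind of black-box reasoning the invariants are meant to buy.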
I also believe (but cannot confirm) that layers make it easier for programmers to determine where given a feature 'should' be added, essentially providing a rubric by which to judge features (in terms of real programmatic properties, such as invariants), and also forces them to think about how (and whether) features (such as a 'tax report' type) will fit in with the other features. This should help them get off to a correct start, i.e. making the 'obvious' way to do it also be a correct one.
Lazy, functional programming at the bottom gives a few nice invariants at that layer, but fails to scale once we start piping information or carrying dialogs through a large number of functions, callbacks, and plug-in modules. In-the-large, we need invariants related to communication, coordination, concurrency, dataflow, and security. FP doesn't offer that.
dmbarbour, you are fun to listen to. do you have a blog?
No.
z-bo's blog?
We have an internal corporate tumble blog at work where we post links to interesting data visualizations and UI toolkits. It's very superficial and not that interesting.
However, I want to write some articles in the future about how to review really large APIs and make quick judgments on them, and also explain things such as what questions you should ask beyond what the vendor's marketing material tells you. In particular, .NET 3.0 is an interesting case study. Despite the fact that it is "too big to re-implement", it features the best UI toolkit to date. So I can't just say "the design is bad", because, really, compared to what? Sometimes you have to say, "the design encourages bad practices you must fight off with these practices I use".
I suspect that any blended hybrid aiming to improve modeling in OOP will have the worst of both the modularity and modeling, and very few advantages of both.
Do you know of Naked objects? It seems like a successful modeling system (and they are actually modeling 'tax reports') but it retains interesting OOP properties, encapsulation and code sharing among them.
Update:
I do not believe that modeling 'tax reports' in OOP is something that we would 'want' to do. Ever.
A very insightful statement.
(Naked objects previously on LtU.)
Just wondering...
Have you read Richard Pawson's book Naked Objects?
I have, and it basically contains no content whatsoever.
It's very colorful, though! And, wow, that book's binding is sturdy.
Well, it's a manager's book, so my teasing is somewhat unnecessary.
Bottom line: Naked Objects doesn't go nearly far enough, because it doesn't address event-driven concerns in most enterprises. A complementary manager-level book would be The Power of Events by David Luckham (Rational Software co-founder and Stanford University professor). The UIs NakedObjects creates are not nearly as workflow oriented as they claim to be, because they are not context-aware and do not focus the user on tasks. However, this is a criticism of just about all UIs. Yet, if NakedObjects were truly object-oriented, then it would be trivial to display a UI as a viewpoint on some object, and that viewpoint would be based on some context of what's going on in the system.
Basically, Naked Objects always struck me as characteristic of a cynic: knowing the price of everything, but the value of nothing. The authors chide Ivar Jacobson's methodology but don't really provide an equivalent methodology of their own.
So what does Naked Objects do well, that I do similarly?
I do know of Naked Objects. Naked Objects exposes a user interface to a system via automatic transformation (e.g. to HTML). It has some advantages over a hand-built application model, but can be improved in a number of ways (including dataflow, security, composition, flexibility).
Relevant to this topic, the Naked Objects do not much benefit from having a 'tax report' hierarchy, inheritance, et cetera. The system needs to provide certain forms to the user for create, read, update, delete, and ideally some liveness properties in a concurrent system (to see changes by others). These forms are an interface to the system. Modeling each form as an object without any inheritance, using a facet-based approach to interface the form with the system is at least as effective... and can share just as much code.
Even better if you don't need to name a specific object for each form... just provide an optionally named, organized collection of facets (an object graph of simple IO channels) to which a user needs access, and an automatic translation from "set of facets" to "display document". That's far more composable, far less rigid, and readily supports fine-grained security. One can create new displays by recomposing facet sets without the pain of creating a class or prototype just for that facet set. There are well-defined properties for the union or intersection of two documents. With 'labeled' (role-based) facets, one can also define documents as overrides, combinations, fallbacks, sequential documents, and parallel documents. Add a little extra support for describing and subscribing to form display properties, and ideally the whole facet-set (e.g. via optional inheritance/embedding/linking of other, named facet-sets), in a functional reactive manner, and you'll have what I'd begin to consider a decent UI framework.
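A toy sketch may help make the facet-set idea concrete (everything here is invented for illustration; real facet systems would carry security and subscription machinery too): a "form" is just a collection of facets, and the display document is derived from it automatically, so composing two displays is list concatenation rather than a new class.

```python
# Hypothetical facet-set sketch: facets are simple channels into the
# system; a display document is computed from a facet set automatically.

def read_facet(label, getter):
    return {"label": label, "kind": "read", "get": getter}

def write_facet(label, setter):
    return {"label": label, "kind": "write", "set": setter}

def render(facets):
    """Automatic translation from a facet set to a display document."""
    lines = []
    for f in facets:
        if f["kind"] == "read":
            lines.append(f"{f['label']}: {f['get']()}")
        else:
            lines.append(f"{f['label']}: <input>")
    return "\n".join(lines)

state = {"status": "filed"}
view = [read_facet("Status", lambda: state["status"])]
edit = [write_facet("Amount", lambda v: state.update(amount=v))]

# Composing two displays is just union of facet sets; no class per form.
print(render(view + edit))
```

Note there is no `Form` class, let alone a form taxonomy: new displays come from recombining facets, and withholding a facet from the set is what fine-grained access control would look like.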
An OO taxonomy for specific forms simply isn't very useful. Such a taxonomy is rigid, inflexible, will slowly petrify your project. It doesn't provide nearly as much code-reuse as you might think... not nearly as much as various alternatives, at least. You'll get far more reuse out of functor objects, "UI facet sets" with automatic layout, etc.. Bertrand Meyer says that OO is twice removed from reality. Do not model forms. Model models of forms: buttons, text areas, animations and transitions, and so on; then compose the form.
The Naked Object variation on that advice would be to not model form-objects (a specific tax-form object), but rather to model models of system interface (read-only views of state and message streams, stateful properties (policies, goals, switches, checkboxes), message interfaces, etc.), then compose these, then automatically translate the resulting compositions into forms in a display language. This still ensures the system interface and the user-interface are 1:1, which is one of the big selling points of Naked Objects, without losing composability and fine-grained security and such. (Might still not produce a really nice UI, though. And the Naked Objects Framework doesn't support this, at least not as of the last time I read about it.)
Actually, not true at all.
Usually we have lots of programs and no domain. When the domain comes, we use programs to support the new domain.
Objects allow you to substitute entire problem domains easily. Usually, however, we don't call these objects simply objects. We use terms such as software factories, components and frameworks.
Regardless of the domain, evolutionary computing has always been the main thrust of OO computing. Its roots trace back to one of its founders, Alan Kay, who possessed a background in biological systems.
Usually we have lots of programs and no domain. When the domain comes, we use programs to support the new domain.
I think we're playing word games at this point. The comment I was responding to is still question-begging.
We use terms such as software factories, components and frameworks... evolution... Alan Kay... biology...
One of the main problems with mainstream OOP languages is their poor ability to create frameworks supporting reuse, and IMO weak metaphors to biology and evolution etc. don't help the situation.
I don't play word games. It seems you want to just say "OO has a poor ability to do so and so" and aren't pleased that I'm defending the way I build applications.
[Edit: If you'll tilt your head just a little, you'll note I'm suggesting that very rarely do I start projects with a blank canvas with an infinite continuum of possibilities. I start with many choices already made, and I know why those choices were made. When I come across an unsuitable problem domain for these choices, I will likely have an increase in development costs, but I can price my estimates accordingly.]
I have pretty high reuse, although I think that is a stupid way to measure scalability. We should measure complexity curves. As the system gets larger and larger, how strong are the arches in the architecture as they stretch longer and longer?
Scalability is when you can wiggle something in the front-end and immediately tell me what just wiggled in the back-end, and why. Scalability is the removal of what Brooks calls accidental complexity and also the removal of what Pontus Johnson calls artificial consistency. To do this, you need a stable structure of your application domain. As a rule of thumb, your gross application structure should mirror your problem space.
I wouldn't place the blame on mainstream OOP languages. I'd point the finger inward. The mainstream folks just lack accountability. That is why methodologies like Domain-Driven Design work so well: they add lots of artificial consistency such as "Aggregate Roots", but give people a tradition and a set of rules to follow, such that they don't have to be accountable for thinking new thoughts.
I don't play word games.
I probably could have phrased this more tactfully, but you responded to the words I used rather than the point I made. The suggestion that we "use OOP to model the program" is circular - modeling is how we get that program.
It seems you want to just say "OO has a poor ability to do so and so" and aren't pleased that I'm defending the way I build applications.
I hadn't noticed any defense, so no, it didn't upset me. I've actually mostly agreed with much of the sentiment of your posts in this thread, including this last one. But much of it is pretty nebulous, which is also why I dropped out of the thread with David - it wasn't clear to me what exactly we were talking about.
My complaints about mainstream OO languages stem from my belief in domain modeling. When I do an analysis of the structure of a situation and then note that OO languages don't offer a good model of that structure, I complain.
The suggestion that we "use OOP to model the program" is circular - modeling is how we get that program.
If "program" is the problem-word, feel free to substitute it for "process components", "dataflow components", "messages and transport", etc. OO programs are best constructed by modeling the process in terms of its components. I don't believe calling this "modeling the program" is inappropriate, but I'd rather not battle over definitions.
Inputs from the domain into the process need to be modeled in OOP. Outputs also need to be modeled in OOP. But, the 'domain' itself doesn't need to be and shouldn't be modeled in OOP programs. Even if the goal is simulating the domain - i.e. simulating the weather, or simulating a city, or simulating traffic on a city map under various weather conditions - OOP is rarely a best choice for constructing said model.
My complaints about mainstream OO languages stem from my belief in domain modeling. When I do an analysis of the structure of a situation and then note that OO languages don't offer a good model of that structure, I complain.
Why would programs need to model "the structure of a situation"? Where, outside of iterative prediction and planning systems, does a solution require one?
My complaints about mainstream programming education stem from my belief that programmers are taught to believe in 'domain modeling' even where it is entirely inappropriate (which is anywhere except for simulators/predictors/planners). My disgust with mainstream programming education stems from my realization that programmers are further taught to use ineffective programming tools (such as OOP) to achieve this domain modeling.
That is, they come out of school thinking to use the wrong tool for the wrong job.
It's still not very clear to me what you're advocating in place of modeling.
Why would programs need to model "the structure of a situation"?
In general, to code a solution to a problem, you have to explain what world objects map to the inputs and outputs of your program. In order to correctly process inputs to outputs, the structure of the computation you perform will need to match some structure that's present in your problem domain. Identifying the abstract structures present in your problem domain is what modeling is about. If your language requires cumbersome or ad hoc encodings, that's bad.
In general, to code a solution to a problem, you have to explain what world objects map to the inputs and outputs of your program.
Perhaps. Such explanation likely helps programmers understand how the program fits into the world.
But such explanation does not need to be part of the program. Indeed, it shouldn't be part of the program, since how a program fits into the world around it is not a property of the program itself.
In order to correctly process inputs to outputs, the structure of the computation you perform will need to match some structure that's present in your problem domain.
That isn't true, at least not in general.
Perhaps, because you assume this is true, you see a need for an 'alternative' if domain modeling is eschewed.
In general, there is no need for an alternative. For most programming tasks, the construction of a domain model, description of domain objects, matching some "abstract structure" that humans use to 'understand' the problem domain is a completely wasted effort. In general, the programming task doesn't need to be a domain simulator or any other class of domain prediction engine.
Only when the 'output' is a prediction in the domain, or by extension an informed plan or anything requiring prediction within the domain, is it necessary to capture a 'situation' or 'domain model' within the program. And, in these cases, OOP domain objects is far from the best choice for reasons described above.
My complaints about mainstream OO languages stem from my belief in domain modeling. When I do an analysis of the structure of a situation and then note that OO languages don't offer a good model of that structure, I complain.
You need to give us a real example of a real mistake you made using OO. Then we can correct you.
This is like saying "multivariate calculus cannot model instantaneous change" without understanding basic theorems such as Fubini that allow you to do certain kinds of integration based on the structure of the problem.
I'll respond in a few days when I have time to think through an example.
I thought the link was pretty lame
I considered it lame as well, but the interesting thing for me was that the people criticising OOD for being 'too intuitive' and pointing out beginners' mistakes apparently haven't really gotten it themselves to the point where they should be writing such an article, if they suggest deriving fax from e-mail or professor from student, or define 'abstract class' as 'a class with at least one virtual function'.
Having sat through some OOD classes, I found it the rule rather than the exception that teaching matches real-world objects to OO objects far too often and neglects the more abstract cases that come up in programming.
If something is taught badly, people can't properly use it (of course, that doesn't mean there's no good OO teaching).
won't make you the next Ralph Lauren.
We say it over and over again, but people keep forgetting it in conversation: the best programmers are self-taught.
I've never seen any (non-anecdotal) evidence for this, and countless examples of the opposite.
Where can you go to get a great education on how to be a programmer?
What is an 'example of the opposite', exactly? The best programmers you know being formally schooled? Or, rather, many poor programmers who were not formally schooled?
To unify both possible opposites, I'll say that it's important to be able to think scientifically. Knowing the scientific method, even informally, is way more important than knowing functional programming or object-oriented programming. Richard Feynman has a famous Caltech commencement address titled "Cargo Cult Science" where he explains my feelings better than I ever could.
First, I want to make it clear that I think education is necessary, but not sufficient. But as to where to get an education in how to be a programmer, I'm personally fond of a lot of the things we're doing in the curriculum here. For me, graduate school has been an important part of my training to be a good programmer, which is the case for other people I know.
What is an 'example of the opposite', exactly? The best programmers you know being formally schooled? Or, rather, many poor programmers who were not formally schooled?
Both. Almost every good programmer I know has formal training, and I've been mostly unimpressed with self-taught programmers (including myself, back when I was one).
As for the most important attribute in a programmer, I would say it is discipline. Well after that, the ability to communicate ideas to other people.
We are two different worlds.
I co-wrote a curriculum assessment of my college's CS department during my senior year a few years back, and did curriculum comparisons between our program and other schools' in the northeast.
Self-taught seems to rub you the wrong way, as you are inculcated in an academic lifestyle. Self-taught doesn't mean "no formal school", but it may mean they were a Mechanical Engineering or Philosophy major who switched careers. "Self-directed" may be more apt. Some programmers are just carnivores for information that will make them better programmers. Usually, it is OTJ training.
Where some self-taught programmers go wrong is reading a concert of disconnected blogs or trade press articles, rather than a coherent set of papers or a book that hangs together really well.
The biggest problem with universities is that their curricula are targeted toward accreditation, and can only teach so much tangential knowledge in four years while still meeting ABET requirements. ABET accreditation is a good thing, as compared with no-name we-just-need-accreditation accreditation. However... credit hours are a limited resource.
Things get sacrificed. Students are never told what those things are, because it is in the interest of a university to have students unaware of what they're *not* learning. Self-taught programmers must figure this out for themselves, and it is a very non-linear process. However, self-taught programmers often benefit from learning these skills and ideas through OTJ training, giving them real-world experience and problem complexities that university courses balk at. There are exceptions to this rule, but they are typically subsidized by massive government or private loans - it is the only way to overcome the fact that the best practitioners tend not to be teachers. PLTScheme is a decent example, and so is the freshman programming course Bertrand Meyer has been teaching the past three years. Bertrand has students code against a 150,000 SLOC Eiffel project. Many other NSF "teaching grants" tend to be failures that ill-informed grant supervisors judge as successes.
The alternative, as you point out, is to fork over extra cash for graduate school. However, graduate school is frequently not the best place to learn how to write clear, concise, well-structured, well-designed complex programs. Performance or correctness and proof-of-concept dominate, and as master's and Ph.D. students get closer and closer to the submission deadline, they often make more and more compromises. Moreover, the problems a thesis deals with are typically isolated, and the wise student will not take on an overarching project but rather solve an isolated problem. Most Ph.D. theses solve an isolated problem - I should know, I read about 30 dissertations a year.
Graduate school will perhaps teach you what not to do. However, this is a bit like asking 50 divorced couples for relationship advice, and spurning the one couple that has been happily married for 50 years. The other value of graduate school is that it holds you accountable, ***by pairing you with a problem where you are directly affected by the consequences of your actions***.
Furthermore, more often than not, in the curricula I reviewed, courses were teaching things I most definitely do not want students in college to hear; they should instead get that perspective from me OTJ.
discipline. [...] the ability to communicate ideas to other people.
True, that is why in job interviews we look for somebody with a four year degree (a sign of discipline) and somebody who can distill theory into plain English a four year old can understand. Nothing is more impressive than feeling like you're talking to a four year old with a four year CS degree. A good theoretician is a good practitioner, and a good practitioner is a good theoretician. Actually, too much education or training can be a disservice, as we do not build applications in a mainstream way. We'd need to un-train the person first.
I've heard Alan Kay talk about the importance of teaching mathematics and science to kids, and the use of computers to do it (esp. Squeak), which is something he's been working on for years. He was inspired to get into this work by Seymour Papert, the creator of Logo. Mathematics and science are both ways of thinking that get people to think beyond what's intuitive. Science in particular, and mathematics to a certain extent, help people create mental models that are more reliable than what intuition offers. So yes, I think you have a good point here, that we should not use intuitiveness as our guide to what is the best form of programming.
I think what we should be more focused on is what capabilities a language adds to our understanding, how well it facilitates modeling our ideas. There is an aspect of intuitiveness to this, particularly in a language's representation, but we should not be afraid to try to get beyond intuition if intuitive models are insufficient to get us where we really want to go. I think a mistake we make is that "intuitive" is where we really want to go, and all the crap we have to put up with as a result of that is worth it. A question that needs to be critically examined is, "Is the operating model sufficient to really get at the power of this thing?" Intuitiveness is a barrier that is difficult to get humans to cross. Getting beyond it means we actually have to think and expand what we know. A more positive way of looking at it is we're more capable than we think we are. We just have to work at improving ourselves. The way to motivate people to do that is to show that the effort is worth it.
VPRI is Alan Kay's research organization. Those who follow the VPRI folks closely know that one of their core interests is finding Maxwell's equations for computer science. The idea is that the core ideas to evolutionary computing can, like Maxwell's equations, be printed on a t-shirt and sold for riches. :)
Inventing Fundamental New Computing Technologies
STEPS Toward The Reinvention of Programming
A mentor once told me that "teaching is simply telling a smaller lie each day." Science and math are very much the same way. Even though we know classical mechanics to be wrong, it is a very useful lie to tell before introducing quantum mechanics, QCD and QED, etc. Furthermore, what matters is whether there are real-world problems to which this "new found intuition" applies. As Jef Raskin would've said, what makes it intuit-able?
Because "object" is an abbreviation of "abstract data type".
Somehow I find it very funny that on LtU objects are criticised as 'too unintuitive' whereas arcane concepts such as 'monads' are praised..
A double standard by people who prefer functional programming to object-oriented programming?
Probably.
OK, there are quite a few bad books/examples about OO design with stupid hierarchies, so what?
There are also good books, such as OOSC by Bertrand Meyer..
Aren't these two sides of the same coin, namely that intuitive concepts are bad? ;-)
I think Jef agreed.
I am a bookworm and I basically don't have a favorite OO book, despite having one of the largest programming libraries you can imagine. Actually, my favorite programming book is by Andrei Alexandrescu: Modern C++ Design. I once heard a fellow programmer tell me, "That book is template pornography." I will admit the book places great demands on its reader. However, the material is truly wizard-level programming stuff. Far beyond SICP, On Lisp, HTDP, etc.
Christopher Alexander of patterns fame once asked a student if her design was as good as Chartres. He said we have to insist on such greatness if we are to build things of great significance.
I think that I have extreme prejudices as to what constitutes good use of objects, based on being burned by my own poor designs. However, I've never blamed the language.
I always blame myself, and return to the drawing board insisting on better. I analyze my failures carefully, and try to learn alternative program construction techniques to defeat the werewolves that attack me.
"My library is bigger than your library?" ;-)
[I am actually wondering how many books you have.]
I have no idea. It is a collecting habit I started in college when I asked a professor to buy books for the library, after I saw how badly our collection sucked. After compiling lists of books and reasons why they'd be good additions and getting ignored, I simply decided to build my own library. It started with half my summer earnings being invested, and I've never looked back. I usually buy over 100 books per year, and also read plenty of ebooks, papers and dissertations through ACM membership and free/libre/open content linked on CiteSeer or places like here.
For me it was a collecting habit I had as a teacher (you basically get books for free, if you don't overdo it). Although, I admit I am rather underwhelmed with my library at the moment. Most teachers I know start giving books away at some point.
However, I've never blamed the language. ... I always blame myself
I gather this is common in abuse situations.
That has me laughing out loud, but I agree with the sentiment.
I'm afraid that there is no salvation for you in the world of language-based computation. It is not possible to create flexible objects while we stick to programming languages (symbol-based computation).
The problem is that there are no "real world" objects in the real world, unless you are going to stick to Platonism or its descendants.
"Real world" objects are how we organize data about the real world. They are tags for experience, created by the brain in a culturally dependent way. "The map is not the territory," as it is said. The flexibility of natural languages in dealing with "real world" objects comes from the fact that we can redraw and fix our maps.
But there is no such luck in a PL. In a PL we deal with a map of the world, and we cannot redraw it during execution, because the object boundaries have already been selected. Those pesky programmers are needed to change the program according to real-world feedback, but the program itself deals with a closed-world model (the so-called "open world" models are mostly closed in non-traditional ways).
We can create a map of a map, or even a map of a map of a map. But that just adds rigidity to the system (not a bad thing in some scenarios).
In games, the difference can be seen at its most extreme. A game tries to simulate a world to some degree, and it is not possible to do anything outside the map built by the game's creators. It is not possible to break a wall unless it was made breakable. The map can be quite detailed, but there is a limit to what can be coded, and the limits of the model easily push the player out of the game trance.
On the other side, business applications are more or less honest in recording only important and incomplete data about the world, and if they ask for your birthday, be sure that this information will be used by the marketing department or for some other evil purpose, since every bit of data comes at a cost.
A solution might be possible with neural networks, since they can change behavior based on feedback, but I do not know the topic well. In any case, we do not yet know how to build scalable and reliable solutions based on neural networks.
I've been holding back on my own thoughts on the matter, in hopes that the discussion would lead to interesting points (which it did: Scala and Subtext are interesting, as are the books and links).
But I think that it is possible to make an intuitive PL that models the real world and runs without too much overhead (and which I'm trying to design).
Say that you are programming a toy truck in a toy world. You will need an object Truck, an object for the ground, and an object for location. Certain relationships come up: the truck drives on the ground, locations map to parts of the ground, and the truck is always at some location. We can implement the ground as a 2x2 char array, location as a pair of Ints, and the truck as the letter X. The function drive should take one parameter -- the location that you want it to go to; for the pathfinding, we'll use an A* algorithm (assuming we also have obstacles). Let's say you want to pick up and drop off cargo (implemented as a 'C' character); the algorithm for Delivery(cargo, location) would be: drive(cargo.location); pickup(); drive(location); dropoff().
Now consider programming a controller for a truck in the DARPA challenge. You still have a truck, you still have the ground, you still have location. The truck still drives on the ground, locations still map to the ground, and the truck is still always at some location. You can't implement the A* algorithm, because you don't know exactly what's at each location; in fact, you're not too sure where the truck is either. But the algorithm for delivery is still the same: drive(cargo.location); pickup(); drive(location); dropoff(). How each of those individual functions is implemented may change, but the algorithm does not.
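The truck idea above can be sketched in code: the delivery algorithm written once against an abstract interface, with the toy-world truck as one possible implementation. This is a minimal sketch, not from the original post; all names (Truck, ToyTruck, delivery, etc.) are illustrative, and a DARPA-style controller would implement the same interface with sensor-driven drive/pickup/dropoff.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Location:
    x: int
    y: int

@dataclass
class Cargo:
    location: Location

class Truck(ABC):
    """The relationships the algorithm relies on: a truck is always at
    some location, and it can drive, pick up, and drop off."""
    @abstractmethod
    def drive(self, destination: Location) -> None: ...
    @abstractmethod
    def pickup(self, cargo: Cargo) -> None: ...
    @abstractmethod
    def dropoff(self) -> Optional[Cargo]: ...

def delivery(truck: Truck, cargo: Cargo, destination: Location) -> None:
    # The algorithm stays constant even when the implementations change.
    truck.drive(cargo.location)
    truck.pickup(cargo)
    truck.drive(destination)
    truck.dropoff()

class ToyTruck(Truck):
    """Toy-world implementation: 'driving' just updates coordinates."""
    def __init__(self, location: Location) -> None:
        self.location = location
        self.cargo: Optional[Cargo] = None
    def drive(self, destination: Location) -> None:
        self.location = destination   # A* pathfinding would go here
        if self.cargo is not None:
            self.cargo.location = destination
    def pickup(self, cargo: Cargo) -> None:
        self.cargo = cargo
    def dropoff(self) -> Optional[Cargo]:
        cargo, self.cargo = self.cargo, None
        return cargo
```

The point of the sketch is only that `delivery` never mentions `ToyTruck`: any implementation preserving the same relationships can be substituted.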
I can't think of harder examples off the top of my head, but you mentioned maps. Maps can be defined to be a representation of the relative locations of things in a terrain. A map of New York City has the relative positions of the streets, the museums, etc. In a game, a "map" can contain the relative position of quests and buildings. Even though these two kinds of map are actually very different things (one's a physical object, another's just an image on a computer, stored in 0's and 1's), you can still use a map the same way to find out how to get somewhere.
There are two points that I want to make. First: if the relationships between objects are constant, the algorithms that take advantage of those relationships are constant, as you saw in Delivery(). Actually, the typical definition of a "model" is that the relationships are the same. When we say "implement," we actually create a mapping between what we want to do and some programming construct that has the same properties/relationships.
Second: you don't need to describe the objects/relationships in full to write particular functions. There are plenty of things you can leave out -- if we just implemented the pathfinding algorithm, we wouldn't need to know whether the truck is holding cargo or not. Deciding what to implement and what not to is up to the programmer, but it's conceivable that we could have a library of relationships, each with functions you can use when that relationship holds, so the programmer can pick the ones that fit the object representation he chose.
For some reason, discussion along these lines reminds me of the Taxi Programming Language, which employs an intuitive interface of passenger delivery. (Not that it has any practical application).
I think this thread suffers from a lack of specifics. Knowledge representation and modeling are, to be sure, interesting topics. But I can't connect the dots. I'll be the first to admit that there are serious drawbacks to the current slate of programming languages and design techniques. Lots of interesting languages, with various strengths and weaknesses. I can't help but think that the emphasis on intuition and the real world obscures the fact that we are dealing with symbol manipulation. Any parallel between language and reality is by analogy. And analogies tend to fall apart when you examine them too closely.
I think that the comparison of the problems with OOP to that of ancestry determination in Prolog masks the fact that Prolog can be a pain to use for problems that fall outside of the domain that it happens to excel at. Without degenerating into platitudes, it would be nice to carve out some specifics in terms of syntax and semantics. Personally, I like the direction of Oz in integrating the concepts of logic programming with a multi-paradigm approach. But the relational/declarative features of Oz may not be what you have in mind.
Well, I'm not too sure what to say about your comment about specific examples, I can pull out a real world problem from my field (bioinformatics) and say how I think the relationships lead to reusable code, but the "real"ness of the problem is proportional to the length of the comment, so I dunno if people will be interested enough to analyze it. Let me know if you think it's an interesting issue that is worth looking into, otherwise, you'll have to wait till I'm done w/ the language I have in mind.
You mentioned knowledge representation, my take on it is that it is too focused on trying to use it for reasoning rather than figuring out what we can do with it. Typically you can express everything you want with first-order logic, and if not, higher-order logic (or fuzzy logic if it's not precise). The problem was never with the representation, but with the inference. Sometimes people prefer a KR that is weaker, or one that's incomplete so that it can be used to tell you things that you didn't specify.
I'm all for using inference to figure out more about the world, but when you start using logic to move blocks (STRIPS) it sounds like it is overstepping its role in an intelligent system. I think that's where most of the issues come up -- the quantification/frame/ramification problems become debilitating when you are reasoning about details that could be handled with heuristics.
I read a paper a while ago about an AI system that ran a lot faster and better by reasoning at a higher level rather than planning each step of the state change (I don't have the reference... sorry).
The point with knowledge representation is that it can already represent pretty much everything we want it to represent, but a lot of the attention is on how to use it to make inferences for planners. My take on logic programming is that it uses a particular inference engine to search for the solution we specified with logic. Note that neither of these are using the predicates to represent knowledge about our program to help us write code well.
And analogies tend to fall apart when you examine them too closely.
I would argue that analogies work where the relationships between objects hold, and they fall apart where the relationships no longer hold. A car is like a bicycle in that both can be used to move a person from one location to another (the relationships between car/bike, passenger, and location are the same), so you can use either one for transportation. But they are operated differently (the relationship between a car and its pedals is different from the relationship between a bike and its pedals), so a person who can drive may not know how to ride a bike.
So anyways, I'll definitely share the language on LtU when it's done, since the two main ideas that I have should be novel and interesting (as I've gathered from this thread and the previous one I started). But it might take a while without any help. So here's a shameless request for people to discuss specific implementation ideas / help with design =P.
In OOD, if you have a hammer and a nail, you have two classes Hammer & Nail. Proper design forces you to choose a place to implement the behaviors. This leads to expressions like:
hammer.drive(nail)
hammer.remove(nail)
or
nail.insertWith(hammer)
nail.removeWith(hammer)
The tasteful designer chooses the first set, since it seems more natural. Natural until the next day, when there's no hammer, but a brad/nail driver and a pair of pliers. Now we have:
nail.insertWith(brad_driver)
nail.removeWith(pliers)
A functional approach (like you use above w/ drive()) requires no such choice, only that the implementations exist somewhere.
drive(nail, hammer)
remove(nail, hammer)
drive(nail, brad_driver)
remove(nail, pliers)
In essence, the act of driving a nail into a wall w/ a hammer (satisfying as it may be) is no more a property of the hammer, than the nail.
I think we've all reached the consensus that the objects in current OOPLs aren't compatible with real-world objects.
I wouldn't necessarily consider that a functional approach. Though I agree that the traditional OO problem you describe sucks and would be better dealt with via multi-methods as you describe.
I don't think he was suggesting multimethods; I think he was suggesting that whatever the implementations of nail, hammer, etc., they must have some minimal knowledge of each other's implementations. The abstract interface can be specified separately, but the implementations must be provided together, or tied to each other somehow, in order that the functions can actually compute something meaningful.
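For concreteness, here is a minimal sketch of the multimethod reading of drive(nail, hammer): dispatch on the types of *all* the arguments, so neither Hammer nor Nail has to "own" the behavior. This is illustrative only -- the dispatch table and every name in it are made up for this sketch, not taken from any library.

```python
# A hand-rolled multimethod table: implementations are registered
# against (operation, argument-types) pairs and looked up at call time.
_impls = {}

def defmethod(op, *types):
    """Decorator: register an implementation of `op` for these types."""
    def register(fn):
        _impls[(op, types)] = fn
        return fn
    return register

def dispatch(op, *args):
    """Look up the implementation matching the runtime argument types."""
    fn = _impls.get((op, tuple(type(a) for a in args)))
    if fn is None:
        raise TypeError(f"no method for {op} on {args!r}")
    return fn(*args)

class Nail: ...
class Hammer: ...
class BradDriver: ...
class Pliers: ...

@defmethod("drive", Nail, Hammer)
def _(nail, hammer):
    return "nail driven with hammer"

@defmethod("drive", Nail, BradDriver)
def _(nail, driver):
    return "nail driven with brad driver"

@defmethod("remove", Nail, Pliers)
def _(nail, pliers):
    return "nail removed with pliers"
```

A call like `dispatch("drive", Nail(), Hammer())` picks the right implementation without either class owning the method; adding a new tool means adding a registration, not editing Nail.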
I disagree with your example. Specifically, when you have a nail you are removing with a hammer, you are removing it from something.
This is analogous to any example where you are removing an element from a collection with a driver.
Therefore, it'd still be object oriented even in your functional example, you merely left out the object, wood.drive(nail,hammer).
I really don't see why you'd need any sort of dynamism on the part of the nail or the hammer in this case. I don't think you'd ever want to call remove on the nail or removewith on the hammer, because an element (when not a nail) can potentially be a member of multiple collections and so by doing it you would have to maintain those collections in the wrong objects beneath the wrong abstractions.
An actor/entity (not necessarily 'you') is removing a nail from the wood with a hammer in a context/environment.
Influence from actor/entity includes joint rotations, balance, mechanics, etc. E.g. two different robots would often have two different requirements here.
Influence from the context/environment includes laws, constraints, requirements. E.g. one might need to change hammer-swinging behavior in a busy airport, or based on traffic in the immediate vicinity.
Neither the functional nor OOD approaches are very good for this problem. Due to the expression problem, neither may be adapted readily to account for new rules and regulations from the context and different constraints from the actor. As with domain model and planning systems in general, some sort of logic programming or declarative meta-programming is far more appropriate.
I'm not convinced meta-programming is necessary. I feel this is really just a simple software engineering problem.
For example, upon further reflection I now sort-of feel that the wood.remove(nail,hammer based parameters (force, friction ... etc)) would also be called by the hammer.
Another actor, person, would call hammer.remove(nail,wood, person based parameters (person's force.. etc)).
wood already understands the context it exists within, so that isn't necessary in this example.
I'm not convinced that there is an expression problem here, or that new rules and regulations will really influence this framework, but I'm willing to see a further example or further elaborations.
The wood.remove(nail,hammer based parameters (force, friction ... etc)) would be called by the hammer. The person would call hammer.remove(nail,wood, person based parameters (person's force.. etc))..
Given any set of costs and constraints, it becomes very difficult to know exactly which data will feed into any action. How much noise are you allowed to make (context)? Does the wood need to remain intact? What about the nail? Safety considerations? What are the current physical capabilities of the robot - i.e. how can it actuate its arm? how maneuverable is the platform? In time vs. fuel/energy cost, which needs to be conserved more? The problem becomes considerably harder when dealing with dynamic changes in capability, dynamic changes in policy, and unknowns. For example, a robot's wheel could be damaged as it moves to complete the mission, or perhaps several children have some probability of playing around near the nail that varies with the time of day.
The number of variables that feed into a planner for real world behaviors, or even a reasonable simulation of them, is enough to fill a database. This is true even when all you are doing is using a hammer to remove a distinct nail from a clearly identified piece of wood and you're starting right there in front of it.
Real life makes things even more complicated by introducing needs for recognition, positioning, approach and angle management, and so on. For a typical robot, removing a nail from a piece of wood will probably require path planning to decide how to approach the nail in a manner that will allow a robotic arm to effectively angle a claw-hammer to remove said nail.
I'm not convinced meta-programming is necessary.
It isn't necessary. One could go straight for logic programming with side-effects.
As I understand it, the primary difference between 'declarative meta-programming' and straight-up logic programming is that in declarative meta-programming you have an intermediate executable 'plan' construct that can be saved or compiled, allowing for staged processing. Both approaches allow for re-planning on the fly (and a plan can include contingencies and making different plans later, allowing feedback).
Anyhow, while meta-programming isn't necessary, getting away from OOD - in particular OOD using domain-objects or domain-based classes - is pretty darn well necessary. Unlike logic programming, OOD is not readily extensible to deal with new concerns as programmers become aware of them.
Alan Kay once said in a talk that object-oriented programming ended up being something he didn't intend when he originally coined the term. He said something to the effect of "It's not about the objects. It's all about the goo that goes between objects," and I tend to agree.
Smalltalk does indeed capture that spirit to a good extent, in spite of the hammer/nail design asymmetry. Messages have their own existence in Smalltalk, apart from the code that gets run when messages get sent to an object. Comparing that with the functional style, it is kind of the difference between the specification of a function invocation and the code that the invocation eventually runs.
The robots are extremely inflexible. In industrial settings the environment is organized so that the robots can work; the factory shop becomes one big modular bench.
And a human herder is still needed to oversee the work, since the robots fail to adapt from time to time despite the specially organized environment.
The reason is that symbolic computations are unable to leave their source model: once we start manipulating symbols, we have already lost reality. The inability to go from symbols back to reality and modify the symbols cannot be overcome in running OOD programs, since OOD assumes that the problem is the selection of the correct symbols (and no model is correct everywhere).
If you look at Paul Graham's essays on Lisp and Arc, you see that the motivation is to create programs whose model can be changed on the fly. His point seems to be that since every living program has a rigid model that cannot be changed by the program itself, the programming environment should facilitate the change of this rigid model being done by the programmer. So instead of adaptable software, we get a flexible and adaptable software+wetware complex. And I also think that this is the only way if we stick with symbolic computation (which is inherently rigid by itself). The question is what tools we will use to create this adaptable complex, and what the unit of change should be.
And I think that we will stick with symbolic computation for a long time (except for boundary cases), since it is the only way we currently have that allows us to understand what we are doing.
The funny thing is that this happens to humans in society as well. Humans start manipulating only symbols, and their behavior becomes as inflexible as a robot's as a result. They lose the ability to alter the meanings of symbols and to create new ones (look at legal systems for a good example: they live in a world of symbols, and they have problems leaving that world even when symbol-based inference does not match the situation well). Zen, Daoism, many schools of Yoga, NLP and friends try to reverse this trend for their followers (but they use such mystical language)..
If you want to account for feedback, then the best would be to define object properties that can be altered by external objects. Then the behaviour of the object will depend on these properties. For example: the 'noise' perceived by the robot when hammering, the 'visual impression' left by wood chipping, etc.
You need concurrency-safe mutability or dataflow to handle these things in a program, unless you are ready to restart the whole computation for each sample. Which gets me to think, isn't dataflow programming really a whole mess of small programs tied together? Only they focus more on propagating a message (state change) rather than implementing a behaviour (OOP).
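That "small programs propagating state changes" framing can be sketched concretely. The following is a toy push-based dataflow cell, purely illustrative (the names Cell and lift are made up here, not from any framework): setting an input propagates through dependent cells instead of invoking behaviour on an object.

```python
# A push-based dataflow cell: changes propagate to derived cells.
class Cell:
    def __init__(self, value=None):
        self._value = value
        self._listeners = []

    @property
    def value(self):
        return self._value

    def set(self, value):
        # Propagate only on actual change, to avoid redundant updates.
        if value != self._value:
            self._value = value
            for listener in self._listeners:
                listener()

    def subscribe(self, listener):
        self._listeners.append(listener)

def lift(fn, *inputs):
    """Derive a cell whose value is fn(*inputs), recomputed whenever
    any input cell changes."""
    out = Cell()
    def recompute():
        out.set(fn(*(c.value for c in inputs)))
    for c in inputs:
        c.subscribe(recompute)
    recompute()   # establish the initial value
    return out
```

With something like this, the hammering example's 'noise' could be a derived cell over force and material cells: updating an input re-propagates automatically, which is exactly the "message propagation rather than behaviour" flavour.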
Which gets me to think, isn't dataflow programming really a whole mess of small programs tied together? Only they focus more on propagating a message (state change) rather than implementing a behaviour (OOP).
Isn't OOP really a whole bunch of small programs tied together? Only it focuses more on configuring the application (late-binding, modularity) than implementing the steps (imperative)?
>;-)
Dataflow is far too broad a subject (too many different models) for me to actually answer your question. But propagation of changes over time is at the heart of it. And we should be careful about how we frame that statement, because 'just' is a dangerous word. The difference between lazy and strict evaluation is 'just' a small tweak in the interpreter, right?
to account for feedback, then the best would be to define object properties that can be altered by external objects
The problem is more difficult than simple feedback because we really want the anticipated profile to influence behavior before we swing the hammer. Additionally, it is unclear in OO where the 'rules' for computing, say, the noise profile would go - it is influenced by so many things: type and position of wood, width of nail, shape of room.
This is a class of problems that has interested me for many years, but I've no viable solution for it, despite recent attempts. (I wouldn't consider a solution 'viable' unless it maintains security constraints, scalability properties, stability, and controllable performance, and at least doesn't prevent real-time properties for targeted end-to-end reaction paths.)
The approach you outlined strongly reminds me of the concept-based (generic) programming school as popularised by Alexander Stepanov in the C++ community. Did you look into evolving from there? Is there a significant difference compared to your approach?
I think the motivation is the same -- capture the essence of the algorithm for modularity and reusability. But I'm not convinced that OO languages these days can express these concepts well.
One key feature that I want to implement is a programming library search, where you will specify the description of the object and it will automatically find all functions that you can use with this object. With templates, you can apply it to any object, even ones that wouldn't make sense to apply it to. With inheritance, you can only apply it to derived classes.
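The library-search idea above could look something like the following toy sketch: each function declares the capabilities its argument must have, and the search returns every function whose requirements the object meets. Everything here (requires, usable_with, the sample functions) is hypothetical, invented for illustration.

```python
# Toy "library search": match an object against the attributes each
# function declares it needs. Only works for functions decorated
# with @requires, since they carry a _requires set.
def requires(*attrs):
    """Decorator: record the attributes a function's argument must have."""
    def mark(fn):
        fn._requires = frozenset(attrs)
        return fn
    return mark

def usable_with(obj, library):
    """Return every function in `library` whose requirements `obj` meets."""
    return [fn for fn in library
            if all(hasattr(obj, a) for a in fn._requires)]

@requires("location", "drive")
def plan_route(vehicle):
    """Works for anything with a location that can drive."""

@requires("cargo")
def weigh_cargo(carrier):
    """Works only for things that hold cargo."""

class Bicycle:
    location = (0, 0)
    def drive(self, destination): ...
```

Here a Bicycle matches plan_route but not weigh_cargo. Unlike templates, the match is checked against the object's actual capabilities; unlike inheritance, the Bicycle author never had to name any interface.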
With templates, you can apply it to any object, even ones that wouldn't make sense to apply it to.
That's what C++0x concepts address. They are the interfaces of generic programming in a way. They are much closer to Haskell's type classes than to the mainstream notion of OO interfaces, though.
Ralf Lämmel's paper "Software Extension and Integration with Type Classes" examines Haskell's type classes (a lot carries over to concepts) in the context of software extension. He does some comparison to alternative (particularly OO) approaches. You will be interested in at least sections 2.3 (Tyranny of the dominant decomposition) and 2.2/4.1 (Retroactive interface implementation).
I think these two are related to both your original question, and to your goals of reusing algorithms. It is not natural at all if an object can't be reused for a task it is suitable for just because the creator of its class didn't think of that use when writing the class (in the case of custom interfaces, he probably couldn't even know of their existence). In a language emphasising genericity similar to STL, I'd rather not want to resort to design patterns to make objects work with my algorithms.
Lots of stuff to read =D.
In a totally screwed sense one could suspect that objects arise from clipping functions in Photoshop. I wonder if Photoshop isn't just a natural model of a mind: a central workplace with a neutral background where objects can be clipped and contemplated. Then there are lots of tools at the periphery that can be used to manipulate those objects. O.K. Photoshop lacks autonomy and free will but according to the latest philosophical trends the latter doesn't exist anyway.
The history of AI tells us that people have gone mad about logics and the "laws of thought" and considered the mind as subtle rule based engine. The mind is full of rules just like an 18th century automaton but powered by neurons. Even objects in the OO sense mostly reflect the 20th century "linguistic turn": OOD is about stating sentences in natural language and identifying the nouns, verbs and properties. Those become classes, methods and object attributes.
With this approach we never enter the contemplative realm of Photoshop but rather get sucked into Smalltalk.
I've told people many times before that Photoshop is the best programming language in the world.
My academic friends tell me "that's impossible, because of the GUI's directedness".
How disappointing when academics lack the ability to think outside what they read in papers by others.
As somebody who got paid money for awhile to use Photoshop, while I love it, I switched into PL because I found the design tool chain to be brutally redundant, brittle, and write-once.
Photoshop (and Flash authoring, and Illustrator, ...) are tackling tough problems. The last few versions of Photoshop have made enormous leaps in these areas... but there's still a lot of room for improvement. When you say the best PL, I assume you don't mean the underlying scripting languages, but the direct manipulation interfaces. A lot of traditional PL-like ideas are creeping in, but, again, there's room for a lot more.
One word: Sketchpad.
I wish that was my thesis :)
I would love examples of brutally redundant, brittle and write-once portions of the design tool-chain.
And, yes, I am talking about the direct manipulation interfaces, although I think "direct" is a misnomer, since Photoshop enables a lot of indirection.
Some off the top of my head.
1. Once you use Photoshop for awhile, you develop multi-step tricks. What you really want is something like a PBD tool that observes what you're doing and then synthesizes the relevant variables for tweaking later on (imagine mixing PBD with some of Bjorn Hartmann's tunable variable work). For example, I worked with brushes a lot and developed a basic technique for it: the task of creating a brush, tweaking its parameters, and using it is too linear (and in one direction) and long.
2. The tweakability of parameters in general. No effect should ever be final: instead of applying an effect, it should be in the data flow style, where it's just listed as a transform that I can tweak later. Similarly, losing information when you size down and then back up.
3. This is getting a *lot* better, but the disconnect between vectors and pixels, especially when layering effects on top, still gets annoying. They are fundamentally different, but not when you're doing generic operations.
There are more, but three to start with.
The "no effect should ever be final" constraint is a small symptom, not a disease. A competitor to Photoshop, written by one programmer, Pavel Kanzelsberger, has already supported this for about 4 years. The fact that billion-dollar-a-year Adobe can't keep up with one programmer is disappointing, but it shows that Adobe's programming model lacks support for real-time effects. By natively incorporating this as a requirement, Pavel has single-handedly trumped some of the best image processing gurus in the world. The takeaway here is that system design matters.
Currently, there is a workaround in Photoshop, which is to clone the image at a particular stage. This approximates the brittleness you get by doing FP in a CPS style. What you are advocating is the removal of this brittleness, which I agree with. The original Photoshop model hasn't advanced much since PS 5, despite rebranding to CS X.
You also seem to be requesting something more powerful than actions and action sets with your Programming by Demonstration (Yes, THAT PBD?). You want an active macro recorder that analyzes logs of inert macro recordings - and then offers Statistically Improbable Macros (SIMs, making up a TLA here) to the end-user? To be meaningful, you'd also want to directly link back to the instances in time that define each SIM instance. In this way, the end user doesn't have to look at a command language, but rather views their own intuitive design actions through examples they've created. So what you are really saying to me here is that Photoshop should turn its notions of History (self-explanatory) and Bookmarking (saving selections, loading image files into selections) into a very rich Hypermedia system.
Unless I'm putting words in your mouth (giving too much credit?), I agree. My intent here is to flesh out your thoughts on this matter, though.
Adam & Eve doesn't describe hypermedia in a systematic way.
I can comment on more of these if you'd like to have a back-and-forth.
1. I'd give Adobe a bit more credit :) They're doing a lot (e.g., natural brushes may be coming soon!) and the benefits from their studio integration is great.
2. Your analogy to CPS is how I think about it as well :) The history starts to help, but is just a slightly richer form of escape continuation.
3. There was actually an image editor (80s?) that started to incorporate PBD (you got the TLA right :)) -- it's not a new idea in this space :) I'm not sure why you want to know about improbable macros -- I want to know about probable ones (... and wrote a paper draft about a program analysis suggesting the road to exposing such knowledge for arbitrary web apps!), and, from those, expose what the tweakable knobs are. How this maps into history, bookmarking, etc. seems more of a legacy implementation detail than an algorithmic or systems design problem...
But yea.. I think we're on the same page. Once you move into the 3D space, it gets even worse in practice (I don't know how, say, the lighting guys at Pixar stay sane). Thought of some more things I disliked: once you start thinking about moving items around, guides were a weak and flat constraint language. Furthermore, there is a projection problem in how to extract part of one image and put it into another (which remains a PBD/HCI/slicing problem, even if you take a data flow approach).
Another inspirational (linguistic) place to look is tangible functional programming and subtext. There was also some stuff on live and composable pixel shaders, but I never worked in that space so I don't have a good feel for how interesting it was.
My head isn't really focused on this space anymore, so I'm not sure I have a particular direction for this comment :)
Those who don't understand Hypermedia are doomed to reinvent it, poorly. Rich history and bookmarking is a data structures issue. Part of the reason the Web has so many flaws is that it was invented by physicists playing with computers. The HTTP URI scheme is pretty awful for "Web applications". Despite Roy Fielding's best attempts, HTTP still has its flaws, notably its fragment identifier scheme.
To give you an idea of its value, let's say the user wants to publish a brush made via PBD. He/she can also publish how that brush was synthesized and then later on used, because all those details were tracked by history and bookmarking. This would be a Photoshop "Lifestream" in David Gelernter's parlance.
Statistical improbability gives you the highest probability macros for re-use over a long sample period. You're looking for outliers, by definition of searching a sample space for statistically significant behavioral demonstrations by the user. In the background, a log/trace utility captures macro primitives and something else analyzes them.
What image editor used PBD? Link to your rough draft?
Pixar tends to use very boring body animations throughout the movie, and then mixes in very custom body animations for a small set of subsequences. Otherwise, they concentrate on facial expressiveness and the angle of the face to the camera. This is just an observation.
[Edit: Also, I think the CPS analogy isn't flattering. It is a smell of bad I/O subsystem design. I explicitly prefer talking in terms of Landin's J operator for this reason, as it is an uber-goto "jump" operator capable of making arbitrary jumps. The arbitrariness is constrained with semantics that explain why on earth you'd allow such a jump to take place, and what jumping from A to B even means. CPS doesn't actually model this well, as it is primarily a technique for compiler optimization. Using it for programming requires programmers to edit code as a side effect of controlling side effects. That is two levels removed from the problem.]
[Edit: by the way, it is not that I'm failing to give Adobe credit. Rather, I'm simply saying correct system design up front can make a single programmer more productive than a small army.]
Tracking actions (especially generically) and reifying them as intuitive first-class objects (or providing even more fun linguistic support) are both tricky. Furthermore, adding persistence / global naming is a big performance headache, as the continuation server guys have rediscovered (Fielding wasn't shooting from the hip when he went with REST -- the context was inventing Apache at the same time, I believe!). I'm still not sure how to do it well in a rich setting -- Seaside starts to support hierarchical continuation-based components, which is a good second step, but this area still feels like the stone age. We probably agree that the 'ideal' editor interface shouldn't be very far from the ideal web one -- imagine Photoshop with wiki-like features -- but I've found this space to be really, really challenging once you want to build robust / large apps with layers of application semantics. Photoshop might get some simplifying assumptions if you assume only one user, but probably not enough.
I don't remember which editor incorporated PBD unfortunately, though it was manually done (at the framework level) which is passe relative to modern attempts like CoScriptor/MashMaker etc. I think it was described in an early CHI or UIST paper talking about treating PBD as a slicing problem on the history of actions.
I can give you a draft of my paper if you email me -- it's how to do a blackbox (dynamic) control-flow analysis on arbitrary web apps to figure out the different UI states and how they transition. Stuff like k-CFA doesn't work because it's too low-level to be useful for a lot of web needs, and, considering the web is a hodgepodge of JS/DOM code and PHP, it fails in practice anyways (and why I'm skeptical of many papers claiming good analysis results!). The importance is that it's a fundamental analysis at a usable abstraction level for web apps. Stuff like PBD can take advantage of it -- we ended up making a demo to translate natural language commands into sequences of web application actions, and we implicitly learn the action graph that a PBD tool would extract along the way (the original draft focused on aiding PBD, but reviewers found it unimportant!). It's not on my site because I'm struggling to get it published (the joys of grad school?).
Btw, I'm not convinced about the particular example in terms of productivity. Without knowing better, it's like comparing TinyOS development practices to Vista ones; extrapolating is hard. I agree in general, but 1 programmer vs an army is pretty strong, and it's unclear how those ideas in particular compose with a comparable system.
At work, we have 5 programmers supporting 45 (and growing) clients in one of our product divisions. For the concepts we support well, implementation is fast. I'm in charge of our next-gen software, and untying the existing interdependencies in the bad parts is a pain. The thing is, the bad parts are a small percentage of our code base, but a large reason for us not being able to innovate any further. We've reached a glass ceiling that requires massive refactoring. As an academic, you are probably wondering how this happens. Usually, these mistakes occur when the CEO requests something for a sales demo, and then 10 years later you're still stuck with the sales demo kludge. Right now, most of my productivity is sapped trying to remove 10 years of sales demo kludges. We still have a very good architecture, but we want our next architectural leap forward to be 30 years ahead of the rest of the industry.
Larry Ellison, the CEO of Oracle and one of the shrewdest businessmen in IT, has always believed in keeping the core Oracle team to under 50 people.
Moreover, if you look at projects that had a lot of programmers (Windows Presentation Foundation), their architectures have many interlocking interdependencies. WPF, in particular, had no fewer than 350 programmers on it at any given point in time, and as many as 500 at one point.
extrapolating is hard.
But experience working on small teams is not extrapolation. It is strictly a matter of being directly affected by the consequences of your actions and knowing your actions are the root cause of your pain. You can't read a study by CMU's SEI that studies stuff like this. There is no good way to measure whether people realize they're at fault. For this reason, Dr. William Edwards Deming invented The Red Bead Game Experiment to demonstrate that people are poor at assigning fault, especially when the fault occurs as essentially chaotic behavior of the system. In other words, some failures occur due to the design of the system itself. As usual, Joel Spolsky speaks the truth, "At no point in history did a programmer ever not do the right thing, but [the Office file formats are still messed up.]"
Seaside starts to support hierarchical continuation-based components, which is a good second step, but this area still feels like the stone age.
I mostly feel that various practical proposals for continuations are a mistake. At least JBoss Seam supports the continuation model-redux in an OO form using the Workspace-Conversation metaphor.
Strange about WPF. Back at Macromedia, the big projects had less than 1/10th of that many feature engineers (maybe more like up to 10 A-feature developers). The rule of 150 advises against the WPF model (though I'm sure it's compartmentalized, e.g., driver devs are separated from say frontend devs).
However, getting back to the point, I don't get what either of us experimenting with new software systems has to do with the claim that the architectural and linguistic design of the photo manipulation software you linked enabled the individual devs to be as effective as an army of Adobe ones. My first thought, actually, was that this was due to the many perks of dealing with a smaller code base. My second thought was that the productivity could be deceptive: the features being added to Photoshop are more interesting nowadays because the boilerplate/basics are done (outside of fundamental things they didn't think of early on). E.g., if I remember right, there's even an internal scripting language!
I think you said my point well. A small army fighting with stones (still creating the boilerplate API) will seem feeble when faced with one chain gun. However, one chain gun doesn't guarantee an infinite supply of ammo. Also, once the small army upgrades to chain guns, the blue civilization in Age of Empires takes over the single red one and wins.
Adobe's boldest move so far was basically Sean Parent effectively saying, "QT and frameworks like it don't cut it any more."
And, yes, Macromedia is another good example.
BTW WPF wasn't as compartmentalized as it should've been. They got the basic division correct, milcore.dll and PresentationCore.dll, but PresentationCore was designed wrong (IMHO).
Rule of 150 - First time I heard it. Dunbar's Law on wikipedia - very interesting.
I found the 150 rule in Tipping Point (Malcolm Gladwell) -- a lot of useful tidbits for trying to popularize something. I don't practice them (doesn't fit well with my research style) so I can't lend too much weight in how easy the ideas are to apply, but it was a fun read :)
Could you elaborate on this? My only experience is with MS Paint. :)
As an aside, this is the second time in two days I have seen a desktop application cited as an influence on PL design.
If this trend keeps up, I am afraid we will be programming in PowerPoint by this time next year.
In a way, Photoshop's direct manipulation interface is an implementation of Landin's J operator (the predecessor to modern continuations). Things such as selections, actions and action sets, history and layers all provide pretty much arbitrary indirection.
It gets artists to become programmers. This is pretty much the "grocery clerk as DSL programmer" pipe dream some computer scientists had in the 60s through 80s.
Also, that thesis sounds very good. I've had the author's viewpoint for years, and it is exciting to see an academic flesh it out. Should be a great read, and I will look forward to giving the author any feedback I can. Spreadsheets are very much analogous to how I think of programs today.
Edit: by the way, I'm a huge visual languages junkie, so informally designed visual programming languages like Photoshop are intriguing because of the fact they were organically created to solve visualization and artist workflow problems, not programmer control flow problems.
Excel and visual languages like it suffer from the same design flaw: unconstrained modelling. Back in the '80s, Maureen Thomes wrote a book describing a bullet-proof way of constructing spreadsheets called the "staircase layout", but it defeated the point behind laying out spreadsheets in a way that was naturally consumable for not only analysis but also reporting.
photoshop is constrained to pixels, basically. all the tools and abstractions are based on that.
how do you envision a general purpose language along those lines, that isn't about graphics?
By the way, those of you who agree that "Photoshop is constrained by pixels" would be mistaken.
pixels : Photoshop :: telescopes : astronomy
A pixel is just a picture element - a data structure of some kind used for describing an element in a set.
Pixels are not a defect.
constrained By vs. To pixels - the former to me implies an upper limit, the latter is more about foundational atoms.
You say Photoshop is great which is intriguing, but I haven't understood from any of your notes yet precisely how it is so great, and I'd like to understand.
What level of expertise do you have with Photoshop?
I am very big on providing people with specific examples of things, so I'd be happy to elucidate. However, knowing my target audience proves crucial. In my experience, most programmers have never even used this program and even more think it is just some fabulously expensive toy for designers creating corporate logos.
If you've never used Photoshop before, I'd recommend the web comic animated series You Suck At Photoshop.
As an aside, I once gave somebody a really good example of how not to create a DSL by citing Tivoli Storage Manager's configuration language, and their excuse for not understanding my example is that "I've never used the program, so I am not sure how big of a deal this design flaw you point out really is".
i'm not a graphic designer, but have done basic photo editing and art creation in it. and have used mac paint style programs since the original mac. so i basically know about photoshop's abilities wrt pixels, brushes, filters, layers, plug-ins.
Why are objects that we use in programming so vastly different from real-world objects?
Because they're not real-world objects. That's the mistake. I am personally very dismissive of the idea that objects in OOP should aim to simulate real-world objects. To me, object-oriented program design is about managing complexity and making libraries more user-friendly.
An object which does correspond to a real-world concept is better understood as a representation of the aspects of that concept that are relevant to the application. Sometimes you invent objects which only make sense in the world of computation - and that's OK! There's no such thing as a StringBuffer or Socket or Event in the physical world, but you can't deny they're useful classes.
Don't fall into the trap of philosophising; remember that you're just writing code.
I agree that you don't need domain modeling for most applications.
What? Really? Are people being paid to rewrite "grep" or frob some trivial report generator?
Hell, even in my days slumming in Web development, complex domain modeling was necessary. Just boring companies, organizations, projects, discrete vs. multi-year projects, funding sources, user accounts, group accounts, credit pulls and merges, complex loan products, lending criteria, property types and on and on and on....
Not just some crap one types right into C or Python or Java or any other language. I had one accounting system firm at a non-profit simply back out of their contract when confronted by the complexity of the domain model.
Where does this idea of "domain model not necessary" come from? [And, no, I am neither some modeling nor OO fanatic by any stretch of the imagination.]
- S.
I overstated that. Domain modeling is involved in fusing data to make useful decisions or predictions. For example, to flag alarming tax reports, we do need a decision function for what constitutes 'alarming'. So most applications do include some implicit forms of domain modeling. What I mean to say is that most applications do not need to include domain models.
Where does this idea come from? Well, you don't need to model a printer in order to send postscript at it. You just need a pipe. You don't need to model a keyboard to receive input from one. The domain model is whatever lives in your client's head - the printers, keyboards, monitors, robots, motion, traffic, traffic lights, mountains, mole-hills - possibly shoehorned into some sort of relational schema.
Applications don't need those models, not unless they're going to be performing rich searches to make 'creative' decisions. Now, I wouldn't reject a paradigm that leveraged rich domain models to support planning and creative decisions - and I've made a few stabs at generative grammars for that sort of purpose. But that's far beyond state-of-the-art. At the moment, most applications don't need domain models, and would not be able to leverage them.
Thus, an OO class whose type names a domain element is almost certainly a mistake: even if your application is among the exceptions that does need or effectively benefit from a domain model, you could do much better than OO.
Since GUI programming seems one of the best fits for OO methodology, could you tell me how you would program a windowed GUI -- and provide an API for it -- without naming a window, a button, a font...?
A model of GUI elements is not usually considered a domain model. We don't often fill databases full of information about windows and buttons, for example, and the sort of 'decisions' and 'predictions' involved aren't directly related to any client requirements. There is a great deal of 'accidental complexity' involved. I'll grant that it's quite borderline, though.
But to answer your question, there are plenty of ways to provide GUI without a model of windows, buttons, fonts. Naked objects was already mentioned here. Tangible values, developed by Conal Elliott, would be a choice in the same vein as naked objects, but for functional values. HTML largely declares a GUI in terms of its content (though that still requires a transform between models, i.e. to turn databases into text). Some technologies simply turn the code into a GUI via graphical programming environments (Smalltalk, LabView, Max, Croquet). A console serves as a simple UI that doesn't require naming any UI elements. (I'm not endorsing them just by listing them.)
GUI programming is not a good fit for OO methodology (but I'll agree that it's one of the best fits >;^). With OO, you will end up reinventing a reactive programming model (badly), and a concurrency model. You'll face challenges dealing with a mix of synchronous and asynchronous IO for both the user and for keeping the display up-to-date. You'll deal with corrupted state and glitches, where state of objects diverges from the model it is intended to represent. You cannot easily observe a change in the GUI based on tweaking the code underlying it, so debugging and testing is expensive. You get no help from the paradigm with accessibility, multi-language, and various other cross-cutting domain concerns.
i only sort of follow/believe what you are saying, as i read it. and i do agree that oo guis suck wrt concurrency and 'oh now i need the nail friction here' issues.
yet things like naked objects or tangible values have to go through some final actual gui/rendering system like x-windows or whatever, which do in fact (for good or for ill) have "window" concepts. so i'm not sure how that helps your argument if the things you say are different end up using the not-different foundation.
i don't see how html supports your "w/out naming windows, buttons, fonts" claim when it explicitly does mention buttons, checkboxes, menus, fonts, colors, iframes (windows), ...
so the light-over-my-head-ness of what you say is only at like 43%, i'm hoping to grok more.
What is displayed can be content-driven, which frees developers from concerning themselves with positioning windows, managing layouts, and generally modeling the display elements. Inputs can also be 'content-driven'... e.g. a boolean input becomes a checkbox, and a string input becomes a textbox.
HTML I mentioned as 'largely' content driven. We could go much further in that direction than did HTML. HTML is a markup language, which (by definition of 'markup') mixes content and presentation. I mentioned it because it serves as a familiar example. HTML itself eventually moved more towards the GUI modeling direction by adding Cookies, scripting, and DOM. As an example, you look at an 'iframe' and see 'window', but a slight change in direction and we might have been looking at 'iframe' and seeing 'content transclusion' - i.e. a simple declaration that some external content should be included in this content.
And we don't actually need an underlying gui/rendering system 'like x-windows'. As a potential alternative, we could have underlying gui/rendering systems for 'application' objects be more 'like Seadragon', or more along the lines of object browsers and table browsers. The graphical programming environments mentioned above certainly qualify, some to lesser degrees than others.
You are correct that, at some stage, we do need a translation or templating system that will put pixels on the screen at the right place and right time. A windowing concept might be involved, but even if not we'll at least be using the domain concepts of time, colors, positions, perhaps even of monitors and GPUs, and some sort of integration with the user's input devices (mouse, keyboard, joystick, webcam, microphone).
And the real question is how much of this needs to be modeled by our applications. There is some difference between declaring a button (or declaring a 'unit event' input suitable for buttons) and describing a model of buttons (buttons go up and down, etc.).
I guess you've never programmed in Smalltalk!
There is no such thing as a Window in Smalltalk, at least not as GUI toolkits like Progress OpenEDGE ABL, Microsoft WinForms, Microsoft WPF, Microsoft MFC, Sun Swing, Sun AWT, IBM SWT, Borland whatever-they-called-it, etc. refer to one.
This is amusing, of course, because Smalltalk is considered by most OO people to be one of the best OO languages so far, despite also being one of the first. And yet the design of its GUI toolkit shares almost nothing in common with any of the industry toolkits mentioned above.
i'm aware of MVCish and Morphicish stuff in the history of Smalltalks. looking at the GUIDevGuide.pdf of VisualWorks, Chapter 3 shows that there are, in fact, window objects. ?!
thanks for any pointers.
Smalltalk-80 does not have an ApplicationWindow class, like VisualWorks Smalltalk does. I would have to consult my copies of Ted Kaehler and Dave Patterson's Smalltalk-80 book A Taste of Smalltalk and Glenn Krasner's Smalltalk-80: Bits of History, Words of Advice book to tell you exactly what the class designs were for the Smalltalk-80 GUI subsystem.
Smalltalk-72 used a turtle class as the primary line drawing mechanism, and a window class for general window management.
But this was gradually de-emphasized ever since the introduction of the Model-View-Controller application organization (1978). You don't need a Window. That's the point of MVC, and its follow-up variations (well, aside from some strange variations, which demonstrate a complete non-understanding of what the M, V, and C stand for responsibility-wise). The point is to not create monolithic Window objects, but to further decompose the responsibilities, so that there is no God object governing a bunch of non-related stuff, like your application model, i/o, and presentation processor.
It seems that VisualWorks didn't have this metaphor and clearer division of responsibility. Even the name ApplicationWindow sounds wrong to me.
Some of the GUI toolkits I mention only have these monolithic Window classes to pacify the linker, loader and underlying event system of the operating system... For example, you need to know the history of Windows Graphics to understand why WPF is the way it is. Although WPF is largely DirectX now, there is still the Win32 Message Pump for backward compatibility with GDI/GDI+ applications, and this leaky message pump abstraction is exposed in some places (WPF's own event model, so-called routed events, is completely stupid, but that is another matter). More basically, the Application class, as I understand it, is hardlinked to how the operating system works, including basic stuff like removing unnecessary privileges from the process token.
On a tangent, windows are very much deprecated for web, phone, tablet, and slate UIs; they just don't work that well. In fact, we are seeing focused window-free designs propagate back to conventional PC desktops -- it's a very exciting time in UI as we are finally getting beyond Xerox.
I agree it usually doesn't make sense to include a keyboard object - how does one implement a keyboard in software? But it can make sense to model the state of such external entities and use them as phantom types in monadic keyboard interaction functions. Do you count that as domain modeling?
I wouldn't qualify a buffer of keyboard state as domain modeling because it isn't part of how the client understands the domain or describes requirements.
I would grant that the relationship between key events and system behaviors is a form of domain modeling, though. (i.e. "If I press Ctrl+Alt+z, do xyzzy.") But it wouldn't really be a domain 'model'.
For full disclosure, I should note that there is one place 'domain modeling' is pretty commonly useful: mock objects for unit testing. One would reasonably model a keyboard if that is what is necessary to support reproducible tests of key event sequences.
Well, this is a thorny issue. There is a lot that goes into properly handling key presses across all countries (I recently was baffled when Stefan Hanenberg let me type on his German laptop, and he had to help me figure it out) [edit: and even operating systems, if you want true Write Once, Run Anywhere virtual machine technology]. The domain model here simply must be able to take these variables as a context, and map a set of standard keypresses out to a keyboard model. In addition, at a higher level, a key event manager could handle key chords, since the keyboard itself should have no knowledge of the internal timing of key presses. Just imagine, from an automata perspective, the explosion in the model if you tried doing that!
you mean like on a smartphone?
Unless you're specifically dealing with a keyboard as part of your problem domain, say, e.g., you're developing one with new functionality, a keyboard normally never is part of your domain model, and most OOAD development methods will encourage that one doesn't model input devices or GUIs as part of the domain model.
A keyboard might be part of the design model, i.e., if you want to elaborate in an embedded system how the components to be implemented work together. But it's a grey area really, since design decisions are up to the architect.
But all in all, it would be very weird if a keyboard ever makes it into the domain or design model.
[ Ah heck, this post ended up somewhere random. ]
Not surprisingly, people's posts in this thread are all over the place in terms of adding to the discussion. People have understood your question differently, or been intrigued by different aspects of it, so the variation is only to be expected. The question is simply too vague and people's experiences too varied for the discussion to coalesce around a few concrete points where everyone agrees.
Programs are pretty much the same in the sense that everyone tends to bring a unique perspective to the problem. So if you define objects as being intuitive to mean there is a clear consensus about which objects are needed and how they are related --like you believe there is for real-world objects-- then you need way more than just objects. You need a precise problem definition and clear criteria to judge between competing solutions; otherwise, there is simply no reason to expect the kind of convergence you desire. To me, placing the blame on objects for not being "intuitive" is simply barking up the wrong tree for the most part.
appropriate file format exportation for editing images in Corel Draw X5
hi,
What is the appropriate file format to export a graphic generated by a
Mathematica expression, in order to edit it in Corel Draw X5?
What I want to edit is a graphic of a 3D object, a torus, and I want to
import it as a vector graphic in Corel Draw.
The first thing I tried was the .pdf format, but the file appeared to be
corrupted in Corel Draw, while Adobe Reader opens it normally!
I also tried EPS and SVG, and those didn't work out either!
Iakovos | http://forums.wolfram.com/mathgroup/archive/2011/Jan/msg00713.html | CC-MAIN-2015-48 | refinedweb | 102 | 67.89 |
Gettext
From OLPC
gettext is the GNU internationalization (i18n) library. It is commonly used for writing multilingual programs. The latest version is 0.17.

In Python, code such as

print 'Hello World!'

would become

print _('Hello World!')
To load the gettext function and alias it to _, include this code:
from gettext import gettext as _
Now you're set for using gettext in your project. Simply wrap output strings: 'Output' becomes _('Output'). Keep in mind that not only strings can require localization, but also
- numbers,
- time formats,
- currencies,
- time zones,
- names and titles,
- ...
Example Application
Let's make up an example (test.py) for translating names and titles:
from gettext import gettext as _

title = _('Mr.')
lastname = 'Hager'
firstname = 'Chris'

name = _('%(title)s %(lastname)s %(firstname)s') % {'title': title, 'lastname': lastname, 'firstname': firstname}
print name
It's possible to leave a comment directed to the translator like this:
# TRANSLATORS: Please just rearrange the 3 '%(...)s' parts as required.
name = _('%(title)s %(lastname)s %(firstname)s') % {'title': title, 'lastname': lastname, 'firstname': firstname}
Note: When './setup.py genpot' is used in a sugar environment to generate the PO template file, it specifies 'TRANS:' rather than 'TRANSLATORS:' as the marker for comments to translators. So, if the software being internationalized is a python sugar activity, comments directed to the translators should be marked with 'TRANS:' rather than 'TRANSLATORS:'.
Building the template file
Now we use xgettext to build a .po template file from the source code. This will be used by translators to derive local .po files.
xgettext --add-comments=TRANSLATORS: test.py
Our newly created template file (e.g. messages.po) looks like this:
#: test.py:3
msgid "Mr."
msgstr ""

# TRANSLATORS: Please just rearrange the 3 '%(...)s' parts as required.
#: test.py:7
#, python-format
msgid "%(title)s %(lastname)s %(firstname)s"
msgstr ""
Distribute it and people can start translating.
Translating
We can derive a local .po file from the template using the msginit program. For a german translation we'd do this:
msginit --locale=de --input=messages.po
This will create a file named 'de.po'. The translator needs to edit it either by hand or with tools such as poEdit. When they are done, it could look like this:
#: test.py:4
msgid "Mr."
msgstr "Hr."

#: test.py:9
#, python-format
msgid "%(title)s %(lastname)s %(firstname)s"
msgstr "%(title)s %(firstname)s %(lastname)s"
Finally, the .po files are compiled into a binary .mo file with msgfmt.
msgfmt de.po
These are now ready for distribution with the software package.
Running
On Unix-type systems, the user sets the environment variable LC_MESSAGES, and the program will display strings in the selected language, if there is an .mo file for it. | http://wiki.laptop.org/go/Gettext | CC-MAIN-2016-26 | refinedweb | 457 | 60.51 |
Recently, I’ve been working a bunch with Grinder to do some load testing. I’ve had great success with it in the past, and wanted to punish an app. My test needs to make HTTP and HTTPS requests which I never anticipated would be a problem. Unfortunately, my server has a self-signed certificate which the Java processes refused to recognize. I tried adding the certificates through the Java Console but that led to these weird no peer exceptions (shudder).
I figured that I would have to use keytool to load the certificates. What I did not realize is that keytool cannot directly import an existing private key together with its self-signed certificate! So I had to follow these steps:
- Convert my certs from PEM format into DER format
> openssl pkcs8 -topk8 -nocrypt -in server.key \
-inform PEM -out key.der -outform DER
> openssl x509 -in server.csr -inform PEM \
-out cert.der -outform DER
- Use the Java code from this AgentBob post to create a keystore
> java -Dkeystore=mycerts ImportKey key.der cert.der
- Run the Grinder TCPProxy using the new keystore
> java -Djavax.net.debug=all -classpath $GRINDER_JAR \
net.grinder.TCPProxy -console -http \
-keystore mycerts -keyStorePassword importkey
- To use the certs from my Agent process
from java.lang import System
grinder.SSLControl.setKeyStoreFile(System.getProperty("keystore"),System.getProperty("keypass"))
- Finally, I needed to add this to the properties file for my Grinder agent process
grinder.jvm.arguments=-Dkeystore=mycerts -Dkeypass=importkey
And tada it works! One other tidbit: using -Djavax.net.debug=ssl was invaluable in debugging. You can use -Djavax.net.debug=help to find out all of the debug options.
One thought on “Getting Grinder To Work with a Self-Signed Certificate”
Hey Rob, have you tried an SSL proxy? Basically, speak http to a proxy and have it handle the SSL negotiation for you. | http://www.innovationontherun.com/getting-grinder-to-work-with-a-self-signed-certificate/ | CC-MAIN-2019-04 | refinedweb | 306 | 60.11 |
Distributed Data Storage on a LAN?
Cliff posted more than 10 years ago | from the redundancy-gooood dept.
NBD Does this (5, Insightful)
backtick (2376) | more than 10 years ago | (#7341293)
Re:NBD Does this - NBD server for windows (5, Informative)
flok (24996) | more than 10 years ago | (#7341379) [vanheusden.com]
This version enables you to also export partitions/disks.
IS there any technologieS?: +1, Patriotic (-1, Troll)
Anonymous Coward | more than 10 years ago | (#7341408)
At least you are following the path boldly forged
by our fearful LOSER - G. W. Bush [whitehouse.org]
Try a search at Google [google.com]
Cheers,
Kilgore
OT: Re:IS there any technologieS?: +1, Patriotic (-1, Offtopic)
Anonymous Coward | more than 10 years ago | (#7341458)
same way
whitehouse.com != whitehouse.gov
and
whitehouse.org != whitehouse.gov
Re:NBD Does this (1)
Matrix272 (581458) | more than 10 years ago | (#7341501)
Would NBD be able to fill all those needs? I'd like a RAID5 setup over all the computers, although maybe even some other type of RAID, like RAID5 with 5 extra disks, just in case someone powers one down... Would that work? Ideally, I'd like to make a cluster of the workstations, but also have a console for each of them... but I haven't had a lot of time to research it lately, so I don't know what's available out there. Does anyone think NBD would be a viable solution for me?
Re:NBD Does this (5, Informative)
dbarclay10 (70443) | more than 10 years ago | (.
yes (0)
Anonymous Coward | more than 10 years ago | (#7341573)
yes (1)
Triumph The Insult C (586706) | more than 10 years ago | (#7341317)
rsync Re:yes (1)
cprice (143407) | more than 10 years ago | (#7341345)
Re:rsync Re:yes (1)
macemoneta (154740) | more than 10 years ago | (#7341405)
Re:yes (1)
Triumph The Insult C (586706) | more than 10 years ago | (#7341360)
we use afs (pre-openafs, tho i'm sure openafs will work just find) on top of nbd (link escapes me right now). works pretty well.
MSI OSS (-1, Offtopic)
Anonymous Coward | more than 10 years ago | (#7341323)
Next time a friggin patch comes out (which is probably next week, thanks "a patchy" web server!), I will have to go through this rigamorole over and over again. OSS software sucks ass.
Re:MSI OSS (0)
Anonymous Coward | more than 10 years ago | (#7341354)
Sorry, OSS ruined my subject field too. (0)
Anonymous Coward | more than 10 years ago | (#7341414)
pirst fost! (-1, Offtopic)
Anonymous Coward | more than 10 years ago | (#7341324)
Okay, that's it (-1, Offtopic)
Anonymous Coward | more than 10 years ago | (#7341330)
do tell (0)
Anonymous Coward | more than 10 years ago | (#7341377)
aw geeze. (0, Offtopic)
nbvb (32836) | more than 10 years ago | (#7341342)
If you don't understand why, just put your Packard Bell back in the box and ship it back.
Tell them you're too stupid to own a computer.
Re:aw geeze. (1)
JohnnyKlunk (568221) | more than 10 years ago | (#7341423)
it's different to having a 6 week offsite tape rotation strategy, but does protect you against a disk failure, which is what the original post wanted.
I backup my servers as work, I also raid them. To me, doing both makes perfect sense.
Re:aw geeze. (1)
wallywam1 (715057) | more than 10 years ago | (#7341426)
Re:aw geeze. (-1, Troll)
Anonymous Coward | more than 10 years ago | (#7341432)
The poster clearly states he is trying to protect against a disk crash.
In fact *jackass* - reread his post. It does NOT say he is looking for backups (he says he does that separately), but simply a mechanism for redundant storage of data - network raid.
Fucking jackass illiterate fool.
Re:aw geeze. (0)
Anonymous Coward | more than 10 years ago | (#7341434)
-AC
Win2k (4, Informative)
SuiteSisterMary (123932) | more than 10 years ago | (#7341344)
Re:Win2k (1)
... James ... (33917) | more than 10 years ago | (#7341421)
For example, say you have a DFS root of \\domain\dfs, with multiple children, like \\domain\dfs\mp3 and \\domain\dfs\games. mp3 and games can be shares on two different servers, but they're accessible via the same virtual \\domain\dfs share.
It's useful nonetheless.
Re:Win2k (2)
SuiteSisterMary (123932) | more than 10 years ago | (#7341513) (1)
RedX (71326) | more than 10 years ago | (#7341581)
The distributed feature would be quite worthless if there wasn't some synchronization taking place to make sure the data was synched across all servers in the DFS namespace.
Re:Win2k (0)
Anonymous Coward | more than 10 years ago | (#7341565)
Re:Win2k (1)
Havokmon (89874) | more than 10 years ago | (#7341480)
Comment (1)
TerminatorT100 (720110) | more than 10 years ago | (#7341346)
NBD for Windows (1, Redundant)
backtick (2376) | more than 10 years ago | (#7341507)
(I haven't used this, but it exists)
So... (1)
Pingular (670773) | more than 10 years ago | (#7341352)
Kind of like a Beowulf of hard-discs then?
rdist would work... (4, Informative)
ZenShadow (101870) | more than 10 years ago | (#7341353) (4, Interesting)
backtick (2376) | more than 10 years ago | (#7341546)
Standard Linux kernel maybe? (2)
buzzbomb (46085) | more than 10 years ago | (#7341368)
Anyone tried this?
Re:Standard Linux kernel maybe? (3, Informative)
backtick (2376) | more than 10 years ago | (#7341463)
If you're curious about using the enhanced NBD w/ failover and HA, you can read about it at:
Re:Standard Linux kernel maybe? (1)
buzzbomb (46085) | more than 10 years ago | (#7341531)
Ok. But does it work under Windows? That was one of the requirements.
Gah (0)
Anonymous Coward | more than 10 years ago | (#7341372)
InterMezzo (1, Informative)
Anonymous Coward | more than 10 years ago | (#7341373)
Sounds like Coda or InterMezzo [inter-mezzo.org] would fit the bill, but they won't address non-linux systems directly. You'd have to export the InterMezzo file systems with Samba and mount them on the MS Win boxes.
AFS (4, Informative)
Reeses (5069) | more than 10 years ago | (#7341374)
There's another alternative with a different name, but I forget what it's called.
Re:AFS (1)
Reeses (5069) | more than 10 years ago | (#7341443)
Coda:
and InterMezzo:
and there's a review here:
Although, honestly, a 5 second search on google for "distributed filesystem" would have turned this up.
Ah, well.
Re:AFS (1)
wetshoe (683261) | more than 10 years ago | (#7341456)
AFS is actually pretty cool. You can run a file server that uses all this disk space of all the client machines. It's a great idea now, especially since most new machines come with 40GB hard drives, and most people don't use anything more then 5GB.
AFS is a wonderful solution to not only this problem that the poster is talking about, but it can be used in so many other interesting ways.
Re:AFS (1)
kaybi (261428) | more than 10 years ago | (#7341462) [openafs.org]
Why? (2, Funny)
Anonymous Coward | more than 10 years ago | (#7341382) friggin computers.
We realize you think you are cool because you have a few KVMs, a couple of Linksys routers, and a bunch of old PIIs running Lunix with one Windows machine, but come on, man. Stop spanking yourself over your elite NAT-ed network and just get one computer with hardware RAID. Instal Cygwin if you feel the need to type configure && make && make install a whole bunch of times and watch teh pretty text lines scroll.
Re:Why? (0)
gatkinso (15975) | more than 10 years ago | (#7341464)
Most common form of data loss? (5, Insightful)
Anonymous Coward | more than 10 years ago | (#7341385)? (1)
JohnFluxx (413620) | more than 10 years ago | (#7341452)
Re:Most common form of data loss? (1)
Xerithane (13482) | more than 10 years ago | (#7341569)? (4, Insightful)
Blackknight (25168) | more than 10 years ago | (#7341598)
Intermezzo (5, Informative)
mikeee (137160) | more than 10 years ago | (#7341389)
It isn't particularly high-performance, from what I know, and may be more complexity than you need.
Network RAID (1, Interesting)
Anonymous Coward | more than 10 years ago | (#7341394)
Bandwidth (3, Insightful)
omega9 (138280) | more than 10 years ago | (#7341396)
Re:Bandwidth (1)
SirJaxalot (715418) | more than 10 years ago | (#7341511)
RAID on Files (3, Insightful)
Great_Geek (237841) | more than 10 years ago | (#7341402)
This would be really useful for SOHO type places to allow me to have a hot offsite backup at multiple friends (and vise versa).
Re:RAID on Files (1)
ZenShadow (101870) | more than 10 years ago | (#7341478)
--ZS
DIBS? (1)
kulpinator (629554) | more than 10 years ago | (#7341403)
Backing up all within your house (4, Insightful)
Alain Williams (2972) | more than 10 years ago | (#7341404)
8 copies of the same document all nicely toasted!
Re:Backing up all within your house (2, Funny)
feepness (543479) | more than 10 years ago | (#7341530)
Come on, this'll never happen. I live in San Diego!
Re:Backing up all within your house (1)
peragrin (659227) | more than 10 years ago | (#7341595)
Your chances are even better if you separate the machines throughout the house.
Re:Backing up all within your house (1)
BigDumbAnimal (532071) | more than 10 years ago | (#7341599)
Loose Hard Drive? (2, Funny)
Anonymous Coward | more than 10 years ago | (#7341409)
Speed would be an issue... (4, Informative)
Trolling4Dollars (627073) | more than 10 years ago | (#7341412)... (0)
Anonymous Coward | more than 10 years ago | (#7341563)
Coda (3, Redundant)
fmlug.org (695374) | more than 10 years ago | (#7341416)
Distributed Network Block Device (2, Informative)
JumboMessiah (316083) | more than 10 years ago | (#7341418)
data loss (1)
_fuzz_ (111591) | more than 10 years ago | (#7341420)
In my experience, the most common form of data loss is not hardware failure, but user error. RAID is great for protecting against hardware failure, but be sure to still make backups to prevent against accidental deletion.
...existing technologies that will let me do this? (-1, Flamebait)
Anonymous Coward | more than 10 years ago | (#7341430)
Next question?
Two words (sort of) (0)
Anonymous Coward | more than 10 years ago | (#7341433)
What you are asking for sounds pretty damn complicated. My home has about 10 machines in it, and I just use Samba on two mirrored disks for network storage.
Hey, but it's a free world. Feel free to ratchet up the technology till you bleed....
Try Rsync or DRBD (4, Informative)
oscarm (184497) | more than 10 years ago | (#7341436) (3, Informative)
DrSkwid (118965) | more than 10 years ago | (#7341438)..
Expensive but reliable solution (2, Interesting)
onyxruby (118189) | more than 10 years ago | (#7341459) (2, Informative)
blaze-x (304666) | more than 10 years ago | (#7341461)? (1)
SuperBug (200913) | more than 10 years ago | (#7341465)
Check this out... (1)
BubbaTheBarbarian (316027) | more than 10 years ago | (#7341485)
Not as a solution in and of itself, but it is a good idea considering that you more then likely have a box to burn...also try to grab some old PolyServe software. It will do that samething over a network, though not without resource loss.
WAR TUX!
Not really a good idea (1)
c77m (690488) | more than 10 years ago | (#7341491)
What about data integrity when the network fails? Or when a single host fails? You could create ACLs for hosts that would be responsible for certain data upon certain failures, but then you're adding to an already overwhelming management nightmare.
Why not consider a shared storage system? You're not realistically going to have a failproof plan in your home, so just narrow it down to a few things. External JBOD with software RAID, presented as NAS to the rest of your computers. If a drive fails, just replace it. If the NAS head fails, just hook up the JBOD to another host.
Re:Not really a good idea (1)
Indianwells (661008) | more than 10 years ago | (#7341528)
Typo! (0)
Anonymous Coward | more than 10 years ago | (#7341495)
"Is there any existing technologies that will let me do this?"
--Should read "are..."
Lustre (0)
Anonymous Coward | more than 10 years ago | (#7341499)
Rsync and Ssh (4, Informative)
PureFiction (10256) | more than 10 years ago | (#7341521).
The holy grail (1)
mcrbids (148650) | more than 10 years ago | (#7341535))
Unison? (1, Informative)
Anonymous Coward | more than 10 years ago | (#7341539)
They say: ."
Careful... (0)
Anonymous Coward | more than 10 years ago | (#7341540)
dedicated vs. network (0)
Anonymous Coward | more than 10 years ago | (#7341545)
You aren't gonna get a real RAID. (5, Insightful)
PurpleFloyd (149812) | more than 10 years ago | (#7341547). (1)
Lester67 (218549) | more than 10 years ago | (#7341570)
You probably don't want to do this. (3, Insightful)
NerveGas (168686) | more than 10 years ago | (#7341552)
nbd + evms2 = your best bet, but you'll lose (0)
Anonymous Coward | more than 10 years ago | (#7341553)
Be forwarned: This will be slower than snot on a cold Sunday.
The fastest and maybe even the cheapest setup to do this with would be to have a bunch of NAS drives on their own switch, with the host machine attached to the same switch. Host has multiple NICs, all channel bonded to this switch, and then has another NIC to the outside network. This would give you a big setup.. but again, SLOW! Your looking at 5MB/s tops with overhead.. IDE does this stuff all the time at up to 40MB/s+. SCSI and Fibrechannel, even faster.
Good luck!
I can't believe... (2, Interesting)
wcdw (179126) | more than 10 years ago | (#7341560)
while it's a cool idea (1)
flaming-opus (8186) | more than 10 years ago | (#7341562)
Furthermore, every time one of the computers is powered off the system will wait for that machine to come back, or will treat it like a dead disk. Even with high performance raid devices, degraded mode is mighty slow. Then when the device comes back you will have to rebuild the raid. A long/slow/agonizing process even with fast hardware.
I think rsync in a cron tab is a much better idea.
Coda File System (0)
Anonymous Coward | more than 10 years ago | (#7341582)
Why is Coda promising and potentially very important?
Availability (1)
raphae1 (695666) | more than 10 years ago | (#7341583)
I suppose it could work well in a server room, but if your home setup is anything like mine - open cases and cat5 crisscrossing the house - or you have a screwdriver on your desk, you might experience a lot of downtime...
My wife would have me by the curlies.
yes, I'm a soldering iron wielding programmer
Umm, but what about? (1)
mschuyler (197441) | more than 10 years ago | (#7341593)
The point is: 8 computers in the house won't help diddly in a real disaster. That's a lot of work just to see it burn up. (I know it will never happen to you; it was 2,000 other houses that burned to the foundation.
And further, I've had two RAID systems go TU in the last few years. For me RAID doesn't cut it at all. Distributed File System works pretty cool--but so does a fire safe. | http://beta.slashdot.org/story/40073 | CC-MAIN-2014-15 | refinedweb | 2,647 | 77.87 |
- Author:
- Leonidas
- Posted:
- June 3, 2007
- Language:
- Python
- Version:
- .96
- db mixin database plugin orm
- Score:
- 6 (after 6 ratings).
Usage is (due to use of meta-classes) quite simple. It is recommended to save this snippet into a separate file called positional.py. To use it, you only have to import PositionalSortMixIn from the positional module and inherit from it in your own custom model (but before you inherit from models.Model; the order counts).

Usage example: Add this to your models.py:

from positional import PositionalSortMixIn

class MyModel(PositionalSortMixIn, models.Model):
    name = models.CharField(maxlength=200, unique=True)
Now you need to create the database tables. PositionalSortMixIn will automatically add a position field to your model. In your views you can use it simply with MyModel.objects.all().order_by('position') and you get the objects sorted by their position. Of course you can move the objects down and up by using move_up(), move_down() etc.
In case you feel you have seen this code somewhere - right, this snippet is a modified version of snippet #245 which I made earlier. It is basically the same code but uses another approach to display the data in an ordered way. Instead of overriding the Manager it adds the position field to Meta.ordering. Of course, all of this is done automatically; you only need to use YourItem.objects.all() to get the items in an ordered way.
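For illustration, the swap that move_up() performs on the auto-added position field can be sketched in plain Python (no Django required — Item here is a hypothetical stand-in for a model instance; with real models both rows would be save()d after the swap):

```python
class Item:
    """Stand-in for a model instance with the auto-added `position` field."""
    def __init__(self, name, position):
        self.name, self.position = name, position

def move_up(items, item):
    # Swap positions with the item immediately before this one.
    ordered = sorted(items, key=lambda i: i.position)
    idx = ordered.index(item)
    if idx > 0:
        prev = ordered[idx - 1]
        item.position, prev.position = prev.position, item.position
        # With Django models, both rows would be .save()d here.

items = [Item('a', 0), Item('b', 1), Item('c', 2)]
move_up(items, items[2])
print([i.name for i in sorted(items, key=lambda i: i.position)])  # ['a', 'c', 'b']
```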
Update: Now you can call your custom managers
object as long as the default manager (the one that is defined first) still returns all objects. This Mix-in absolutely needs to be able to access all elements saved.
In case you find any errors just write a comment; updated versions are published here from time to time as new bugs are found and fixed.
Consecutive Primes Solution Google Kickstart 2021
Problem
Ada has bought a secret present for her friend John. In order to open the present, Ada wants John to crack a secret code. She decides to give him a hint to make things simple for him. She tells him that the secret code is a number that can be formed by taking the product of two consecutive prime numbers, such that it is the largest such number that is smaller than or equal to Z. Given the value of Z, help John to determine the secret code.
Formally, let the ordered prime numbers 2, 3, 5, 7, 11, … be denoted by p_1, p_2, p_3, p_4, p_5, … and so on. Consider R_i to be the product of two consecutive primes p_i and p_{i+1}. The secret code is the largest R_j such that R_j ≤ Z.
Input
The first line of the input gives the number of test cases, T. T lines follow.
Each line contains a single integer Z, representing the number provided by Ada as part of the hint.
Output
For each test case, output one line containing
Case #xx: yy, where xx is the test case number (starting from 1) and yy is the secret code – the largest number less than or equal to ZZ that is the product of two consecutive prime numbers.
Limits
Time limit: 15 seconds.
Memory limit: 1 GB.
1 ≤ T ≤ 100.
Test Set 1
6 ≤ Z ≤ 2021.
Test Set 2
6 ≤ Z ≤ 10^9.
Test Set 3
6 ≤ Z ≤ 10^18.
Sample
Sample Input
2
2021
2020
Sample Output
Case #1: 2021
Case #2: 1763
For Sample Case #1, the secret code is 2021 because it is exactly the product of the consecutive primes 43 and 47.
For Sample Case #2, the secret code is 1763 because the product of 41 and 43 is 1763, which is smaller than 2020, but the product of 43 and 47 exceeds the given value of 2020.
Also Read:
- Truck Delivery Solution Google Kickstart 2021
- Longest Progression Solution Google Kickstart 2021
- Increasing Substring Solution Google Kickstart 2021
Solution:
#include <bits/stdc++.h>
using namespace std;

#define ll long long
#define ff first
#define ss second

mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());

const int N = 1000005;
const ll MOD = 1000000007;
const ll INF = 0x3f3f3f3f3f3f3f3f;

bool isPrime(ll a){
    if(a == 1) return 0;
    if(a == 2) return 1;
    if(a % 2 == 0) return 0;
    for(ll i = 3; i * i <= a; i += 2)
        if(a % i == 0) return 0;
    return 1;
}

void go(){
    ll z;
    cin >> z;
    if(z < 15){ cout << 6 << '\n'; return; }
    // Build floor(sqrt(z)) bit by bit -- exact even for z up to 10^18.
    ll sq = 0;
    for(ll i = 30; i >= 0; --i)
        if((sq + (1LL << i)) * (sq + (1LL << i)) <= z)
            sq += (1LL << i);
    ll p2 = sq;                 // largest prime <= sqrt(z)
    while(isPrime(p2) == 0)
        --p2;
    ll p3 = sq + 1;             // smallest prime > sqrt(z)
    while(isPrime(p3) == 0)
        ++p3;
    if(p2 * p3 <= z){
        cout << p2 * p3 << '\n';
        return;
    }
    ll p1 = p2 - 1;             // prime just below p2
    while(isPrime(p1) == 0)
        --p1;
    cout << p1 * p2 << '\n';
}

int main(){
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int tt;
    cin >> tt;
    for(int i = 1; i <= tt; ++i){
        cout << "Case #" << i << ": ";
        go();
    }
}
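For reference, the same approach can be written compactly in Python: take the largest prime p2 at or below ⌊√Z⌋ and the smallest prime p3 above it; if p2·p3 exceeds Z, fall back to the pair just below (math.isqrt keeps the square root exact for Z up to 10^18):

```python
import math

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def secret_code(z):
    s = math.isqrt(z)
    p2 = s                     # largest prime <= sqrt(z)
    while not is_prime(p2):
        p2 -= 1
    p3 = s + 1                 # smallest prime > sqrt(z)
    while not is_prime(p3):
        p3 += 1
    if p2 * p3 <= z:           # p3 > sqrt(z), so the next pair would exceed z
        return p2 * p3
    p1 = p2 - 1                # prime just below p2
    while not is_prime(p1):
        p1 -= 1
    return p1 * p2

print(secret_code(2021), secret_code(2020))  # 2021 1763
```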
The idea is the same as the problem of finding a path from root to leaf (the "pathSum_m" method). The difference is that we can have a path not starting from the root that adds up to "sum". So we add two more recursions from the left and right children of the root. Below is the code. Honestly it took me a while to figure it out. Hope it's helpful.
public class Solution {
    public int pathSum(TreeNode root, int sum) {
        if (root == null) return 0;
        Total total = new Total();
        pathSum_m(root, sum, total);
        return total.all + pathSum(root.left, sum) + pathSum(root.right, sum);
    }

    public void pathSum_m(TreeNode root, int sum, Total total) {
        if (root == null) return;
        int subsum = sum - root.val;
        if (subsum == 0) {
            total.all++;
        }
        if (root.left != null) {
            pathSum_m(root.left, subsum, total);
        }
        if (root.right != null) {
            pathSum_m(root.right, subsum, total);
        }
    }

    public static class Total {
        private int all = 0;
    }
}
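The same two-level recursion in Python, for anyone following along in that language (TreeNode here is a minimal stand-in for LeetCode's node class; the tree below is the classic example for this problem):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def path_sum(root, target):
    # Outer recursion: try every node as the start of a path.
    if root is None:
        return 0
    return (paths_from(root, target)
            + path_sum(root.left, target)
            + path_sum(root.right, target))

def paths_from(node, remaining):
    # Inner recursion: count downward paths from `node` summing to `remaining`.
    if node is None:
        return 0
    remaining -= node.val
    return ((1 if remaining == 0 else 0)
            + paths_from(node.left, remaining)
            + paths_from(node.right, remaining))

root = TreeNode(10,
                TreeNode(5,
                         TreeNode(3, TreeNode(3), TreeNode(-2)),
                         TreeNode(2, None, TreeNode(1))),
                TreeNode(-3, None, TreeNode(11)))
print(path_sum(root, 8))  # 3  (paths 5->3, 5->2->1 and -3->11)
```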
ListReport
For space reasons, this blog has been split into 5 parts:
Now that you have seen how to create a Smart Template application, let’s see how to configure its ListReport page with SAP Web IDE.
1 – Open your SAP Web IDE and go into your STDemo app
2 – Create a new folder named annotations under webapp
3 – Right click on this new folder and choose New –> Create Annotation.
4 – Create a new annotation by choosing its name (i.e. annotation1.xml) and by selecting the service data source (in this case GWSAMPLE_BASIC), then click on Next
5 – Click on Finish
6 – If you double click on the manifest.json file, you can check that now a new annotation is present in the app and that it has been tied to the metadata.xml file for the selected service
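For reference, the relevant dataSources piece of manifest.json ends up looking roughly like this (a sketch: the service URI and annotation name follow this example's choices, and the generated paths in your project may differ):

```json
"sap.app": {
  "dataSources": {
    "mainService": {
      "uri": "/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/",
      "type": "OData",
      "settings": {
        "odataVersion": "2.0",
        "annotations": ["annotation1"],
        "localUri": "localService/metadata.xml"
      }
    },
    "annotation1": {
      "type": "ODataAnnotation",
      "uri": "annotations/annotation1.xml",
      "settings": {
        "localUri": "annotations/annotation1.xml"
      }
    }
  }
}
```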
7 – Right click on the new annotation annotation1.xml and choose Open with –> Annotation Modeler
8 – The AM opens. Now we need to choose for which entity set we want to create an annotation. Since ProductSet is the entity set we have chosen as Data Binding OData collection, this is the one automatically proposed. Keep this choice and click on Annotate
9 – A new annotation is created for ProductSet. What we want is to display a list of all members belonging to this collection, so we need to add to this annotation file a new UI.LineItem annotation term: it will take care of displaying this list. Click on the “+” sign located on the Local Annotations row
10 -Add a new UI.LineItem annotation term and click on OK
11 – Once this term has been added, we need to define columns for this table. So click on the “+” sign on the UI.LineItem row
12 – Add a UI.DataField annotation term and click on OK
13 – Do the same for 4 times more. You should have now 5 UI.DataFields added to the UI.LineItem annotation term
14 – Select the first one and bind it to the ProductID value, then click on Apply
15 – Do the same for the other UI.DataFields assigning them to Name, Category, Description and Price
16 – If you start now the application, you will be able to see the ListReport page fully populated
17 – We have not finished yet: we would like to place a filter bar on top of this UI.LineItem component so that we can filter by ProductID and Category. Let's go again to the AM and click on the "+" sign next to the Local Annotations row. Select the UI.SelectionFields annotation term and click on OK
18 – Choose first the ProductID field, then click on Add Path and specify Category as well. Then click on the Apply button
19 – Save the annotation
20 – Refresh or restart the application and you should be able to see two search fields just in the page header
21 – As last step in this chapter, we would like to add a button on top of the UI.LineItem object. We would like, with this button, to execute some special Function Import coming from the backend. These functions, if present, are listed in the metadata.xml file. Click on the “+” sign on the UI.LineItem annotation and add a new UI.DataFieldForAction component.
22 – Define a label for this component and select as action the RegenerateAllData function; then click on Apply. As you can see here this is a special function which has its specification in the metadata.xml file. You just need a button to call it. This function is executed on the backend service.
NOTE: Pay attention that in this case this function does nothing here: it’s just an example to show how you can attach a button which triggers an action on the backend service
23 – This is how the final layout looks like with the new button
Let’s continue with the next part where we’ll learn how to add a Object Page to our application which is displayed when you click on one of the ListReport’s rows: How to use Smart Templates with SAP Web IDE – Object Page!
Thanks for this wonderful post! I'm following these steps with the same standard OData service but having trouble while adding a button. Actually I'm not able to see the action "RegenerateAllData" in the drop-down even though it's available in metadata.xml. Could you please advise on this?
Hi Sudhanshu,
could you please attach here a screen shot of your DataFieldForAction configuration, where you say that your Dropdown box is empty?
Regards,
Simmaco
Hello Simmaco,
Please find attached screen shot.
Thank you!
Hi Sudhanshu,
it seems that your application looks different from mine: you have several annotation files and in this annotation you are using content from another. Not sure if this can be do problem. In order to investigate deeper I would need to have you app. Is it ok for you to send me it privately? Or maybe you can try to follow exactly my process and see if you have the same.
Regards,
Simmaco
Hello Simmaco,
That’s absolutely correct. I deleted two auto created Annotations and it showed me all Function imports for this service. Thanks again !
Also, can i use only function import in DataFieldForAction or any other CRUD operation.
please advise me on this.
Thanks!
Hi Sudhanshu,
in OData V2, an action can only be a function import.
Furthermore, for other operations like CREATE or UPDATE, you would need the payload to be sent, and for DELETE you need to address one specific entry.
It has a different semantic than the function import
So I guess the answer is no…
Cheers,
Carlos
Dear Simmaco,
I have added the annotation1.xml file as you have shown, but it is giving the following error, as attached in the screenshots
Hi,
yes this is a know issue with the latest version of SAP Web IDE. To workaround it you can try to add manually the annotation file in the manifest.json file.
In this way it should work.
Regards,
Simmaco
Thank you, Simmaco.
I have solved the problem by creating the annotation file locally.
Is it possible to have a column in the list report that shows an icon? If yes, how can I do this? Thx
Hi Simmaco,
really good contributions, these posts – just as I like them to be.
The only thing I would ask for….:
Please keep on publishing 😉
Carlos
Hi
Thanks for the amazing blog. I tried using annotations (created in the backend) with a smart table of type Analytical table because I want the table to display subtotals, but I got the error "Select at least one column to perform the search", which I haven't been able to resolve. I was wondering if you could check out the post that I created for . Maybe you might have an idea of what I'm doing wrong.
Kind regards
Thanks – great blog !
Hi Simmaco,
Its a very helpful blog.
I have performed all the steps as mentioned.
I have a small obstacle in this. When I added the DataFieldForAction annotation with label and action, I got a button but without text on it.
Where could I have gone wrong?
Regards,
Divya
Hi Simmaco,
Thanks for sharing such an excellent use case of UI Annotations.
Unfortunately when I am trying to annotate the local annotation file the “Local Annotation” node is not appearing. The pane displays a message that the OData entity is not annotated.
I am using OData Service GWSAMPLE_BASIC and I have created the local annotation file from “New -> Create Annotation” menu option. manifest.json file is also adjusted automatically.
Can you please guide me on this?
Thanks.
Tapas.
Hi Tapas,
The same happened in my Web IDE (Version: 170119), which by the way has no option for "Create Annotation"; instead there is New > "Annotation File".
It worked for me when I deleted the project and started from scratch. When creating the annotation file using the wizard I named it "annotations", so the file is annotations.xml. My manifest.json is below. After that I was able to "annotate" the ProductSet.
Cheers,
Valter
Hi Valter,
I am also facing the same issue. I tried deleting the project a number of times, but every time I get the same issue. What could be the reason?
Thanks,
Sunil
Hi Sunil,
It might be the Web IDE plugin.
Try to add the entry manually in the manifest.json.
Thank you Valter
Hi All,
There is one more reason why we can get this issue, and that is because of schema namespace value in the metadata.xml file. In my case since we were using our own registered namespace the schema namespace value has “/” symbol in it. When I removed this “/” from namespace value, it started working.
Regards,
Sunil Ghatage
Hi Simmaco,
The "Regenerate All" action is not showing when I execute the app. The function import is listed in the action box in the Annotation Modeler, and the code is annotated as you can see below, but the "action" does not appear at the top of the table.
I am using the latest Web IDE (Version: 170119).
Is there anything I am missing?
Below is my annotation:
Very helpful blog, congrats!
Thanks,
Valter
Edited:
I found the problem. There is an issue in the SAP Web IDE about the i18n.
i18n.properties:
I changed from:
to:
and it worked.
Cheers,
Valter
Hi,
I'm using HCP Canary (classic HCP). I generated a 'List Report Application' with the smart template. Then when I try to open annotation.xml with the Annotation Modeler, I get an error saying 'OData metadata can't be loaded. Please check the OData service URI in the app descriptor (manifest.json file) of this project.' But I didn't change anything after template generation. Not sure what needs to be adjusted.
In the console, there is a message as following:
(destination) Destination ‘webide_di’ is either not a Web IDE destination or not valid because the WebIDEUsage or URL properties are missing in the destination settings.
Check your destination settings in the SAP Cloud Platform Cockpit.
Can someone help on what values should be set on the destination webide_di in HCP?
Thanks!
Wenshan
I have the same issue….
OData metadata can’t be loaded. Please check the OData Service URI in the app descriptor (manifest.json file) of this project.
Hi,
Can someone tell me how to enable the “Export to Excel” button (Button isn’t visible)?
Thanks!
Hi,
I am not able to create annotation file.
A "Metadata cannot be loaded" error comes up while creating a new annotation file.
Please help.
Thanks,
Amit
I have the same issue, I can’t find solutions to this problem anywhere. Did anyone find something?
After creating the app, I reloaded the OData service (overwriting the old one). Then it worked, at least in my case.
Hi,
what can I do so that the list in the report is shown automatically from the start,
without having to press the "Go" button?
Best Regards, Thorsten.
Dear Team,
I am not able to create annotation file. It says “metadata not loaded”.
I am trying to follow the steps given in the blog.
Can someone provide steps to add manually or resolve the issue.
Anyhow, this blog is good enough.
Thx,
Raghu
Hi,
I have tried creating the annotation file, but when I try to open it, I have the following message:
Do you have any idea why?
Hi,
I managed to solve the issue “Metadata cannot be loaded.” After creating the Overview Project, you need to modify the manifest.json file as follows:
in the dataSources section you need to delete “/destination/” from the “uri” field and add the “odataVersion”: “2.0” to the settings. After that, try to create a new annotation file and you shouldn’t get the error anymore.
Before:
After:
Cheers,
Roxana | https://blogs.sap.com/2016/04/14/how-to-use-smart-templates-with-sap-web-ide-listreport/ | CC-MAIN-2018-05 | refinedweb | 1,971 | 65.83 |
ETFs as an investment avenue are often associated with passive
fund management style which enables them to be more cost
effective (in terms of expense ratios) than their mutual fund
cousins. However, with the growth of the ETF industry as a whole,
ETF managers are continuously striving for flexibility and new
investors in order to capture more assets.
This paves the way for actively managed ETFs to take center
stage, especially in a highly dynamic market environment. Having
said this, it is prudent to note that there are a number of
actively managed ETFs available in the market today.
However, this article highlights some of the positives of
three such bond ETFs which investors could consider for stability
as well income, especially in this ultra low interest rate
environment.
So far this year investors have been fairly upbeat on the bond
ETFs space. In fact some of the biggest names in this front like
iShares iBoxx $ Investment Grade Corporate Bond (LQD), iShares iBoxx $ High
Yield Corporate Bond (HYG) and Vanguard Total Bond Market (BND)
have witnessed significant popularity this year in terms of asset
accumulation.
There clearly has been a reversal in investor risk appetite in
the third quarter as investors shifted focus from the traditional
'low risk' fixed income ETFs which was pretty much the way to go
for investors in the second quarter, to ETFs tracking riskier
asset classes. However, by no means does it imply that investor
appetite has subsided in the bond ETF space (see Q3 ETF Asset
Report: Investors Back in the Market?).
Yet this is by no means limited to the passive market: in the
actively managed bond ETF space, many products have seen
significant inflows in their asset base in fiscal 2012. In fact
the WisdomTree Emerging Markets Local Debt ETF (ELD), Peritus High
Yield ETF (HYLD) and PIMCO Total Return ETF (BOND) have witnessed
positive inflows of around $200 million, $69 million and $2.67
billion respectively in their asset bases so far this year
(source: Index Universe).
The WisdomTree Emerging Markets Local Debt ETF (ELD)
seeks exposure in sovereign debt securities of emerging markets
denominated in their local currency. ELD is exposed to a variety
of emerging markets, however, it is fairly upbeat on Mexico
(10.63%) and Brazil (10.50%).
It also allocates a substantial portion of its assets to countries
with stronger balance sheets such as Malaysia (10.27%) and Russia
(7.00%), giving ELD relatively few worries on the currency front
(read Buy These Emerging Asia ETFs to Beat China, India).
However, ELD will be subject to a number of emerging market
currencies in total, so the risk is by no means removed.
Additionally, investors should note that the product has an
effective portfolio duration of 4.73 years and an average
maturity of 6.15 years, suggesting that it tracks the
intermediate term of the yield curve and will be subject to
moderate levels of interest rate risk.
The ETF goes beyond tracking an index and strives for steady
income and capital appreciation. A one year look suggests that it
has managed to deliver as it is up by 12.64% on a one year basis
as of September 30th, 2012.
Additionally the yield is quite solid as it has a distribution
yield of 4.25%. Lastly, even though it is an actively managed
ETF, it charges just 55 basis points in fees and expenses.
From the actively managed high yield ETF space we have the Peritus
High Yield ETF (HYLD)
which primarily aims at consistently high levels of cash flow
streams in the form of interest income.
It invests in a variety of non-investment grade corporate debt
securities by primarily employing a bottom up approach of
securities selection. Some of the features of its highly active
portfolio management are to select value creating securities and
at the same time ensure minimum exposure to default risk.
It does this by eliminating risky leveraged-buyout (LBO) based
bonds, which the company sees as not worth the headaches.
Additionally, as a means of managing risk, it develops trigger
points which exhibit a 'position sell' for individual securities
in its portfolio when it violates a particular level, thereby
managing losses.
Furthermore, as a hedge against negative market movements it
can invest in U.S. Treasuries as and when the need arises (see Two
Intriguing Financial ETFs with a REIT Focus).
Thanks to the active management employed by HYLD, it charges a
hefty expense ratio of 1.36%. Nevertheless, the high cost seems
justified when the one year return (as of 30th September
2012) of 15.48% is taken into account.
Moreover, thanks to the ultra low rate policy of the Fed, it
is an appropriate choice for income starved investors as it pays
out a solid distribution yield of 8.27%. However, it comes at the
expense of credit quality. The product has amassed an asset base
of $137.73 million and an average daily volume of about 31,000
shares.
The PIMCO Total Return ETF (BOND) is the ETF version of PIMCO's
flagship blockbuster mutual fund, the PIMCO Total Return
Institutional Fund (PTTRX).
However, the $3.21 billion ETF has been outperforming its
gigantic $169.32 billion mutual fund cousin since its inception
in March of 2012.
The ETF has returned 9.93% since its inception while the
mutual fund has returned 6.23% for the same time period. While
having a much lower asset base certainly has allowed BOND to do
well, one has to wonder if this outperformance can continue into
the future.
Still, just to highlight the disparity between the two, BOND
and PTTRX have a correlation of just 65% between the two since
the launch of the ETF. Also, their one month rolling correlation
has never exceeded 87% and has even gone to the extent of
hitting a low of 30%, although there is admittedly a small sample
size.
The ETF targets to maintain its weighted average duration in
alignment with that of the Barclays Capital U.S. Aggregate Bond
Index, with a maximum deviation of two years either way. The ETF
measures the performance of investment grade debt securities
which are issued by corporates, government and other institutions
(see more in the Zacks ETF Center).
Interestingly, around 86% of the total assets of the ETF are
allocated to Mr. Gross' "Ring of fire" (i.e. countries with the
highest levels of fiscal deficit as a percentage of their GDP):
U.S. 76%, France 1%, Japan 2%, Spain 3% and United Kingdom 4%
(source: xtf.com) (read Time to Consider Chinese Yuan ETFs?).
Nevertheless the ETF could be an appropriate core holding for
investors seeking an exposure to total bond markets. It charges
investors 55 basis points in fees and expenses and has been one
of the highest asset accumulating ETFs this year.
On average, the product does about 477,000 shares daily and
targets the intermediate end of the yield curve, thereby
maintaining an effective duration of 5.2 years. The ETF has a 30
Day SEC yield of 2.09% so it isn't exactly a big yielder although
it arguably is a more stable choice in the? | http://www.nasdaq.com/article/3-actively-managed-bond-etfs-for-stability-and-income-etf-news-and-commentary-cm186771 | CC-MAIN-2014-52 | refinedweb | 1,236 | 61.36 |
CLR SPY and Customer Debug Probes: The PInvoke Calling Convention Mismatch Probe
Defining a PInvoke signature and using DllImportAttribute correctly can be difficult to do, and you normally get little-to-no diagnostic information if you make a mistake. Some mistakes that can be made with PInvoke would be impossible for the CLR to detect, but many mistakes pass through without validation because PInvoke is designed for high-performance access to unmanaged APIs. Thanks to the PInvoke Calling Convention Mismatch probe, however, at least one source of errors can now be caught.
One of the named parameters that can be set on DllImportAttribute is called CallingConvention, which can be used to specify the calling convention of the unmanaged DLL export. The following enumeration values can be used (defined in the System.Runtime.InteropServices namespace):
- CallingConvention.Cdecl. The caller is responsible for cleaning the stack.
- CallingConvention.StdCall. The callee is responsible for cleaning the stack.
- CallingConvention.ThisCall. Used for calling unmanaged methods defined on a class.
- CallingConvention.Winapi. This isn't a real calling convention; it's an alias for the platform's default calling convention. On Windows (excluding Windows CE), the default calling convention is StdCall.
DllImportAttribute assumes CallingConvention.Winapi if none is specified, so users of Win32 APIs typically don't need to specify any calling convention.
Suppose, however, that you want to use PInvoke to call the C runtime library's Bessel function _j0 because you're interested in electromagnetic wave theory, yet there's no equivalent managed API for this. (Not surprisingly, I've never heard anyone complain about this omission!) You might write C# code like the following:
[DllImport("msvcr71.dll")] // There's a bug here!
static extern double _j0(double x);
public static void Main ()
{
double result = _j0(2.345);
}
With the PInvoke Calling Convention Mismatch probe enabled, you would get the following error message when running the program:
Stack imbalance may be caused by incorrect calling convention for method _j0 (msvcr71.dll)
This probe reports such errors whenever it detects that the calling convention of a PInvoke signature does not match that of the target unmanaged method. The problem here is that the CLR treats the function _j0 as if it has the Winapi calling convention (since none was explicitly specified) yet the header file for this function (math.h) shows that it really has the Cdecl calling convention:
_CRTIMP double __cdecl _j0(double);
Therefore, the correct managed definition for the _j0 function would have been the following:
[DllImport("msvcr71.dll",
CallingConvention=CallingConvention.Cdecl)]
static extern double _j0(double x);
Without this probe enabled, this type of error can be very easy to make since, depending on the exact calling convention mismatch, the CLR may still recover without any problems! But in general, this is a serious problem that could cause stack corruption. With this probe enabled, the CLR performs various heuristics to determine if the callee's behavior doesn't match the calling convention that the CLR is told to follow. Note that sometimes this probe detects a signature problem other than an incorrect calling convention, if it still causes a stack imbalance. But when you see a message from this probe, you know there's some kind of bug present!
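As an aside not from the original article, the same cdecl/stdcall distinction shows up in other FFI layers. For instance, Python's ctypes uses ctypes.CDLL for cdecl libraries and ctypes.WinDLL (Windows only) for stdcall ones, and, as with PInvoke, nothing validates the hand-written signature for you. A minimal sketch on Linux, where the C library exports the Bessel function as j0 (msvcr71's export is _j0):

```python
import ctypes
import ctypes.util

# CDLL assumes the cdecl calling convention; WinDLL would assume stdcall.
# The "libm.so.6" fallback assumes a glibc-based Linux system.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# As with DllImport, the signature must be declared correctly by hand;
# declare it wrong and the call is corrupted just as silently.
libm.j0.restype = ctypes.c_double
libm.j0.argtypes = [ctypes.c_double]

print(libm.j0(0.0))  # Bessel J0(0) = 1.0
```

The point mirrors the probe's lesson: the FFI layer trusts your declaration, so a mismatch is your bug to find.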
Because this is an "error probe," you can force a debug break whenever this situation is detected. This is the "Break on Error Messages" feature in CLR SPY.
I also want to repeat that you'll see this probe reporting a problem when running Windows Forms applications that take advantage of the new v1.1 Application.EnableVisualStyles feature (which gives you Windows XP themes without the use of a manifest). This is due to a bug in a PInvoke signature inside System.Windows.Forms for the Win32 DeactivateActCtx API. There are 3 workarounds:
- Disable the PInvoke Calling Convention Mismatch probe, or
- Uncheck "Break on Error Messages" in CLR SPY so you can ignore this message and not provoke the crash, or
- Use an XML manifest to enable XP themes, rather than using the EnableVisualStyles API. | https://docs.microsoft.com/en-us/archive/blogs/adam_nathan/clr-spy-and-customer-debug-probes-the-pinvoke-calling-convention-mismatch-probe | CC-MAIN-2020-34 | refinedweb | 674 | 52.6 |
landscape-sysinfo crashed with ImportError in <module>()
Bug Description
The Landscape Team has requested an SRU for this bug. The required information follows below, and the original bug description is available in the link below.
=== Statement explaining the impact ===
This bug is typically triggered by release upgrades from jaunty to karmic. During the upgrade some Python modules are not importable and cause a crash in the landscape-sysinfo script, which is in turn run by pam-motd upon user login. For this reason the bug potentially affects everyone upgrading from jaunty to karmic. The bug in itself has no other effect than firing an apport bug, however this is likely to make the user think that something went wrong, so fixing it is really important from a user-experience point of view.
=== How the bug has been addressed ===
The landscape-sysinfo script has been modified to catch possible module import errors and exit in that case. The fix is already included in version 1.4.0-0ubuntu0.
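The pattern (treating an ImportError raised mid-upgrade as an expected condition rather than a crash) can be sketched as follows. This is illustrative Python only, not the actual landscape-client patch; the module names are stand-ins:

```python
import importlib
import sys

def safe_import(name):
    """Return the named module, or None if it cannot be imported.

    During a dist-upgrade a dependency can be temporarily unimportable;
    in that case the caller should exit with a non-zero status instead
    of crashing, so pam-motd leaves the motd untouched and apport stays
    quiet.
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

if __name__ == "__main__":
    # "json" stands in for the real landscape imports.
    if safe_import("json") is None:
        sys.exit(1)
```

Exiting non-zero preserves the behavior noted later in this thread: the motd is not updated when the script fails.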
=== Patch ===
Comment #27 of this bug contains a patch created against the landscape-client 1.3.2.3-
=== How to reproduce the bug ===
Issue a release-upgrade from jaunty to karmic and switch user or login as a new user.
=== Regression potential ===
The change is very isolated and there's no possibility of regression.
/usr/lib/
import itertools, md5
/usr/lib/
import sha
System load: 0.39 Swap usage: 2% Users logged in: 1
Memory usage: 34% Processes: 249
=> There is 1 zombie process.
Try:
python -c 'import zope.interface'
This works fine. I commented in my dup'd bug that this happened during upgrade, so I suspect that zope gets into a funny state while update-motd tries to run landscape-sysinfo while the upgrade is happening.
Matt, Kees, is there something going on with python packaging that we should fix? Or is it perhaps just jaunty's transient nature (alpha, beta, etc)? At some point during the upgrade, landscape-sysinfo just wasn't able to find zope.interface, but the package is installed. At least at the end of the upgrade it is.
I think the issue is that crontab is running landscape-sysinfo (by way of update-motd) in the middle of an upgrade. It would be best if landscape-sysinfo handled this in a more graceful way (instead of causing a traceback). This is especially true since it is installed by default.
I expect Kees is right.
Probably the cron job should not run if the package is unconfigured.
But in this case it was a dependency that was being installed, not landscape-common itself (which contains landscape-sysinfo). It was python-
On Mon, Mar 30, 2009 at 09:39:14PM -0000, Andreas Hasenack wrote:
> But in this case it was a dependency that was being installed, not
> landscape-common itself (which contains landscape-sysinfo). It was
> python-
The packaging system takes care of this for you if your dependencies are
correct. It won't allow a package to be configured until its dependencies
have been configured.
--
- mdz
I'm sorry, I don't mean to transform this into a Debian Packaging 101 :) We can continue elsewhere, or just point me to some docs or other packages that have a similar problem.
So what if only python-
On Mon, Mar 30, 2009 at 10:03:28PM -0000, Andreas Hasenack wrote:
> I'm sorry, I don't mean to transform this into a Debian Packaging 101 :)
> We can continue elsewhere, or just point me to some docs or other
> packages that have a similar problem.
>
> So what if only python-
> sysinfo is installed and is not being upgraded in this example. Then
> suddenly when python-
> version), the cron job hits. How can the cron job detect this situation
> and decide to not run?
Oh, that wasn't clear from this report. If only python-
being upgraded, I would expect the window for this race to be very small
(but still present). The window is very large, though, when upgrading from
8.10 to 9.04. Because so many packages are being upgraded at once, the
packages will stay unconfigured (and potentially non-functional) for a long
time.
I'm surprised this hasn't come up before (which means it probably has and
I'm not aware of it). There may be a standard pattern for dealing properly
with this. I suggest taking the question to <email address hidden>.
--
- mdz
So, the situation has changed a bit with Karmic, as update-motd is gone, replaced by pam-motd. landscape-sysinfo is no longer run by cron, but apparently directly on login. But the problem is basically the same: if you log in during an upgrade, an apport bug will be fired. Note that the motd won't be broken, probably because it detects the error code and thus doesn't copy the new motd.
Anyway, the way to fix it is to catch the error in landscape-sysinfo, so that the exception doesn't bubble up, but still exit with an error status, so that the motd is not updated. This is what I've done in the branch.
Nice simple fix, +1
Untested, but effective, +1.
I'll need to decide if we want this fix to be included in karmic before it gets out.
The fix has been merged in landscape-client trunk in r148.
qa + 1, apport stays quiet now regarding import errors in landscape-common, while with the previous landscape-common an import error would trigger it.
This bug was fixed in the package landscape-client - 1.3.2.4-
---------------
landscape-client (1.3.2.
*)
-- Free Ekanayaka <email address hidden> Fri, 09 Oct 2009 18:21:24 +0200
The fix is released in Karmic. The problem happens with the jaunty version still running during the upgrade.
I have updated to Karmic with all available updates. After a reboot there are still jaunty packages active?
Synaptic does not report any obsolete packages whatsoever.
Walldorf2000, the crash happened *during* the upgrade, but it is being *reported* to you via apport after the reboot when you login.
Unless I completely misunderstood how apport crash reports work...
This is happening to me near the end of the upgrade to Karmic.
Hi,
The Landscape team would like to apologise for the confusion that this bug has caused, and the bad experience users have had during their upgrades.
This bug was originally identified during the early releases of Karmic, but we didn't get a fix in place until the 10th of October 2009, after the freeze for Ubuntu had occurred. Essentially, the bug was caused by the automated cron job that updates the landscape-sysinfo information continuing to run during the upgrade process, which failed because some of the libraries that it relied on were removed briefly during the actual upgrade routine.
The error that is reported is caused by the import of these libraries, and is trivial in nature, it just means that the cron job didn't complete once during the upgrade, unfortunately apport catches these errors and reports them as bugs (which we accept it is).
A fix has now been put in place which will hopefully avoid this problem in future, and this will make it into Ubuntu at the earliest possible opportunity.
Thanks,
The Landscape Team
Dear Landscape Team,
Trust us, we are grateful that this is fixed!
If this was Windows 7 or Vista.. it wouldn't be fixed until the next service pack update... interaction like this is why we all adopted Ubuntu as our OS of choice.
Best Regards,
Roger
Since this appears to be a regression in jaunty-updates, I'm bumping this to critical.
Martin: it's not a regression, this bug has been present for a while. It's fixed in Karmic, but the fix was done after the last SRU was submitted.
This bug was actually present in the jaunty package, so I'm removing the regression-update tag. As mentioned in comment #19, this is already fixed in Karmic.
The attached diff created against lp:ubuntu/jaunty-proposed/landscape-client solves the situation. I'm going to ask Mathias Gug to sponsor an SRU upload to jaunty-proposed for it.
Was it present in Jaunty at release?
@Scott: yes and no.
Yes, in the sense that if you try to upgrade a jaunty system with the original landscape-client package version from jaunty at release, you will hit the bug, in the same way you would with the landscape-client package version now in jaunty-updates. The code of landscape-sysinfo hasn't changed at all between these two versions.
No, in the sense that the landscape-sysinfo script in the original jaunty landscape-client package is working fine in jaunty per se. The bug is triggered by the upgrade, and in particular because of the transition from update-motd (in jaunty) to pam-motd (in karmic), which makes the script run at every login.
Uploaded to jaunty-proposed.
Marking Fixed Released in Karmic and Lucid
Accepted landscape-client into jaunty-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https:/
The updated landscape-sysinfo script works as expected, and doesn't produce a traceback if a Python module is not importable. Please move the landscape-client package on to jaunty-updates.
This bug was fixed in the package landscape-client - 1.3.2.3-
---------------
landscape-client (1.3.2.
* Fix crash in landscape-sysinfo due to an import error during system wide
upgrades (LP: #349996)
-- Free Ekanayaka <email address hidden> Tue, 03 Nov 2009 11:44:39 +0100
Hello Jim,
can you please run this command in a terminal, as your user?
landscape-sysinfo
Please paste the whole output here. | https://bugs.launchpad.net/ubuntu/+source/landscape-client/+bug/349996 | CC-MAIN-2019-35 | refinedweb | 1,631 | 63.39 |
Model monkey-patching how-toJon Tara Mar 20, 2015 12:02 PM
You might want to monkey-patch a model (in Ruby) to override one or more of the standard methods.
As an example, you might want to delete related records automatically when a record is deleted. So, you can monkey-patch destroy in your model.
Unfortunately, Rhom models lack the handy callbacks provided by (Ruby on Rails') ActiveRecord. ActiveRecord has callbacks that you can define that will be called at various points - for example, before or after deleting a record. Rhom models have no such callbacks. (They would be awfully handy, hint, hint! Maybe something to consider for newORM?)
(Aside: I took a look at newORM. One major difference is that much of the code is now written in (cross-platform) C++, rather than Ruby. There is still a Ruby object factory, but most of the core functionality, e.g. CRUD is done in C++ code.)
Note that you cannot "subclass", because models are not derived from some common class. They are produced by an object factory, and worst, it is done for you automagically through a file-naming convention.
Here's how I monkey-patched my Venue model to delete any related Song records.
Note: unfortunately, delete_all does not call destroy for each record. For my own use case, this is fine, since I only ever delete Venue records individually, using destroy(). Again, those callbacks would be really handy! It might require considerable refactoring of the Rhom code to provide some of the callbacks, though, since I'd imagine Rhom just hands parameters for, e.g. delete_all() to SQLite, and so there would need to be some means for SQLite to send some callback for each record deleted.
Note: in the code below, the {{logInfo}} is just some preprocessing I do on my Ruby files. It inserts a call to Rho::Log (or not, depending on build-time options options).
BTW, did you know that using transactions both protects against partial updates, as well as bringing a HUGE performance boost? Make sure to use transactions whenever you create or modify a bunch of records. It's especially effective when seeding a database from e.g. some downloaded data. And also especially so when you are adding records to fixed-schema tables with a lot of indices. While in the transaction, SQLite is just writing flat "log" records, which is very fast. (I'm not sure if another thread is starting to insert real records while the log accumulates.) In any case, the transaction is completed and control returned to your code much sooner than without the transaction.
venue.rb
# Redefine destroy so that related Song records are also destroyed.
#
# We cannot "subclass" destroy, because the model is created by a factory.
# So, we have to monkey-patch.
#
# For an explanation of the "method wrapping" used here, see:
#
#
# Note that destroy is NOT called if you call the delete_all class method.
# This is fine for our use case, as we will only be deleting venues using the
# destroy instance method.
orig_destroy = instance_method :destroy

define_method :destroy do
  db = ::Rho::RHO.get_src_db self.class.name.to_s
  db.startTransaction
  begin
    Song.delete_all_with_venue object
    orig_destroy.bind(self).()
    db.commitTransaction
    {{#logInfo}} "Successfully destroyed Venue and contingent records" {{/logInfo}}
  rescue
    db.rollbackTransaction
  end # begin
end # define_method :destroy
song.rb
def self.delete_all_with_venue(venue_id)
  {{#logInfo}} "Deleting all Songs with venue_id = #{venue_id}" {{/logInfo}}
  delete_all :conditions => {:venue_id => venue_id}
end
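The transaction speed-up described above is general SQLite behavior, not something Rhodes-specific, so it is easy to illustrate outside Rhom. A minimal sketch with Python's built-in sqlite3 module (the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE songs (id INTEGER PRIMARY KEY, venue_id INTEGER)")

# One transaction around the whole batch: SQLite journals the inserts
# cheaply and commits once, instead of syncing after every statement.
with conn:  # commits on success, rolls back on any exception
    conn.executemany(
        "INSERT INTO songs (venue_id) VALUES (?)",
        [(v % 10,) for v in range(1000)],
    )

print(conn.execute("SELECT COUNT(*) FROM songs").fetchone()[0])  # 1000
```

The same two benefits apply here as in the post: no partial batch survives an exception, and the bulk insert completes far faster than 1000 individual commits.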
Re: Model monkey-patching how-toJon Tara Mar 22, 2015 11:25 AM (in response to Jon Tara)
Note that this may not really be the best way to deal with deleting contingent records.
SQLite can do this on it's own, since it supports triggers.
That seems much preferable to me, and so I am going to explore just how much and how easily we can use SQLite directly (we have find_by_sql, but is there an easy way to issue ANY arbitrary SQL statement?) Need to see what version we have, if it supports triggers, and if Rhodes builds SQLite with trigger feature enabled.
Has anybody tried it?
Re: Model monkey-patching how-toJon Tara Mar 23, 2015 11:21 AM (in response to Jon Tara)
In 5.x, the new Database object provides access to execute an SQL statement. executeSQL, executeBatchSQL
Re: Model monkey-patching how-toJon Tara Mar 23, 2015 3:18 PM (in response to Jon Tara)
Rhodes' SQLite is built to support triggers. In fact, Rhom uses triggers itself to deal with synchronization. (Discovered that by exploring a database with SQLite command-line.) | https://developer.zebra.com/thread/30128 | CC-MAIN-2017-34 | refinedweb | 759 | 65.73 |
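Since triggers turned out to be available, the contingent-delete idea from the start of the thread could be pushed into the database itself. Here is a minimal illustration of the concept using Python's built-in sqlite3 module; the schema is invented for the example, and Rhom's own tables would differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE venues (id INTEGER PRIMARY KEY);
    CREATE TABLE songs  (id INTEGER PRIMARY KEY, venue_id INTEGER);

    -- When a venue row is deleted, delete its songs too.
    CREATE TRIGGER venue_cascade AFTER DELETE ON venues
    BEGIN
        DELETE FROM songs WHERE venue_id = OLD.id;
    END;
""")

conn.execute("INSERT INTO venues (id) VALUES (1)")
conn.executemany("INSERT INTO songs (venue_id) VALUES (?)", [(1,), (1,), (1,)])

conn.execute("DELETE FROM venues WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM songs").fetchone()[0])  # 0
```

Unlike the Ruby monkey-patch, a trigger fires no matter how the delete is issued, so it would also cover the delete_all case that destroy() misses.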
Created on 2014-05-05 23:11 by cool-RR, last changed 2020-01-10 12:13 by Zac Hatfield-Dodds. This issue is now closed.
I want to use big numbers for length.
>>> class A:
... __len__ = lambda self: 10 ** 20
>>> len(A())
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
len(A())
OverflowError: cannot fit 'int' into an index-sized integer
While this is classed as a CPython implementation detail (see issue 15718) it doesn't sound like it is likely to be changed (see issue 2723).
Whoops; sorry -- accidental title change by typing `__len__` into something that wasn't the search box.
Stupid fingers...
(I suspect this issue is a duplicate of an existing issue.)
Mark: I thought it was too, but the two I noted were the closest I could find. Maybe you'll find something even more on point :)
If `len()` signature can't be changed to return Python int objects (unlimited) then the OverflowError may contain the actual `.length`
property instead (based on msg66459 by Antoine Pitrou)
operator.length():
def length(sized):
    """Return the true (possibly large) length of `sized` object.

    It is equivalent to len(sized) if len doesn't raise
    OverflowError i.e., if the length is less than sys.maxsize on
    CPython; otherwise return OverflowError.length attribute
    """
    try:
        return len(sized)
    except OverflowError as e:
        return e.length
That's pretty evil. :-)
I recommend this be closed: too much impact on existing code for too little benefit.
CPython has historically imposed some artificial implementation specific details in order make the implementation cleaner and faster internally (i.e. a limit on the number of function arguments, sys.maxsize limits, etc.) | https://bugs.python.org/issue21444 | CC-MAIN-2020-45 | refinedweb | 281 | 57.98 |
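A concrete illustration of where the limit lives: len() funnels the dunder's result through an index-sized C integer, but nothing stops you from calling __len__ directly, so the full value is already recoverable without any new API:

```python
import sys

class A:
    def __len__(self):
        return 10 ** 20            # far larger than sys.maxsize

big = A()

try:
    len(big)                       # CPython coerces through a ssize_t
except OverflowError as exc:
    print("len() refused:", exc)

print(big.__len__())               # the method itself is unrestricted
print(big.__len__() > sys.maxsize) # True
```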
Introduction
A common challenge I came across while learning Natural Language Processing (NLP) – can we build models for non-English languages? The answer has been no for quite a long time. Each language has its own grammatical patterns and linguistic nuances. And there just aren’t many datasets available in other languages.
That’s where Stanford’s latest NLP library steps in – StanfordNLP.
I could barely contain my excitement when I read the news last week. The authors claimed StanfordNLP could support more than 53 human languages! Yes, I had to double-check that number.
I decided to check it out myself. There’s no official tutorial for the library yet so I got the chance to experiment and play around with it. And I found that it opens up a world of endless possibilities. StanfordNLP contains pre-trained models for rare Asian languages like Hindi, Chinese and Japanese in their original scripts.
The ability to work with multiple languages is a wonder all NLP enthusiasts crave for. In this article, we will walk through what StanfordNLP is, why it’s so important, and then fire up Python to see it live in action. We’ll also take up a case study in Hindi to showcase how StanfordNLP works – you don’t want to miss that!
Table of Contents
- What is StanfordNLP and Why Should You Use it?
- Setting up StanfordNLP in Python
- Using StanfordNLP to Perform Basic NLP Tasks
- Implementing StanfordNLP on the Hindi Language
- Using CoreNLP's API for Text Analytics
What is StanfordNLP and Why Should You Use it?
Here is StanfordNLP’s description by the authors themselves:
StanfordNLP is the combination of the software package used by the Stanford team in the CoNLL 2018 Shared Task on Universal Dependency Parsing, and the group’s official Python interface to the Stanford CoreNLP software.
That’s too much information in one go! Let’s break it down:
- CoNLL is an annual conference on Natural Language Learning. Teams representing research institutes from all over the world try to solve an NLP based task
- One of the tasks last year was “Multilingual Parsing from Raw Text to Universal Dependencies”. In simple terms, it means to parse unstructured text data of multiple languages into useful annotations from Universal Dependencies
- Universal Dependencies is a framework that maintains consistency in annotations. These annotations are generated for the text irrespective of the language being parsed
- Stanford’s submission ranked #1 in 2017. They missed out on the first position in 2018 due to a software bug (ended up in 4th place)
StanfordNLP is a collection of pre-trained state-of-the-art models. These models were used by the researchers in the CoNLL 2017 and 2018 competitions. All the models are built on PyTorch and can be trained and evaluated on your own annotated data. Awesome!
Additionally, StanfordNLP also contains an official wrapper to the popular behemoth NLP library – CoreNLP. This had been somewhat limited to the Java ecosystem until now. You should check out this tutorial to learn more about CoreNLP and how it works in Python.
Below are a few more reasons why you should check out this library:
- Native Python implementation requiring minimal effort to set up
- Full neural network pipeline for robust text analytics, including:
- Tokenization
- Multi-word token (MWT) expansion
- Lemmatization
- Parts-of-speech (POS) and morphological feature tagging
- Dependency Parsing
- Pretrained neural models supporting 53 (human) languages featured in 73 treebanks
- A stable officially maintained Python interface to CoreNLP
What more could an NLP enthusiast ask for? Now that we have a handle on what this library does, let’s take it for a spin in Python!
Setting up StanfordNLP in Python
There are some peculiar things about the library that had me puzzled initially. For instance, you need Python 3.6.8/3.7.2 or later to use StanfordNLP. To be safe, I set up a separate environment in Anaconda for Python 3.7.1. Here’s how you can do it:
1. Open conda prompt and type this:
conda create -n stanfordnlp python=3.7.1
2. Now activate the environment:
source activate stanfordnlp
3. Install the StanfordNLP library:
pip install stanfordnlp
4. We need to download a language’s specific model to work with it. Launch a python shell and import StanfordNLP:
import stanfordnlp
then download the language model for English (“en”):
stanfordnlp.download('en')
This can take a while depending on your internet connection. These language models are pretty huge (the English one is 1.96GB).
A couple of important notes
- StanfordNLP is built on top of PyTorch 1.0.0. It might crash if you have an older version. Here’s how you can check the version installed on your machine:
pip freeze | grep torch
which should give an output like
torch==1.0.0
- I tried using the library without GPU on my Lenovo Thinkpad E470 (8GB RAM, Intel Graphics). I got a memory error in Python pretty quickly. Hence, I switched to a GPU enabled machine and would advise you to do the same as well. You can try Google Colab which comes with free GPU support
That’s all! Let’s dive into some basic NLP processing right away.
Using StanfordNLP to Perform Basic NLP Tasks
StanfordNLP comes with built-in processors to perform five basic NLP tasks:
- Tokenization
- Multi-Word Token Expansion
- Lemmatisation
- Parts of Speech Tagging
- Dependency Parsing
Let’s start by creating a text pipeline:
nlp = stanfordnlp.Pipeline(processors = "tokenize,mwt,lemma,pos")
doc = nlp("""The prospects for Britain’s orderly withdrawal from the European Union on March 29 have receded further, even as MPs rallied to stop a no-deal scenario. An amendment to the draft bill on the termination of London’s membership of the bloc obliges Prime Minister Theresa May to renegotiate her withdrawal agreement with Brussels. A Tory backbencher’s proposal calls on the government to come up with alternatives to the Irish backstop, a central tenet of the deal Britain agreed with the rest of the EU.""")
The processors = “” argument is used to specify the task. All five processors are taken by default if no argument is passed. Here is a quick overview of the processors and what they can do:
Let’s see each of them in action.
Tokenization
This process happens implicitly once the Token processor is run. It is actually pretty quick. You can have a look at tokens by using print_tokens():
doc.sentences[0].print_tokens()
The token object contains the index of the token in the sentence and a list of word objects (in case of a multi-word token). Each word object contains useful information, like the index of the word, the lemma of the text, the pos (parts of speech) tag and the feat (morphological features) tag.
Lemmatization
This involves using the “lemma” property of the words generated by the lemma processor. Here’s the code to get the lemma of all the words:
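A minimal sketch of such a helper (the name extract_lemma is illustrative; it assumes only that each stanfordnlp word object exposes .text and .lemma, and uses pandas for the table):

```python
import pandas as pd

def extract_lemma(doc):
    """Collect every word and its lemma from a processed document."""
    rows = []
    for sentence in doc.sentences:
        for word in sentence.words:
            rows.append({"word": word.text, "lemma": word.lemma})
    return pd.DataFrame(rows, columns=["word", "lemma"])
```

Running extract_lemma(doc) on the document built earlier yields one row per word.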
This returns a pandas data frame for each word and its respective lemma:
Parts of Speech (PoS) Tagging
The PoS tagger is quite fast and works really well across languages. Just like lemmas, PoS tags are also easy to extract:
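A sketch along those lines (assumptions: word objects expose a .pos attribute as in stanfordnlp's Word class, and the tag-to-explanation dictionary, abbreviated here to a few Penn Treebank tags, covers the full tag set in the complete version):

```python
import pandas as pd

# abbreviated mapping between PoS tags and their meaning
pos_dict = {
    "NN":  "noun, singular or mass",
    "NNS": "noun, plural",
    "VBD": "verb, past tense",
    "JJ":  "adjective",
    "PRP": "pronoun, personal",
}

def extract_pos(doc):
    """Tabulate each word with its PoS tag and a readable explanation."""
    rows = []
    for sentence in doc.sentences:
        for word in sentence.words:
            rows.append({"word": word.text,
                         "pos": word.pos,
                         "exp": pos_dict.get(word.pos, "NA")})
    return pd.DataFrame(rows, columns=["word", "pos", "exp"])
```

Calling extract_pos(doc) on the document processed earlier produces the table discussed next.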
Notice the big dictionary in the above code? It is just a mapping between PoS tags and their meaning. This helps in getting a better understanding of our document’s syntactic structure.
The output would be a data frame with three columns – word, pos and exp (explanation). The explanation column gives us the most information about the text (and is hence quite useful).
Adding the explanation column makes it much easier to evaluate how accurate our processor is. I like the fact that the tagger is on point for the majority of the words. It even picks up the tense of a word and whether it is in base or plural form.
Dependency Extraction
Dependency extraction is another out-of-the-box feature of StanfordNLP. You can simply call print_dependencies() on a sentence to get the dependency relations for all of its words:
doc.sentences[0].print_dependencies()
The library computes all of the above during a single run of the pipeline. This will hardly take you a few minutes on a GPU enabled machine.
We have now figured out a way to perform basic text processing with StanfordNLP. It’s time to take advantage of the fact that we can do the same for 51 other languages!
Implementing StanfordNLP on the Hindi Language
StanfordNLP really stands out in its performance and multilingual text parsing support. Let’s dive deeper into the latter aspect.
Processing text in Hindi (Devanagari Script)
First, we have to download the Hindi language model (comparatively smaller!):
stanfordnlp.download('hi')
Now, take a piece of text in Hindi as our text document:
hindi_doc = nlp("""केंद्र की मोदी सरकार ने शुक्रवार को अपना अंतरिम बजट पेश किया. कार्यवाहक वित्त मंत्री पीयूष गोयल ने अपने बजट में किसान, मजदूर, करदाता, महिला वर्ग समेत हर किसी के लिए बंपर ऐलान किए. हालांकि, बजट के बाद भी टैक्स को लेकर काफी कन्फ्यूजन बना रहा. केंद्र सरकार के इस अंतरिम बजट क्या खास रहा और किसको क्या मिला, आसान भाषा में यहां समझें""")
This should be enough to generate all the tags. Let’s check the tags for Hindi:
extract_pos(hindi_doc)
The PoS tagger works surprisingly well on the Hindi text as well. Look at “अपना” for example. The PoS tagger tags it as a pronoun – I, he, she – which is accurate.
Using CoreNLP’s API for Text Analytics
CoreNLP is a time tested, industry grade NLP tool-kit that is known for its performance and accuracy. StanfordNLP has been declared as an official python interface to CoreNLP. That is a HUGE win for this library.
There have been efforts before to create Python wrapper packages for CoreNLP but nothing beats an official implementation from the authors themselves. This means that the library will see regular updates and improvements.
StanfordNLP takes three lines of code to start utilizing CoreNLP’s sophisticated API. Literally, just three lines of code to set it up!
1. Download the CoreNLP package. Open your Linux terminal and type the following command:
wget
2. Unzip the downloaded package:
unzip stanford-corenlp-full-2018-10-05.zip
3. Start the CoreNLP server:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000
Note: CoreNLP requires Java 8 to run. Please make sure you have JDK and JRE 1.8.x installed.
Now, make sure that StanfordNLP knows where CoreNLP is present. For that, you have to export $CORENLP_HOME as the location of your folder. In my case, this folder was in the home itself so my path would be like
export CORENLP_HOME=stanford-corenlp-full-2018-10-05/
After the above steps have been taken, you can start up the server and make requests in Python code. Below is a comprehensive example of starting a server, making requests, and accessing data from the returned object.
a. Setting up the CoreNLPClient
b. Dependency Parsing and POS
c. Named Entity Recognition and Co-Reference Chains.
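The client pattern behind those three steps looks roughly like this. It is a sketch: the annotator list, timeout, and memory values are illustrative choices, and the import is guarded in case the package is absent; CoreNLPClient and client.annotate() are the library's documented entry points:

```python
try:
    from stanfordnlp.server import CoreNLPClient
except ImportError:                    # stanfordnlp may not be installed
    CoreNLPClient = None

def first_lemmas(ann, n=5):
    """Pull the first n (word, lemma) pairs out of a CoreNLP annotation.

    The annotation is a protobuf: repeated .sentence entries, each with
    repeated .token entries carrying .word and .lemma fields.
    """
    pairs = []
    for sentence in ann.sentence:
        for token in sentence.token:
            pairs.append((token.word, token.lemma))
            if len(pairs) == n:
                return pairs
    return pairs

def demo(text="The prospects for an orderly withdrawal have receded."):
    # Requires the CoreNLP server set up as above and $CORENLP_HOME exported.
    with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "lemma"],
                       timeout=30000, memory="4G") as client:
        return first_lemmas(client.annotate(text))
```

The with-block starts the server, sends the annotation request, and shuts the server down again on exit.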
What I like the most here is the ease of use and increased accessibility this brings when it comes to using CoreNLP in Python.
My Thoughts on using StanfordNLP – Pros and Cons
Exploring a newly launched library was certainly a challenge. There’s barely any documentation on StanfordNLP! Yet, it was quite an enjoyable learning experience.
A few things that excite me regarding the future of StanfordNLP:
- Its out-of-the-box support for multiple languages
- The fact that it is going to be an official Python interface for CoreNLP. This means it will only improve in functionality and ease of use going forward
- It is fairly fast (barring the huge memory footprint)
- Straightforward set up in Python
There are, however, a few chinks to iron out. Below are my thoughts on where StanfordNLP could improve:
- The size of the language models is too large (English is 1.9 GB, Chinese ~ 1.8 GB)
- The library requires a lot of code to churn out features. Compare that to NLTK where you can quickly script a prototype – this might not be possible for StanfordNLP
- Currently missing visualization features. It is useful to have for functions like dependency parsing. StanfordNLP falls short here when compared with libraries like SpaCy
Make sure you check out StanfordNLP’s official documentation.
End Notes
There is still a feature I haven’t tried out yet. StanfordNLP allows you to train models on your own annotated data using embeddings from Word2Vec/FastText. I’d like to explore it in the future and see how effective that functionality is. I will update the article whenever the library matures a bit.
Clearly, StanfordNLP is very much in the beta stage. It will only get better from here so this is a really good time to start using it – get a head start over everyone else.
For now, given that such amazing toolkits (CoreNLP) are coming to the Python ecosystem and research giants like Stanford are making an effort to open source their software, I am optimistic about the future.
2 Comments
Very nice article. Specially the hindi part explanation. It will open ways to analyse hindi texts. Thanks for sharing!
Hey Rakesh,
Thanks for your comment. Indeed, not just Hindi but many local languages from all over the world will be accessible to the NLP community now because of StanfordNLP.
Sanad | https://www.analyticsvidhya.com/blog/2019/02/stanfordnlp-nlp-library-python/ | CC-MAIN-2019-18 | refinedweb | 2,305 | 63.8 |
Hi *,
I'm not on the users list, so please reply all if you want me to see the reply.
I have written a batch file for windows that outputs a C# class which can
be included in your project to get the status of the working copy the
project was built on.
Using a batch file allows Windows 2000 & XP systems to run with only a
dependency on `svnversion`. No need for Python, Perl, etc. on the build
system.
Preferably this should be hooked to the build system in visual studio to
be run pre-build.
Find attached and remove the .txt extension to use.
USAGE:
genSvnVer.bat <namespace>
If it is found to be useful, please place in contrib as that was the first
place I looked for an example, then the ML.
Thanks to Dominic Anello on the ML for pointing out how to use the FOR
loop to get `svnversion` output into an environment variable.
Cheers,
Chris
--
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
This is an archived mail posted to the Subversion Users
mailing list. | http://svn.haxx.se/users/archive-2006-02/0473.shtml | CC-MAIN-2013-20 | refinedweb | 192 | 71.85 |
Ticket #5024 (closed defect: fixed)
Errors with Python API information => fixed in SVN
Description
First:
On page 25 of SDKRef.pdf included in the SDK:
"change to bindings/webservice/python/samples/ ... then run ./vboxshell.py"
vboxshell.py is in bindings/glue/python/sample
Second:
On page 37 of the SDKRef.pdf:
#from vboxapi import VirtualBoxManager
I don't know for sure, but I believe that the line got broken up and should be:
from vboxapi import VirtualBoxManager
Third:
The makefile in bindings/webservice/python/samples/ contains
PYTHONPATH=../lib python ./clienttest.py
There is no clienttest.py
Forth:
sdk/installer is missing the vboxapisetup.py file that's found in sdk/install within the VirtualBox installed directory
Change History
comment:2 Changed 6 years ago by ni81036
- Owner set to ni81036
Thanks, will correct docs appropriately.
comment:3 Changed 6 years ago by ni81036
- Summary changed from Errors with Python API information to Errors with Python API information => fixed in SVN
Fifth:
On page 38,
should be, | https://www.virtualbox.org/ticket/5024 | CC-MAIN-2015-48 | refinedweb | 174 | 54.42 |
One of the great things about Gutenberg is the ability to compartmentalize different types of content within blocks. One of the blocks that I’ve been using a lot of recently is the code block. This block by default will render something like this:
#include "stdio.h"

int main() {
    // printf() displays the string inside quotation
    printf("Hello, World!");
    return 0;
}
While this is acceptable, it's not very pretty. I used to use the SyntaxHighlighter Evolved plugin.
Unfortunately this doesn’t work perfectly with Gutenberg at the moment, and I was hoping for something in a block. Luckily I found this…
UPDATE: SyntaxHighlighter Evolved now works with Gutenberg, but I still like how code-syntax-block works with the core block and isn't a block of its own.
Marcus Kazmierczak has made a plugin to extend the core code block to allow syntax highlighting:
#include <stdio.h>

int main() {
    // printf() displays the string inside quotation
    printf("Hello, World!");
    return 0;
}
I really like this and I think it compliments Gutenberg nicely 🙂
4 replies on “Gutenberg, Code, and Highlighting”
According to these docs, Gutenberg already includes highlighting.
But, you need to enable it by editing the Gutenberg files.
That’s awesome! Thanks for sharing 🙂
Hopefully it’ll be on by default in the future.
Uhh, we are talking about Gutenberg for WordPress right, not a completely different CMS (which that is)?
You are correct! I didn’t read deep enough, and that is not for the new WordPress editor 🙂 | https://derrick.blog/2018/06/22/gutenberg-code-and-highlighting/ | CC-MAIN-2020-29 | refinedweb | 250 | 73.47 |
Create a VM object.
#include <zircon/syscalls.h>

zx_status_t zx_vmo_create(uint64_t size, uint32_t options, zx_handle_t* out);
zx_vmo_create() creates a new virtual memory object (VMO), which represents a container of zero to size bytes of memory managed by the operating system.
The size of the VMO will be rounded up to the next page size boundary. Use
zx_vmo_get_size() to return the current size of the VMO.
One handle is returned on success, representing an object with the requested size.
The following rights will be set on the handle by default:
ZX_RIGHT_DUPLICATE - The handle may be duplicated.
ZX_RIGHT_TRANSFER - The handle may be transferred to another process.
ZX_RIGHT_READ - May be read from or mapped with read permissions.
ZX_RIGHT_WRITE - May be written to or mapped with write permissions.
ZX_RIGHT_MAP - May be mapped.
ZX_RIGHT_GET_PROPERTY - May get its properties using object_get_property.
ZX_RIGHT_SET_PROPERTY - May set its properties using object_set_property.
The options field can be 0 or ZX_VMO_NON_RESIZABLE to create a VMO that cannot change size. Children of a non-resizable VMO can be resized.
The ZX_VMO_ZERO_CHILDREN signal is active on a newly created VMO. It becomes inactive whenever a child of the VMO is created and becomes active again when all children have been destroyed and no mappings of those children into address spaces exist.
TODO(ZX-2399)
zx_vmo_create() returns ZX_OK on success. In the event of failure, a negative error value is returned.
zx_vmar_map()
zx_vmo_create_child()
zx_vmo_get_size()
zx_vmo_op_range()
zx_vmo_read()
zx_vmo_replace_as_executable()
zx_vmo_set_size()
zx_vmo_write() | https://fuchsia.googlesource.com/fuchsia/+/4b2a7378368301a5c187a6111ae336f4fe2cccda/zircon/docs/syscalls/vmo_create.md | CC-MAIN-2021-21 | refinedweb | 235 | 59.19 |
Sirs, thousand salutes to all of you. I cannot imagine what it would feel like to complete such a noble deed, but I'm sure the smiles on the faces of the locals are immeasurable.
Again, thousand salutes.
Rao.
MashAllah, hats off to you guys!
Aoa friends Mashallah we had a very successful trip this weekend. Following are the details on the various projects
Bridges:
Site 1 - Anakar: A beautifully built 12 feet wide and about 60 foot long bridge that is connecting the village of Anakar (pop. approx. 1000) to Kalam and the rest of the world. This is a truckable bridge which is capable of withstanding a fully loaded truck crossing it.
Site 2 - Ghayal: A smaller (approx 10 feet wide and about 30 feet long) bridge connecting Ghayal with Kalam and the rest of the world.
Site 3 - Palir: A foot bridge that connects the village of Palir to the main road across the river Swat.
Site 4 - Baffer: 100 bags of cement have been arranged for this bridge. We could not visit this site for shortage of time. IA the next team visiting Kalam will have positive updates from here as well.
Power Projects:
Site 1 - Anakar: A 25 KVA power project was inaugurated. This is now supplying electricity to 180 odd houses of Anakar. The local people have worked exceptionally hard to build a new water channel for this. The channel is supported by a 3 foot wide retention wall that is, at its highest, about 10 foot high. This was built on a self help basis by the people of Anakar at no cost.
Site 2 - Nazimabad: A 24 KVA power project for which the channel is being built. The channel is a whopping 200 meter long. The retention wall being built for it is 2.5 to 3 feet wide and 2 feet (at the start) to about 15 feet high (at the highest point). Again this work is being done totally free of cost by the people of Nazimabad on a self help basis. What is more commendable is that the locals have donated parts of their land for this channel. This will IA be completed in a week to ten days after we arrange the penstock pipe for them.
Two more power generation sets have been delivered to our local representative in Kalam (Gulzada Khan). Sites for these will be finalized this week and the team going to Kalam this coming weekend will inaugurate work on these.
Protective Retention wall: We had delivered G-wire for building of two protective walls in the valley of Gabral. These were for protecting Mosques from the raging Swat river. People from one of these two sites have started work on their wall and IA we will have more details on that soon.
School: We now have an MBA graduate from Abbotabad, Mr. Mehroze Kiani (my cousin), who has shifted to Anakar to help us establish and run the school there. For this we will pay him a fixed stipend (that I have already arranged for). We have also hired two local teachers from the same village for this school. We held a detailed meeting with the local elders about this school and its running and they were very happy and enthusiastic about it. I am glad and feel proud to report to you that today we had 67 children turn up at the school. I am getting daily reports from Mehroze about the progress, issues and needs for the school. Since this is another project (Project Educate) in IJC's Flood Relief Initiative, I will post a separate post on the relief website for this and try to update that regularly IA.
There are a lot of stories and interesting events to share and IA I will be doing so with you all in the coming few days. Thank you all again Ehsan
Thank you all for all the nice words and the encouragement. We could not have achieved this, nor can we go on to achieve more goals, without the support and prayers of friends. Thank you again, Ehsan
Hats off to Ehsan Kiani, Asad Marwat and Siddique for making this all possible. Also Mr. Gulzada Khan is the man on the ground in Kalam who deserves all the credit and applause for these achievements! Lets keep this up!
Dear Ek, Asad, Siddique. Ever since you guys volunteered to take care of these victims, I have admired your unselfishness. I just want you to know that you are doing a terrific job. Congratulations on getting up the power plant and for the next phase of development; you are becoming something of a legend for all of us. We have learned more from you than from any teacher we ever had!! I am always there & just a call away. Love you Janj
Thanks a lot colleagues for your kind words. Honestly it's teamwork, and where we have a team that comprises each one of you, I don't see why we can't achieve the unachievable. God bless.
Thank you all for the encouragement and for all the support and help. As Asad bhai said, this is teamwork that involves IJC, Engaged and many other friends. I seriously believe that this is just the beginning and is nothing as compared to the phenomenal destruction that has been caused. May Allah give us all the strength, time, support and courage to build on this start and continue with the flood relief efforts. Thank you again Ehsan
Nazimabad channel construction continued.
Near the construction site found Trout (too young to catch/eat)
Parapilot very well catches wish u best of luck.
Desert Devil, Juma Khan, Mehroze Kiani, Jehangir Mir and myself went to Kalam over the weekend to assess various relief operations that IJC and the group have been helping with and supporting.
True to IJC style, it turned out to be a typical marathon driving session. Total distance ISB to Swat was 250km (journey 3 hours 40 minutes), Swat to Kalam 100km (journey 7 hours with breakfast break), and we drove around 30-35 km around Kalam going to and from various work sites etc. Return from Kalam to Swat took 7 hours 30 minutes (with 30-40 minutes mechanical repair work) and return from Swat to ISB 4 and a half hours (including 1/2 hour nap in-between). So total distance covered in the return trip was 735km in approx 36 hours.
Desert Devil, Juma and Mehroze left at 7.00pm on Friday night for Mingora, whereas I finished work and, after picking up Jehangir at 9:15pm, we left for Mingora as well. Roads were empty, so we managed in reasonable time. We reached Mingora PTDC at 1am where Desert Devil had already booked a room for us. We were up at 4 again and left before 5am for Kalam to avoid traffic on one-lane dirt tracks. Unfortunately we still encountered plenty of traffic and reached Kalam just before 12 mid day. The condition of the road between Swat and Kalam was horrendous, with me and Jahangir in my Patrol Pickup and no load at the back. Jahangir and myself are definitely convinced that our internal organs have shifted around quite a few times due to extreme bouncing around. In fact Jehangir almost swore on his life that he won't go back to Kalam on this vehicle :). Most likely Jehangir is visiting an orthopaedic surgeon today to get his back / spine realigned (the passenger bench seat had no lateral support) :(
Must thank Jahangir for his wonderful company, support and technical / mechanical expertise during the trip. I did allow him to smoke now n again so that he could keep his spirits high and eyes wide awake :). In and around Kalam we went around all the various places n projects that IJC + Group are undertaking/ overseeing/ helping with. We crossed the vehicles on the newly constructed bridges etc. Lunch was very graciously hosted for us by people of Anakar with a long speech of thnx in their local language. (The word Shukria is used the same and was said about a hundred times, so I am assuming it was a thnx speech :).) Mr Khanzada kindly hosted tea for us twice and his house for us to rest and straighten our backs. We left around 08:15pm from Kalam for the return journey, and contrary to our expectations, the return journey had probably even more traffic. We reversed numerous times on the single narrow temporary tracks on the side of mountains, sometimes even up to 200 meters or so, and it was rather challenging. Did develop an unexplained engine idle/ throttle issue during the return journey, which made life rather difficult. But we eventually managed everything and finally reached home safely.
More details n pic later………….this is just to bump up the thread for now
The following text, I am copying from Asad's email, it contains the relevant info. Hopefully Asad won't mind
Alhamdollilah came back safe and sound this morning via wonron express from Kalam. Team, comprising NN, Juma Khan, Jahangir, Mehroz Kiyani and myself, left for Kalam on Friday night. Overnite, rather 4 hours, stay at Mingora and reached Kalam via a road very few have travelled (I mean ijcians); got stuck up in a head-on traffic jam for 3 hours plus.
Sitrep on projects in Kalam,
Anakar road and bridge opened and first ijc vehicles crossed over to Anakar valley. Had lunch with the elders.
Visited the power house at Anakar which is now up and running.
Drove to Nazimabad and then walked down to the turbine placement location. Monitored the water channel construction. Everything was on schedule. The locals carried 1 ton of turbine on their shoulders and at times on a makeshift log pallet. Unbelievable till we witnessed it. The slope was very steep where NN and myself had to get help, from our comrades, to get down.
Visited Ghayal bridge which is now under regular usage.
Identified 2 more locations for pedestrian bridges. One could be a potential vehicular bridge.
Bafar bridge work is in full swing. One power plant including turbine and alternator will be handed over to Ghayal elders and the second to a village near Gabral. Elders of both the villages will meet with our rep Mr. Gulzada and Project coordinator Mehroz Kiyani tomorrow.
One additional village needed a smaller, less costly turbine, hence the order has already been placed and the same will be shipped in a week's time. This will serve a small community of 25 houses.
Palir pedestrian bridge has been completed and is in service, but the work was below our required quality....
Good write up doc.... u missed one part: your pickup runs on a number 14 spanner ("14 number ke chabi")
Doc nn, Many thanks for the update! Wonderful to read details of the drive from Mingora to Kalam. I can understand fully what you guys went through, though the dala I took up there had a fair amount of load at the back and was probably somewhat more comfy! Nonetheless, that track is not easy going by any means!
Wonderful to hear great things on the projects we have started. I am sure you would agree that its a great feeling to visit the place, meet the people and see the things in person!
May God bless you for the commitment and effort here!
DesertDevil Sahib, hats off once again for all your effort and contribution! Wonderful to hear that things are going well and projects are proceeding at pace! Its become quite a mega-initiative I must say and may Allah give us the strength to see this to the finish line!
On second thoughts, there probably is no finish line here we are in this one for life..... hahaha
was that one hell of a ride or was that one hell of a ride? in all honesty, I have neverrr traveled a road less travelled :p I think all the jumping around in the passenger seat has also messed up my head
the key words of this trip were: "asad bhai aaloo ka pata kar lain" ("Asad bhai, find out about the potatoes"), "yar gari phir band ho gayee" ("man, the car has stalled again") and "doc speed up yaar!" hahahaha...
what a beautiful place Kalam! I had never been there ever before so it was a treat.. the hospitality of the people, the picturesque scenery and the weather...
Now the horrors: the dust and the never ending bumps... if i had a rupee for every time i jumped out of my seat, i would not be sitting here writing this rather would be out buying myself a car at least
I am very proud of the extraordinary work done by IJC with the help of the locals. All the projects that were undertaken by Ehsan Bhai and Asad bhai were either complete or near completion. The site of the school being run by IJC and friends was also as beautiful as the cause itself!
A big shout out to doc saab for the wonderful company and to asad bhai and juma bhai for the entertainment over the radio! hahahah.. You guys made all the bumping and jumping less painful! God Bless you guys!
Finally! I would urge everyone to visit these places for themselves to really understand how badly our countrymen have been affected. No words or pictures can truly put across their misery and the amount of destruction in those areas.Also i would recommend that please chose a ride other than a Nissan Patrol Pickup! hahaha
God Help us all...
Jahangir.
very well said Mir..... btw Aloo is for Rs 20/kg in Kalam. and thats a good quality Aloo.
Wow, just read the thread! great going guys, I'm speechless.
Jehangir has very kindly taken just 376pics (2.5gb) in under 12 hours :).
So be patient, I have to go through them when I have enough time so i can resize and upload them.
For now am just posting 2 pics of our vehicular crossing of Anakar bridge!
(true to form, did do a short burn-out near the end of bridge, forgot about the man standing on the rear bed and and he nearly fell-off ) | https://www.pakwheels.com/forums/t/project-reconnect-swat-ijcs-flood-relief-initiative/136084?page=6 | CC-MAIN-2017-30 | refinedweb | 2,363 | 80.21 |
To. This technique, also called “bag of words” is a simple first approach and much more effective than it seems.
We will first see the general principles of this technique and then we will see how with scikit-learn (Python) we can set up a practical case and see its execution.
The principle of the bag of words
In the end, the principle of the bag of words is quite simple. You can say that it even looks like the one-hot encoding that we saw in a previous article . Its principle can be summed up in 3 phases:
- The decomposition of words. This is also called tokenization.
- The constitution of a global dictionary which will in fact be the vocabulary.
- The encoding of the character strings in relation to the vocabulary formed previously.
Here is a block diagram
Tokenization
Tokenization is a simple but essential step. The principle is simple: you have to cut the sentence (s) into phonemes or words. Of course we immediately think of cutting with spaces, but we will not forget the punctuation elements either. Fortunately Scikit-Learn helps us in this step with its ready-to-use tokenization functions:
CountVectorizer() et TfidfVectorizer()
Vocabulary creation
It is obvious that we are not going to deal with a single sentence! we will have to process a large number of them, and therefore we are going to constitute a sort of dictionary (or vocabulary) which will consolidate all the words that we have tokenized. See diagram above.
This dictionary (vocabulary) will then allow us to perform the encoding necessary to “digitize” our sentences. In fact and to summarize, this step allows us to create our bag of words. At the end of this step, we will therefore have all the (unique) words that make up the sentences in our data set. Attention to each word will also be given an order, this order is very important for the encoding step which follows.
Encoding
This is the step that will transform our words into numbers. Once again the idea is simple, from a sentence in your dataset, you match the vocabulary previously formed. Be careful of course to resume the order established in the previous step!
So for each new sentence, it must be tokenized (same method as for the constitution of the vocabulary) and then confront each word. If the word of the sentence exists in the vocabulary, then it suffices to put a 1 for the location (order) of the word in the vocabulary.
Implementation with Scikit-Learn
For example we will take the data set that we had scraped here (video games).
import pandas as pd from sklearn.feature_extraction.text import CountVectorizer T = pd.read_csv("../webscraping/meilleursjeuvideo.csv") cv = CountVectorizer() texts = T["Description"].fillna("NA") cv.fit(texts)
The CountVectorizer () function is a magic function which will split (via regular expression) the sentence. Here we will create a vocabulary from the Description column and “tokenize” all the columns.
cv = CountVectorizer() texts = T["Description"].fillna("NA") cv.fit(texts) print ("Taille: {}", len (cv.vocabulary_)) print ("Contenu: {}", cv.vocabulary_)
Here is the vocabulary automatically created by Scikit-Learn (we get it via vocabulary_):
Taille: {} 2976Contenu: {} {'dans': 681, 'ce': 435, 'nouvel': 1793, 'épisode': 2944, 'de': 688, 'god': 1197, 'of': 1827, 'war': 2874, 'le': 1480, 'héros': 1303, 'évoluera': 2970, 'un': 2755, 'monde': 1676, 'aux': 273, 'inspirations': 1357, 'nordiques': 1784, 'très': 2731, 'forestier': 1118, 'et': 1001, 'montagneux': 1683, 'beat': 332, 'them': 2654, 'all': 122, 'enfant': 938, 'accompagnera': 55, 'principal': 2080, 'pouvant': 2056, 'apprendre': 184, 'des': 710, 'actions': 71, 'du': 806, 'joueur': 1420, 'même': 1741, 'gagner': 1159, 'expérience': 1030, 'the': 2652, 'legend': 1486, 'zelda': 2903, 'breath': 381, 'wild': 2882, 'est': 1000, ..., 'apparaît': 177, 'tribu': 2715, 'wario': 2875, 'land': 1471, 'pyramide': 2157, 'peuplant': 1960}
You will notice that it is a list made up of words and numbers. These numbers are the encoding orders that we mentioned above, quite simply.
You will also notice that the vocabulary is made up of 2976 words. This means that the sentences that make up the Description column contain 2976 distinct words.
Now we have to create the representation of the data itself, for that we have to call the transform () function:
bow = cv.transform(texts) print ("Sac de mots: {}", bow) sdm = bow.toarray() sdm.shape print(sdm)
The result is a matrix (200, 2976) because we have 200 rows and 2976 new columns.
array([[0, 0, 0, ..., 0, 0, 0], [0, 1, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 1], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]])
Limit vocabulary size
If you take a look at the vocabulary you will notice that a lot of the words are unnecessary. We are talking about link words or other numbers which will not really add any values thereafter. In addition, the number of columns with the word bag technique is likely to grow impressively, we must find a way to limit them.
A good method is therefore to remove unnecessary words from the vocabulary. For this we have at least 2 approaches:
- Limit vocabulary creation via the number of occurrences
- Use stop words
Limitation via the number of occurrences
By default when we created our vocabulary from datasets we added an entry when it appeared at least 2 times. We can simply increase this parameter which will have the effect of limiting the number of entries in the vocabulary and therefore giving more importance to words which are used several times.
With scikit-Learn just use the min_df parameter as follows:
cv = CountVectorizer(min_df=4).fit(texts) print ("Taille: {}", len (cv.vocabulary_)) print ("Contenu: {}", cv.vocabulary_)
The size of the vocabulary then decreases considerably:
Taille: {} 379
Stop Words
But what are Stop Words? Simply a list of explicit words to be removed from the vocabulary.
Scikit-Learn offers a predefined list of words in its API but unfortunately in English, for other languages you will have to create your own list.
Of course you can start from scratch and create your own excel file. You can also start from an existing list. For French, a good starting point is on the site . You just have to copy this list via the site or why not also scrap it ! In short, I did it for you and do not hesitate to resume the list here (Github) .
Then nothing could be simpler,
stopwordFR = pd.read_csv("stopwords_FR.csv") cv = CountVectorizer(min_df=4, stop_words=stopwordFR['MOT'].tolist()).fit(texts) print ("Taille: {}", len (cv.vocabulary_)) print ("Contenu: {}", cv.vocabulary_)
The size decreases further:
Taille: {} 321
There you go, now instead of sentences you have recovered digital data that is much more usable than textual information. | http://aishelf.org/bag-of-words/ | CC-MAIN-2021-31 | refinedweb | 1,116 | 64.3 |
I am revisiting this issue after 6 months. I stopped working on this
because everyone was busy working towards 1.7 release. The then thread
was getting little attention.
Link to the issue is
and following is the initial comment in the issue tracker. Pasting it
here.
<snip>
Add an option to ignore files with only property changes and no content
changes. e.g. svn log --ignore-properties
Motivation: many users are not interested in reviewing changes to
property changes and only care about content changes.
</snip>
I would like to give summary of discussions that happened regarding this
issue.
In issue desc2 Hyrum said the following.
A subset of this problem (namely, ignoring changes to svn:mergeinfo) is
being addressed on the ignore-mergeinfo branch:
But I was in the opinion that --ignore-properties requirement is bit
different from --ignore-merge-info. So I submitted a patch against trunk
adding new option --ignore-properties to log sub command. This patch
implements new functionality on ra_local layer alone.
Hyrum replied to the patch asking me why I was not working against
ignore-mergeinfo branch. Then I started looking at the code in
ignore-mergeinfo branch and tried to use the ignored_prop_mods parameter
to filter out the properties. I started submitting series of patches
against the branch. But later I was stuck because we are allowed to
define any props we want in the 'svn:' namespace, and ignored_prop_mods
takes list of pre-defined keywords.
Also I agree with what Hyrum said:
> The "skip any and all prop mods" functionality is actually easier than
> the "skip selective prop mods" functionality because of the way we
> store history in the FS and walk history in the repos. Long story
> short: knowing that there was a prop mod is almost a free operation
> (in the context of other stuff); finding out *what* was modified (and
> therefore what should be filtered) takes a bit more work.
The entire thread can be read here.
I believe that my patch attached in the above thread is very straight
forward and simple. It will be great if someone could take a look at the
patch and give comments.
Later when I pinged the list on a separate thread about the patch,
Daniel Becroft said the following here
>.
I think we can eventually add another option to log command
--properties-only to list revisions that have properties changes alone.
Finally Stefan Sperling said the following here.
> I haven't been following this closely. But as Hyrum points out,
> it seems that more design work is needed before much coding can be done.
>
> Branch or not, you'll need to find a full committer willing to help
> with the design and review the implementation.
>
> The problem with that is that most developers are currently focused
> on working towards the 1.7 release. There is little room at the
> moment for designing new features that aren't planned to appear in 1.7.
>
> So maybe we can postpone work on this feature for later?
>
> In the meantime, there are quite a number of issues with milestones
> 1.7.0 and 1.7-consider. Those are likely to catch more attention at
> the moment, since everyone is focused on getting the release done.
I agreed with Stefan and stopped working on this. Now since we are about
to branch for 1.7, I hope this is the right time to work on this
again. With respect to design issue, it will be great if the patch I
submitted initially is reviewed and give comments about the approach. I
think it is very straight forward and simple.
I can re-work my patch against latest trunk as soon as we branch 1.7.
Thanks and Regards
Noorul
Received on 2011-07-01 15:17:15 CEST
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2011-07/0005.shtml | CC-MAIN-2016-26 | refinedweb | 645 | 64.3 |
Pretty printing objects with multiline strings in terminal with colors
Pacharapol Withayasakpunt
・2 min read
If you have use JavaScript for some time, you should notice that pretty printing JSON in Node.js is as simple as
JSON.stringify(obj, null, 2).
(Also, if you need multiline strings, there is js-yaml.)
- But there is never coloring
An alternative is
console.log, which in Node.js, it is not as interactive as web browsers with Chrome DevTools, and the depth in by default limited to 2.
- How do you maximize depths?
- Easy, use
console.dir(obj, { depth: null })-- console.dir
BTW, in my test project, I got this,
Even with proper options (
{ depth: null, breakLength: Infinity, compact: false }), I still get this
So, what's the solution?
You can customize
inspect by providing your own class.
import util from 'util' class MultilineString { // eslint-disable-next-line no-useless-constructor constructor (public s: string) {} [util.inspect.custom] (depth: number, options: util.InspectOptionsStylized) { return [ '', ...this.s.split('\n').map((line) => { return '\x1b[2m|\x1b[0m ' + options.stylize(line, 'string') }) ].join('\n') } }
(BTW, worry about
\x1b[2m? It is How to change node.js's console font color?)
And, replace every instance of multiline string with the class.
function cloneAndReplace (obj: any) { if (obj && typeof obj === 'object') { if (Array.isArray(obj) && obj.constructor === Array) { const o = [] as any[] obj.map((el, i) => { o[i] = cloneAndReplace(el) }) return o } else if (obj.constructor === Object) { const o = {} as any Object.entries(obj).map(([k, v]) => { o[k] = cloneAndReplace(v) }) return o } } else if (typeof obj === 'string') { if (obj.includes('\n')) { return new MultilineString(obj) } } return obj } export function pp (obj: any, options: util.InspectOptions = {}) { console.log(util.inspect(cloneAndReplace(obj), { colors: true, depth: null, ...options })) }
Now the pretty printing function is ready to go.
If you only need the pretty printing function, I have provided it here.
patarapolw
/
prettyprint
prettyprint beyond `JSON.stringify(obj, null, 2)` -- Multiline strings and colors
I also made it accessible via CLI, and possibly other programming languages, such as Python (via JSON / safeEval, actually).
Demystifying Open Source Contributions
This quick guide is mainly for first-time contributors and people who want to start helping open sour...
Thanks for sharing. That Stack Overflow post with all of the colors listed out is very handy. I don't do a ton of node.js. I use a lot of python in the back end and really like the colorama package. | https://dev.to/patarapolw/pretty-printing-objects-in-terminal-with-multiline-strings-with-colors-3jd7 | CC-MAIN-2020-16 | refinedweb | 408 | 61.43 |
Consuming Salesforce Data using the REST API from a .NET WCF Message Handler
- Posted in:
- salesforce
- .net
- integration
(Puedes ver este artículo en español aquí)
In a previous article, A .Net WCF Service Handler for Salesforce Workflow Outbound Messages that Calls Back to Salesforce using SOAP API, I explained how to create a .Net WCF service to handle an outbound message and how to get additional data from Salesforce using the SOAP API. In this article I use the same example but instead of using the SOAP API I will use the REST API. You might wonder why do I need to change my SOAP code to use REST instead, and the answer is simple: you might have the SOAP API disabled (because of the Salesforce edition you have) in your organization and only have the REST API available.
Allow me to refresh what we would like to accomplish: our client wants to integrate accounts in Salesforce with accounts in their ERP, so every time a new opportunity is marked as “Closed Won” in Salesforce the account is created on the ERP. I suggest you to read the previous article which explains how to set things up for the workflow and the Visual Studio project to create the WCF service that handles the workflow outbound message.
Salesforce is able to expose its metadata as a REST service. As we did in the case of SOAP, we could use the REST API to query the account information, but this could be simplified by exposing an APEX class as a REST service. Salesforce makes this very simple. You can follow the steps explained in the Force.com Apex Code Developer's Guide for an overview on how to do this. In this article I will expose an APEX class as a REST service and then I will consume this service from our message handler created in .Net
Exposing the REST Service in Salesforce
This is actually very simple, all you need to do is open the Salesforce Developer Console and from the menu select “File->New->Apex Class”. Name it AccountRestService and replace the code with the following:
@RestResource(urlMapping='/Account/*') global with sharing class AccountRestService { @HttpGet global static Account doGet() { RestRequest req = RestContext.request; RestResponse res = RestContext.response; String accountId = req.requestURI.substring(req.requestURI.lastIndexOf('/')+1); Account result = [SELECT Id, Name, BillingStreet, BillingCity, BillingState, BillingPostalCode FROM Account WHERE Id = :accountId]; return result; } }
Notice in line 1 that we use the special @RestResource to tell Salesforce that this is actually a REST service. In line 3 we specify that the doGet method will be called by HTTP GET. The URL for this service will be the following:
https://{instanceName}.salesforce.com/services/apexrest/Account/{accountId}
The https://{instanceName}.salesforce.com/services/apexrest is the URL base address for all REST services, and the /Account/{accountId} is specified by our class definition in the urlMapping parameter of the @RestResource tag. For example you could use the following URL to get the details for a specific account:
There is one thing we haven’t considered yet: security. If you put the above URL into a browser you will get an INVALID_SESSION_ID error. If you read the documentation you will learn that the HTTP request issued against the REST service needs an Authorization HTTP header. You could create a connected app and use OAuth to call the login REST service and get a session id, but this is actually complex (I will explain it in a future article) but in our case, since we are calling this service from an outbound message handler, we already have the session id. Remember from the previous article that we marked the outbound message to “Send Session ID”:
So, all we need to do is to build the right HTTP request from our WCF message handler to call our REST service.
Calling the REST Service from Visual Studio
Open the Visual Studio project you created in the previous article (you can get a sample from here). We will use RestSharp as our REST client to call the service. Using NuGet, add the RestSharp package to your project. RestSharp can automatically transform the JSON text returned from a REST service to a strong typed object. Let’s create a model to encapsulate the data returned from the REST service: in Visual Studio, create a Model folder and add a class named Account to it. Replace the code with the following:
namespace WorkflowNotificationServices.Model { public class Account { public string Id { get; set; } public string Name { get; set; } public string AccountNumber { get; set; } public string BillingStreet { get; set; } public string BillingCity { get; set; } public string BillingState { get; set; } public string BillingPostalCode { get; set; } public string BillingCountry { get; set; } } }
We have defined an account class that encapsulates the data from the account object in Salesforce. Notice that for simplicity we have named the properties the same as the object fields in Salesforce (you don’t have to name things the same, you could use JSON.Net to get around it, but is not a topic we would like to do in this article).
Now, open the file OpportunityNotificationService.svc.cs and change the method CreateAccount with the following code:
private bool CreateAccount(string url, string sessionId, WorkflowNotificationServices.Opportunity opportunity) { int recordsAffected = 0; Uri uri = new Uri(url); RestClient restClient = new RestClient(new Uri(String.Format("https://{0}", uri.Host))); RestRequest request = new RestRequest("services/apexrest/Account/{id}"); request.AddUrlSegment("id", opportunity.AccountId); request.AddHeader("Authorization", String.Format("Bearer {0}", sessionId)); IRestResponse<Model.Account> response = restClient.Execute<Model.Account>(request); Model.Account account = response.Data; ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["ERP"]; using (SqlConnection cn = new SqlConnection(connectionString.ConnectionString)) { using (SqlCommand command = new SqlCommand("salesforce_crearCliente",; }
The lines highlighted are the important ones. Notice in line 6 how we use the URL we got from the SOAP message sent by Salesforce to our WCF service and we get the host (and thus the instance name) we need to send the HTTP request to. In line 7 we create an HTTP request using RestSharp and specify the endpoint of our REST service, as explained before. In line 8 we specify the account Id we got from the SOAP of the outbound message on opportunity. In line 9 we set the security part we need to make this work. We need to set the Authorization header to the value Bearer {sessionId}. The session id we get it again from the SOAP of the outbound message sent by Salesforce (remember we marked the “Send session ID” field in the outbound message definition). Finally, in line 12 we make the HTTP call and tell RestSharp to convert the result to our Account object we created before. The rest of the code is just the same, using the values returned by the REST service (now strongly typed into a class) to call a stored procedure on the ERP.
Testing the Call Back
To test our web service we just follow the same steps outlined in the previous article. Notice that we only changed the API used to obtain data from Salesforce: we were using SOAP before and now we use REST
You can get the sample project here:
Nice Post helped me a lot, Thank you for uploadingSumit Datta | http://bloggiovannimodica.azurewebsites.net/post/consuming-salesforce-data-using-the-rest-api-from-a-net-wcf-message-handler | CC-MAIN-2020-45 | refinedweb | 1,208 | 51.48 |
89457/can-i-know-where-is-java-logs-on-centos-7
Whenever you require to explore the constructor ...READ MORE
export JAVA_HOME="$(/usr/libexec/java_home -v 1.6)"
or
export JAVA_HOME="$(/usr/libexec/java_home -v 1.7)"
or
export ...READ MORE
Here are two ways illustrating this:
Integer x ...READ MORE
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
public class WriteFiles{
...READ MORE
While programming we often write code that ...READ MORE
Let's say your file is in C:\myprogram\
Run ...READ MORE
Please check the below-mentioned syntax and commands:
To ...READ MORE
Here is what you can do.Just use packagesmatching to ...READ MORE
Specinfra is escaping the characters in the with_version chain ...READ MORE
Make sure that you have removed the ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/89457/can-i-know-where-is-java-logs-on-centos-7 | CC-MAIN-2020-50 | refinedweb | 145 | 65.08 |
Reviewing Structured Editors - Part Deux
July 8, 1998
The Seybold Report on Internet Publishing
Special for XML.com
In our first look at the new crop of XML editors, we noted that the application space was expanding along a continuum of publishing requirements, from layout-intensive programs like Bladerunner from Interleaf to basic text and structure editing in new programs like Henry Thompson's XED. In this update, we review some of the XML editors shown in Paris (including the role of XML in Office 9), look at editing support for XSL, and give you our thoughts on where this market is heading.
To balance our coverage of the new, the small, the weird, and the extreme in this
market,
we have collected links to Seybold coverage of the mainstream, market-leading editors,
added
them to the mix and consolidated a list of links to Seybold coverage of XML editors.
XML Editors in the Paris Springtime
SGML/XML Europe '98, held in Paris May 19-21, provided an opportunity to catch up
on some
structured editors not covered in our earlier article (Bladerunner, Raven, XED, and
XMetal).
Vervet Logic's XML Pro, now in release 1.0, is the only brand-new entrant over the
last two
months, so, while it wasn't shown in Paris, we include coverage of it here.
But the top news was a peek, albeit a brief one, at the type of angle brackets we can expect from Office 9.
Excosoft—A new entrant in the U.S. market, Excosoft's Documentor has been under development for years in the demanding tech-pubs department of the Swedish telecom giant, Ericsson. You can buy it out of the box with a companion application that supplies minimal file management capability. And it is the only editor to support namespace-like cut-and-paste between documents created with different DTDs.
Grif—About all we can say about Grif is that it is not dead again, at least not yet. This is the French company that fielded SGML and HTML editors, only to go on the auction block last year. It resurfaced in Paris as part of the Toronto-based I4I.
I4I (Infrastructures for Information)—In addition to showing Grif's Symposia Pro, which it just acquired, it has yet another variation on how to turn Word into an XML editor. I4I's S4 implementation looks promising, offering real-time validation for arbitrary DTDs.
Microsoft—HTML is clearly Microsoft's focus for the next release of Office, but XML does play an interesting supporting role. Even though this is not a structured editor like the others reviewed here, because of its stature in the market, we go into some detail on Microsoft's use of XML in Word. We also got a preview of new HTML support in PowerPoint and Excel.
Stilo—Still waiting at the starting line, Stilo's editor nevertheless embodies some interesting ideas that someone ought to commercialize, sooner or later.
TimeLux—TimeLux's little Luxembourgian editor, EditTime, is starting to look as if it might outgrow its niche as the multilingual editor of choice for the European Union. Definitely worth a look-our Seybold coverage shows you how this one has developed over the past three years.
Vervet Logic—Here is our first look at XML Pro, a new take on what the heck an XML editor can be. (And a read-my-McLipps T-shirt to whomever market justifies this name.)
With StyleIn Paris, the chorus line was singing: "When the XSL spec settles down, we'll support it." Are the vendors just whistling DSSSL, or will the market really implement XSL when it is a stable spec? Several vendors have initial XSL creation utilities, including XSL Styler from ArborText, and several can export an XSL style sheet. Only one, to our knowledge, EditTime from TimeLux, can import an XSL style sheet created elsewhere and display the result.
We feel that the prospects for XSL implementation are much stronger than they were for DSSSL, but here, too, time is working against the XSL committee.
Why stronger? First, XSL is a second-generation general formatting language for structured markup and therefore can benefit from knowledge of DSSSL's strengths and weaknesses. Second, XSL is simpler. Third, there is a larger potential market, although how many users not doing commercial publishing will need XSL above CSS remains to be seen. Last, the notion of an application-independent style specification itself is gaining ground through CSS implementation. The risk, of course, is that CSS will be so well established by the time XSL comes out that it will be betamaxed and never gain widespread use.
The primary mitigating factor for the XSL group is that CSS will just not support print in the manner that XSL can support print. And finer and more flexible control over layout may not remain a problem reserved for print. Jon Bosak, the chair of the XML Working Group, speculates that in the long term, the requirements for on-screen layout will surpass those for print because of the endless variety of display devices that must be supported and the eventual need for complex hyperlinking and navigation. If this bears out, serious (read: business-critical) Web sites will be lining up for the sheet music as soon as the ink is dry on XSL.
Will XML Editors Ever Become Mainstream Products?
Microsoft Office brings in $500 million each month. How many of the people buying
it
really care about direct control over structure or metadata? According to the research
of
CAP Ventures, the worldwide market for SGML editing software in all of 1997 was well
under
$500 million. Not nearly enough, evidently, for Microsoft to pay attention.
So as existing SGML applications and new HTML applications migrate to XML, will structured editors remain a tiny slice of the overall editorial marketplace? Or is this a market poised for growth? Does everyone who needs a structured editor for writing those honking big SGML tech manuals already have an Adept or FrameMaker or Author/Editor license? Does anyone writing business reports, letters, catalogs, personal Web pages, letters, messaging metadata, memos, World Cup predictions, or Biblical exegeses really care if XML editors become mainstream, commercial, end-user products?
The size of this market lies somewhere between the market for Word and the niche market for SGML editors with its relatively high license costs and extremely steep start-up curves. This is a big spread, and it leaves quite a bit of room for speculation and development but, at this point, no obvious path to large scale commercial success.
Our view of available tools indicates that each is developing its own direction. While the mainstream SGML editors, which were used primarily for tech doc, were able to consolidate a core feature set that began to define "SGML editor," there is no such consensus on what the core features are, or even what the core market is, for a general-purpose XML editor. The traditional publishing audience is just one audience, but even in that sector, there are specialized needs for translation, catalog publishing, technical documentation, dictionaries, and other applications. Add to this the audience for all sorts of dynamic, personalized Web pages, for application messaging, and for Web metadata, and the sum is that it is much too early to predict the eventual size of the market for structured editors, especially since the two close companions to XML-XSL and Xlink-are not yet ready or well known or well understood.
For the time being, XML for documents is not all that different from SGML for documents, at least until we have full display of XML with XSL styles in Web browsers. Good programmers don't work for free, so, with all this fresh development going on, customers should be prepared to spend more on licensing a specialized XML editing tool than they do for a mass-market word processor. Or they could just jigger their own workarounds to get those angle brackets into the files. | https://www.xml.com/pub/a/SeyboldReport/ipx9806.html | CC-MAIN-2018-34 | refinedweb | 1,340 | 57 |
A priority queue is a dynamically resizing container adaptor in the C++ Standard Library. By default, its elements are ordered so that the first element (the top) is always the greatest element in the queue. priority_queue::push() is a method available in the STL which inserts a new element into the priority queue.
Syntax:
priorityq_name.push(val);
Parameter: The
priority_queue::push() accepts a single parameter:
•The value that has to be added to the container
Return value: none

Complexity: logarithmic in the number of elements in the queue (one heap sift-up per insertion).
Example of priority_queue::push() method
    #include <iostream>
    #include <queue>
    using namespace std;

    int main () {
        priority_queue<int> q;

        q.push(10); // inserts 10 to queue, top = 10
        cout << "Top: " << q.top() << endl;

        q.push(20); // inserts 20 to queue, top = 20 (largest element)
        cout << "Top: " << q.top() << endl;

        q.push(40); // inserts 40 to queue, top = 40
        cout << "Top: " << q.top() << endl;

        q.push(30); // inserts 30 to queue, top is still 40
        cout << "Top: " << q.top() << endl;

        return 0;
    }
Output:
Top: 10
Top: 20
Top: 40
Top: 40
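By default priority_queue is a max-heap because it uses std::less as its comparator. Passing std::greater instead makes push() and top() behave as a min-heap. The helper function below is our own illustration, not part of the STL:

```cpp
#include <functional>
#include <queue>
#include <vector>

// Top of a min-heap after pushing every value.  std::greater<int>
// inverts the default std::less ordering, so top() is the smallest.
int min_top_after_pushes(const std::vector<int>& values) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> q;
    for (int v : values)
        q.push(v);  // each push() restores the heap property
    return q.top();
}
```

With the values {10, 20, 5}, min_top_after_pushes returns 5, whereas the default max-heap in the example above would report 20 at the top after the same pushes.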
1.7. Accelerating HORTON code with Cython
HORTON was designed to prioritize ease of programming over performance. This is a reasonable decision in light of the fact that the vast majority of time in a quantum chemistry code is spent in a relatively small section of code. In HORTON, we rewrite these critical portions of code in C++ and link them into Python using the Cython framework. We identify these critical portions using profiling tools.
1.7.1. Before you begin
There are several downsides to accelerating code with Cython. Please make sure they are acceptable to you before starting to code.
- Developer tools will break. PDB cannot read C++ debug symbols. cProfile will break. IDEs will not syntax highlight correctly. Valgrind will report false positive memory leaks.
- Cython is still a fairly new project. The API may not be stable. Don’t be surprised if your code breaks after a few versions.
- A steep learning curve. Unless you are already familiar with C/C++ and Python profiling tools, you may not obtain the speed up you expected. Memory allocation and management of arrays is particularly tricky.
1.7.2. Basic example
1.7.2.1. Background
We will take a simplified example from the slicing code of the matrix class. The original Cholesky decomposed 2-electron integrals had an operation to slice along 3 indices which was consuming a significant portion of the time in the code. This was implemented using the numpy.einsum method.
The original code is here
def get_slice_slow(self):
    return numpy.einsum("ack, bck-> abc", B, B_prime)
This code takes a slice of B where the indices c are kept the same and then contracts across the last index.
A quick check using the Python cProfile module (python -m cProfile -o slice.pstats get_slice_slow.py; gprof2dot -f pstats slice.pstats | dot -Tpng -o slice.png) showed that the get_slice_slow method required almost 40% of the total code runtime. Since this operation was simple to implement in C++, it was a good candidate for Cythonizing.
The C++ code to implement the same operation is below:
//get_slice_abcc.cpp
void get_slice_abcc(double* inp, double* inp2, double* out, long nbasis, long nvec){
    for (long k=0; k<nvec; k++){
        for (long a=0; a<nbasis; a++){
            for (long b=0; b<nbasis; b++){
                for (long c=0; c<nbasis; c++){
                    out[a*nbasis*nbasis + b*nbasis + c] +=
                        inp[k*nbasis*nbasis + a*nbasis + c] *
                        inp2[k*nbasis*nbasis + b*nbasis + c];
                }
            }
        }
    }
}
and the header is below:
//get_slice_abcc.h
void get_slice_abcc(double* inp, double* inp2, double* out, long nbasis, long nvec);
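Before wiring this up with Cython, the loop structure can be cross-checked against a tiny pure-Python reference. The helper below is a hypothetical illustration (not part of HORTON), using nested lists so it runs without NumPy:

```python
def slice_abcc(inp, inp2, nbasis, nvec):
    """Pure-Python reference for the C++ loops above.

    inp, inp2: nested lists indexed [k][a][c] and [k][b][c].
    Returns out with out[a][b][c] = sum_k inp[k][a][c] * inp2[k][b][c].
    """
    out = [[[0.0] * nbasis for _ in range(nbasis)] for _ in range(nbasis)]
    for k in range(nvec):
        for a in range(nbasis):
            for b in range(nbasis):
                for c in range(nbasis):
                    out[a][b][c] += inp[k][a][c] * inp2[k][b][c]
    return out
```

On a tiny 2x2 case this matches what numpy.einsum("ack, bck-> abc", B, B_prime) produces, which is a useful check when porting to C++.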
This code now needs to be interfaced with Python using Cython.
1.7.2.2. Cythonizing your code
First create a Cython .pxd header. This file provides information for Cython to link your compiled code to the cython file later.
#get_slice_abcc.pxd
cdef extern from "get_slice_abcc.h":
    void get_slice_abcc(double* B, double* B_prime, double* out, long nbasis, long nvec)
You’ll notice here that the Cython header is remarkably similar to the C++ header. There are a few keywords introduced here, the most significant being
cdef. The .pxd files are python syntax with a few other keywords and syntax for pointers. See the Cython documentation on C++ below for more details on how to Cythonize classes and more.
The .pyx file is where brunt of the work by Cython is done. It is also python syntax with a few extra keywords.
#cext.pyx
cimport numpy as np
cimport get_slice_abcc

def get_slice_fast(np.ndarray[double, ndim=3] B not None,
                   np.ndarray[double, ndim=3] B_prime not None,
                   np.ndarray[double, ndim=3] out not None,
                   long nbasis, long nvec):
    assert B.flags['C_CONTIGUOUS']
    assert B.shape[0] == nvec
    assert B.shape[1] == B.shape[2] == nbasis
    #etc...
    get_slice_abcc.get_slice_abcc(&B[0, 0, 0], &B_prime[0, 0, 0],
                                  &out[0, 0, 0], nbasis, nvec)
    return out
There are several things to note here:
- The arguments are statically typed.
- The Numpy arrays have their datatypes declared as well as the number of dimensions
- It is good practice to have safety checks because the code in .pyx files will not give clean stack traces.
- Python and Numpy use long datatypes by default.
- You can pass the address of the first element of a Numpy array to a function expecting double* as long as it is contiguous.
There are several other nuances not illustrated in this example, but they are well covered in the Cython documentation below. Users should be particularly cognizant of whether variables are Python-style (dynamic typed) or C-style (static typed). In our example above, everything is static typed as the method declaration declares everything.
1.7.3. Additional notes
The above example leaves all memory management to the Python interpreter. This is not always possible, especially when implementing iterative algorithms in C/C++ code. There is no issue when memory is allocated and deallocated dynamically in the C++ code as in the example above. However, if memory must be allocated by C++ and freed by Python, it can be much more complicated. The reverse case, memory allocated by Python and freed by C++, should be much more rare and won’t be covered here.
The most common form of memory allocated in C++ and passed back to Python for management is likely Numpy arrays. We will show a code snippet for managing this.
cdef double* data = NULL
cdef np.npy_intp dims[3]

nvec = calculate_cholesky(&data)

dims[0] = <np.npy_intp> nvec
dims[1] = <np.npy_intp> nbasis
dims[2] = <np.npy_intp> nbasis

result = numpy.PyArray_SimpleNewFromData(3, dims, np.NPY_DOUBLE, data)
The method PyArray_SimpleNewFromData creates a new Numpy array from memory which has already been allocated. The numpy data types must be specified, as well as the dimensionality. Data is simply a 1D double* array of size nvec * nbasis * nbasis.
1.7.4. Further reading
Locating a single Tcl script at run time can be a problem when distributing an executable that uses the script to create a Tk GUI, and therefore this should be as pain free as possible.
An easy way around this is to turn your Tcl file into an array that can be included from your source file. This article explores how to do this with Tcl and C.
Converting the Tcl Script
To include the Tcl script it needs to be converted into a C array. This can be done from Unix with the xxd -i command. So to convert my.tcl to my.tcl.h you would run:
$ xxd -i my.tcl my.tcl.h
If don’t have access to
xxd, you can use bin2c downloadable as an archive from here. To do as above with
bin2c:
$ tclsh bin2c.tcl my.tcl my_tcl my.tcl.h
This will create a file similar to the following:
unsigned char my_tcl[] = {
  0x70, 0x75, 0x74, 0x73, 0x20, 0x22, 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x2c,
  0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64, 0x21, 0x22, 0x0a, 0x70, 0x75, 0x74,
  0x73, 0x20, 0x22, 0x49, 0x20, 0x68, 0x6f, 0x70, 0x65, 0x20, 0x79, 0x6f,
  0x75, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x64, 0x20, 0x74, 0x68, 0x69, 0x73,
  0x20, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65, 0x20, 0x66, 0x72, 0x6f,
  0x6d, 0x3a, 0x20, 0x68, 0x74, 0x74, 0x70, 0x3a, 0x2f, 0x2f, 0x74, 0x65,
  0x63, 0x68, 0x74, 0x69, 0x6e, 0x6b, 0x65, 0x72, 0x69, 0x6e, 0x67, 0x2e,
  0x63, 0x6f, 0x6d, 0x22, 0x0a
};
unsigned int my_tcl_len = 89;
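If neither xxd nor bin2c is to hand, the same conversion is easy to script yourself. The function below is a hypothetical sketch (not from the article) that formats raw bytes the same way xxd -i does, writing into a caller-supplied buffer:

```c
#include <stdio.h>

/* Format `len` bytes as an xxd -i style C array named `name` into the
 * buffer `dst` (assumed large enough); returns the number of characters
 * written.  This mirrors the generated file shown above. */
static int bin2c(char *dst, size_t cap, const unsigned char *data,
                 unsigned int len, const char *name) {
    size_t off = 0;
    off += snprintf(dst + off, cap - off, "unsigned char %s[] = {\n", name);
    for (unsigned int i = 0; i < len; i++) {
        /* 12 bytes per line, comma-separated, like xxd -i */
        const char *sep = (i == 0) ? "  " : ((i % 12 == 0) ? ",\n  " : ", ");
        off += snprintf(dst + off, cap - off, "%s0x%02x", sep, data[i]);
    }
    off += snprintf(dst + off, cap - off, "\n};\nunsigned int %s_len = %u;\n",
                    name, len);
    return (int)off;
}
```

Feeding it the bytes of my.tcl would reproduce a header equivalent to the one above.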
In the example below you can see that my.tcl.h has been included into the function which will load the script. The array created above is then evaluated by Tcl_EvalEx() using the created array, my_tcl, and its associated length variable, my_tcl_len.
#include <stdio.h>
#include <tcl.h>

static Tcl_Interp *interp;

int Script_init() {
  // Include my.tcl which has been converted to a char array using xxd -i
  #include "my.tcl.h"

  interp = Tcl_CreateInterp();
  if (Tcl_Init(interp) == TCL_ERROR) {
    return 0;
  }

  if (Tcl_EvalEx(interp, my_tcl, my_tcl_len, 0) == TCL_ERROR) {
    fprintf(stderr, "Error in embedded my.tcl\n");
    fprintf(stderr, "%s\n", Tcl_GetStringResult(interp));
    return 0;
  }
  return 1;
}
Conclusion
This process makes it much easier to distribute an executable. Once an initial Tcl script has been loaded, you can use something like the xdgbasedir module to easily locate other scripts for the program. To automate this process, take a look at Using Dynamically Generated Header Files with CMake.
Using WMI Windows PowerShell Cmdlets to Manage the BITS Compact Server
Windows PowerShell provides a simple mechanism to connect to Windows Management Instrumentation (WMI) on a remote computer and manage the Background Intelligent Transfer Service (BITS) Compact Server. The BITS Compact Server is an optional server component that must be installed separately. For information about installing the Compact Server, see the BITS Compact Server documentation.
Connect to the BITS provider.
The Get-Credential cmdlet requests the user's credentials to connect to the remote computer and assigns the credentials to the $cred object.
The objects returned by the Get-WmiObject cmdlet are assigned to the $bcs variable. In the preceding example, the Get-WmiObject cmdlet retrieves the BITSCompactServerUrlGroup class in the root\Microsoft\BITS namespace of Server1. Static methods exposed by the BITSCompactServerUrlGroup class can be called on the $bcs object. For more information about BITS remote management, see BITS provider and BITS provider classes.
Note The grave-accent character (`) is used to indicate a line break.
Create a URL group on the server.
The "" URL prefix string is assigned to the $URLGroup variable. The $URLGroup variable is passed to the CreateUrlGroup method, which creates the URL group on Server1.
You can specify a different URL group. The URL group must conform to a valid URL prefix string. For more information about URL prefixes, see UrlPrefix Strings.
Host a file on the URL group.
The BITSCompactServerUrlGroup instance returned by the Get-WmiObject cmdlet is assigned to the $bcsObj variable. The CreateUrl method is called for the $bcsObj with the "url.txt" URL suffix, the "c:\\temp\\1.txt" source path for the file, and an empty security descriptor string as parameters. The "url.txt" suffix is added to the URL group prefix. Clients can download the file from the following address:.
Clean up the URL and the URL group.
The system.object Delete method deletes the $bcsObj object.
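The command listings were lost from this page, so here is a hedged pseudocode sketch, in PowerShell style, of the four steps as the surrounding text describes them. The cmdlet, namespace, class, and method names all come from the text above; the URL prefix value and the computer name handling are illustrative assumptions, not the original listings.

```powershell
# Step 1: connect to the BITS provider on the remote computer.
$cred = Get-Credential
$bcs = Get-WmiObject -Namespace root\Microsoft\BITS `
    -Class BITSCompactServerUrlGroup -List `
    -ComputerName Server1 -Credential $cred

# Step 2: create the URL group (the prefix below is a placeholder;
# the article's actual URL prefix string was lost).
$URLGroup = "http://Server1:80/bits/"
$bcs.CreateUrlGroup($URLGroup)

# Step 3: host a file on the URL group; clients can then download it
# from <prefix>url.txt.  The empty string is the security descriptor.
$bcsObj = Get-WmiObject -Namespace root\Microsoft\BITS `
    -Class BITSCompactServerUrlGroup `
    -ComputerName Server1 -Credential $cred
$bcsObj.CreateUrl("url.txt", "c:\temp\1.txt", "")

# Step 4: clean up the URL and the URL group.
$bcsObj.Delete()
```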
Related topics
Web Service Guy - Web service stuff (Atom feed, updated 2003-06-01T19:35:00Z). Posts by VictorL:

RDF does you good

It seems to have worked for Derek ...

A journey into OWL-S

Judging by the semantic blogs and lists, OWL-S, a vocabulary for describing web services in RDF, seems to be the hot schema of the moment. So, with a view to producing some OWL-S documents for the SchemaWeb web services, I have downloaded the example files and have started reading up the specs starting with Semantic Markup for Web Services.

Firstly congrats to the team for producing high quality documentation. These specs are concise and very readable.

The story I've gleaned so far is that in OWL-S a 'service' consists of three things.
1. A profile that describes what the service does. A high level description that provides a publication and discovery framework for clients and service providers. This is the UDDI bit.
2. A process model that describes how the service works in terms of IOPEs (input, output, preconditions and effects). This is an abstract description of a service.

Some first impressions and comments.
OWL-S combines the functionality of UDDI and WSDL in one model. +1.
OWL-S allows for reverse directories where clients do the advertising and service providers do the discovering; this is not possible with UDDI.

Looking for .Net RDF?

Want to play at Semantic Web?

The .Net RDF parser that drives SchemaWeb is now available under a Creative Commons Attribution-ShareAlike license.

VicSoft.Rdf Parser binaries, source and documentation are at: ...

This lightweight smush and query software component was developed on Windows. However it should run with Mono for those who work on the sunny side although this is un-tested.

... - Update

Thanks to all for the feedback and support received in the last week since the launch of SchemaWeb.

New features since launch include 'Schema of the Week' starting off with the mighty RSS 1.0. Also SchemaWeb is now hosting Dr Ont's Semantic Spout, a blog which will carry SchemaWeb news and RDF matters in general.

... - SchemaWeb Launched

VicSoft (makers of Buddy Browser FOAFware) announce the launch of SchemaWeb, an on-line directory of RDF schemas expressed using the RDFS, OWL and DAML+OIL schema vocabularies.

SchemaWeb is at: ...

Your feedback is welcome.

How do you report a bug to Microsoft?

For the first time since 1997 and the first Microsoft XML toolkit / parser, I have had to hack into XML files using regular expressions (arrrrrg!) in order to get a valid XML document to load into Microsoft XML tools.

An example of this type of XML is:

<?xml version="1.0"?>
<!DOCTYPE root [
<!ENTITY ns "">
]>
<root xmlns="&ns;">
<foo>Some foo</foo>
<bar>Some bar</bar>
</root>

The author of this type of XML is using a general entity to represent the default namespace uri. Both XmlDocument and XmlValidatingReader barf on this file with an error of:

System.ArgumentException: Prefixes beginning with "xml" (regardless of whether the characters are uppercase, lowercase, or some combination thereof) are reserved for use by XML.

Which brings me to my point and request. How do I report this bug to the MS XML team? I have looked on Technet and MSDN but cannot find a way to do this.

RDF all the way on XML.com

Wow, two very hot articles on XML.com.

Mark's piece on Atom and RDF (sorry - we failed the audition) and Kendall's on OWL.

So Atom is XML with a maintained, normative XSLT port to RDF. Better than nothing I suppose and most Atom providers will transform on the server and provide both XML and RDF feeds.

Mark, perceptive chap that he is, drills down to the two reasons why RDF won't be on Fame Academy next week. Tool support and RDF / XML syntax.

... Latest News

Sam Ruby has a moan about RDF / XML. Some people just don't understand.

Also Edd Dumbill has a little gripe with RDFDrive, the only compliant RDF parser available for the .Net platform so I suppose beggars can't be choosers.

... RdfReader to the latest Working Draft spec and then going to use it as the basis for a proper parser. You know, the one with that nice graph.GetStatements(subject, predicate, object) interface like RAP. Is there a better query interface? Please please let me know before I start!

One new feature of the Working Draft that I won't be tackling in a hurry is rdf:... triples that this is supposed to produce? And while we are talking syntax, why oh why is RDF / XML syntax so complicated? Any reasons or excuses for this would be appreciated because quite frankly, I just don't understand.

So hang on there all .Net RDF heads (all 2 of you), I will be releasing VicSoft.Rdf.RdfParser + source code in September.

RDF is the new sex

Has XML lost its buzz for you now that it's established and mainstream? If you are techie minded and like to bend you brain working with cutting edge technologies, then try the FOAF and RDF clubs on Ecademy. RDF is a bit like ..er.. XML and may be the framework for the next big thing, the all knowing and all seeing Semantic Web.

Dear Doctor DotNet

Dear Doctor DotNet, I have a problem that you might be able to help with. I am currently extending an ASP.Net application and have noticed that on many pages some idiot has commented out both Option Explicit On and Option Strict On. I just can't bring myself to uncomment these however much I try. Is this plain lethargy or a (professional) attitude problem or just the dread of the two days work that it will probably take to fix the inevitable bugs and sort out the bloody mess. What should I do? And why did MS bring these heinous incitements to bad programming from VB6 into VB.Net? I prefer to use the beautiful C# any day.

Buddy Browser 1.1 now available

Buddy Browser has been upgraded and version 1.1 is now available.

For those not already in FOAF space, look at the Buddy Browser help for a simple 3 step guide.

... rocks!

Just been testing a few RSS feeds with this new Funkidator thingary.

Apparently my client's feed is funky. Thank goodness for that. Whereas RSS guru Sam Ruby's feed is not funky.

It's official - we are not stupid!

I found a link to Dr Mark's excellent slide show on RDF and the Semantic Web on Tim Bray's blog. Besides being a good overview of things semantic, it objectively looks at the weaknesses of RDF and why take up is slow.

The major problems identified are:
The complexity of the current XML serialisation of RDF triples.
The performance overhead of RDF databases and triple stores.
Semantic linking.

Those of you who have 'banged your head' against the RDF wall might relate to the following quote from the slides. You see, we weren't stupid after all.

The following was heard at a W3C/WAP Forum Workshop:

I thought this was a pretty brave thing to say, since nobody else in the room had dared to say.

Reading through some other personal anecdotes, it appears that the W3C is the main stumbling block to a change in RDF XML syntax. Until they put their hands up and say 'Yes, this a dog, the Semantic Web will not happen until it is changed', RDF applications are a no no. Alternative syntax specifications will find it hard to gain critical mass unless they get W3C support.

Web Service Definition Debate

Over in my other blogspace at Ecademy, there is an ongoing debate in the Web Services Club on that perennial brain teaser, how exactly do you define a web service?

Any input from .Net bloggers with a short snappy definition (marketing department friendly of course) would be appreciated.

Linking Resources?

I have several components that run with external linked files such as XML Schema and XSLT stylesheets. I like to link rather than embed so I can extend and enhance without re-compile.

The only way I have found to link these files at compile is using the command line compiler and the /linkresource: switch. Does anyone know a way of linking resources at compile using the VS Net IDE? I can only find 'Embedded Resource' in the Build Action property dropdown.
from thinkbayes2 import Pmf, Suite
import thinkbayes2
import thinkplot
Suppose there are 10 people in my Dungeons and Dragons club; on any game day, each of them has a 70% chance of showing up.
Each player has one character and each character has 6 attributes, each of which is generated by rolling and adding up 3 6-sided dice.
At the beginning of the game, I ask whose character has the lowest attribute. The wizard says, "My constitution is 5; does anyone have a lower attribute?", and no one does.
The warrior says "My strength is 16; does anyone have a higher attribute?", and no one does.
How many characters are in the party?
from random import random

def flip(p):
    return random() < p
We can use it to flip a coin for each member of the club.
flips = [flip(0.7) for i in range(10)]
[False, True, False, True, False, False, True, False, False, True]
And count the number that show up on game day.
sum(flips)
4
Let's encapsulate that in a function that simulates a game day.
def game_day(n, p):
    flips = [flip(p) for i in range(n)]
    return sum(flips)
game_day(10, 0.7)
8
If we run that function many times, we get a sample from the distribution of the number of players.
sample = [game_day(10, 0.7) for i in range(1000)]
pmf_sample = Pmf(sample)
thinkplot.Hist(pmf_sample)
The second method is convolution. Instead of flipping a coin, we can create a
Pmf object that represents the distribution of outcomes from a single flip.
def coin(p):
    return Pmf({1:p, 0:1-p})
Here's what it looks like.
player = coin(0.7)
player.Print()

0 0.30000000000000004
1 0.7
If we have two players, there are three possible outcomes:
(player + player).Print()
0 0.09000000000000002
1 0.42000000000000004
2 0.48999999999999994
If we have 10 players, we can get the prior distribution like this:
prior = sum([player]*10)
prior.Print()

0 5.9049000000000085e-06
1 0.00013778100000000018
2 0.0014467005000000017
3 0.009001692000000009
4 0.036756909000000025
5 0.10291934520000004
6 0.20012094900000005
7 0.26682793200000005
8 0.2334744405
9 0.12106082099999994
10 0.028247524899999984
Now we can compare the results of simulation and convolution:
thinkplot.Hist(pmf_sample, color='C0')
thinkplot.Pmf(prior, color='C1')
thinkplot.decorate(xlabel='Number of players', ylabel='PMF')
Finally, we can use an analytic distribution. The distribution we just computed is the binomial distribution, which has the following PMF:
$ PMF(k; n, p) = P(k ~|~ n, p) = {n \choose k}\,p^{k}(1-p)^{n-k}$
We could evaluate the right hand side in Python, or use MakeBinomialPmf:
help(thinkbayes2.MakeBinomialPmf)

Help on function MakeBinomialPmf in module thinkbayes2.thinkbayes2:

MakeBinomialPmf(n, p)
    Evaluates the binomial PMF.

    n: number of trials
    p: probability of success on each trial

    Returns: Pmf of number of successes
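As a quick cross-check, the right-hand side of the formula can also be evaluated directly with math.comb from the standard library (Python 3.8+). This sketch is not part of thinkbayes2:

```python
from math import comb

def binom_pmf(k, n, p):
    """Direct evaluation of the binomial PMF formula above."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The same numbers MakeBinomialPmf produces for n=10, p=0.7.
for k in range(11):
    print(k, binom_pmf(k, 10, 0.7))
```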
And we can confirm that the analytic result matches what we computed by convolution.
binomial = thinkbayes2.MakeBinomialPmf(10, 0.7)
thinkplot.Pmf(prior, color='C1')
thinkplot.Pmf(binomial, color='C2', linestyle='dotted')
thinkplot.decorate(xlabel='Number of players', ylabel='PMF')
Since two players spoke, we can eliminate the possibility of 0 or 1 players:
thinkplot.Pmf(prior, color='gray')
del prior[0]
del prior[1]
prior.Normalize()
thinkplot.Pmf(prior, color='C1')
thinkplot.decorate(xlabel='Number of players', ylabel='PMF')
There are three components of the likelihood function:
The probability that the highest attribute is 16.
The probability that the lowest attribute is 5.
The probability that the lowest and highest attributes are held by different players.
To compute the first component, we have to compute the distribution of the maximum of $6n$ attributes, where $n$ is the number of players.
Here is the distribution for a single die.
d6 = Pmf([1,2,3,4,5,6])
d6.Print()

1 0.16666666666666666
2 0.16666666666666666
3 0.16666666666666666
4 0.16666666666666666
5 0.16666666666666666
6 0.16666666666666666
And here's the distribution for the sum of three dice.
thrice = sum([d6] * 3)
thinkplot.Pdf(thrice)
thinkplot.decorate(xlabel='Attribute', ylabel='PMF')
Here's the CDF for the sum of three dice.
cdf_thrice = thrice.MakeCdf()
thinkplot.Cdf(cdf_thrice)
thinkplot.decorate(xlabel='Attribute', ylabel='CDF')
The Max method raises the CDF to a power. So here's the CDF for the maximum of six attributes.
cdf_max_6 = cdf_thrice.Max(6)
thinkplot.Cdf(cdf_max_6)
thinkplot.decorate(xlabel='Attribute', ylabel='CDF',
                   title='Maximum of 6 attributes')
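The identity behind Max, namely P(max of k samples <= x) = F(x)**k for independent samples, can be checked by brute force on a toy case (a sketch, not part of thinkbayes2):

```python
from fractions import Fraction

# One fair die: F(3) = P(die <= 3) = 1/2.  For the max of k = 2
# independent dice, the CDF at 3 should be F(3)**2 = 1/4.
F3 = Fraction(3, 6)
analytic = F3**2

# Brute-force enumeration of all 36 ordered pairs of dice.
count = sum(1 for a in range(1, 7) for b in range(1, 7) if max(a, b) <= 3)
assert Fraction(count, 36) == analytic
print(analytic)  # 1/4
```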
If there are n players, there are 6*n attributes. Here are the distributions for the maximum attribute of n players, for a few values of n.
for n in range(2, 10, 2):
    cdf_max = cdf_thrice.Max(n*6)
    thinkplot.Cdf(cdf_max, label='n=%s'%n)

thinkplot.decorate(xlabel='Attribute', ylabel='CDF',
                   title='Maximum of 6*n attributes')
To check that, I'll compute the CDF for 7 players, and estimate it by simulation.
n = 7
cdf = cdf_thrice.Max(n*6)
thinkplot.Cdf(cdf, label='n=%s'%n)

sample_max = [max(cdf_thrice.Sample(42)) for i in range(1000)]
thinkplot.Cdf(thinkbayes2.Cdf(sample_max), label='sample')
thinkplot.decorate(xlabel='Attribute', ylabel='CDF',
                   title='Maximum of 6*n attributes')
Looks good.
Now, to compute the minimum, I have to write my own function, because Cdf doesn't provide a Min function.
def compute_cdf_min(cdf, k):
    """CDF of the min of k samples from cdf.

    cdf: Cdf object
    k: number of samples

    returns: new Cdf object
    """
    cdf_min = cdf.Copy()
    cdf_min.ps = 1 - (1 - cdf_min.ps)**k
    return cdf_min
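The complementary identity used here, P(min of k samples <= x) = 1 - (1 - F(x))**k, checks out the same way (again a toy sketch, not part of thinkbayes2):

```python
from fractions import Fraction

# One fair die: F(3) = 1/2.  For the min of k = 2 independent dice,
# the CDF at 3 should be 1 - (1 - F(3))**2 = 3/4.
F3 = Fraction(3, 6)
analytic = 1 - (1 - F3)**2

count = sum(1 for a in range(1, 7) for b in range(1, 7) if min(a, b) <= 3)
assert Fraction(count, 36) == analytic
print(analytic)  # 3/4
```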
Now we can compute the CDF of the minimum attribute for n players, for several values of n.
for n in range(2, 10, 2):
    cdf_min = compute_cdf_min(cdf_thrice, n*6)
    thinkplot.Cdf(cdf_min, label='n=%s'%n)

thinkplot.decorate(xlabel='Attribute', ylabel='CDF',
                   title='Minimum of 6*n attributes')
And again we can check it by comparing to simulation results.
n = 7
cdf = compute_cdf_min(cdf_thrice, n*6)
thinkplot.Cdf(cdf, label='n=%s'%n)

sample_min = [min(cdf_thrice.Sample(42)) for i in range(1000)]
thinkplot.Cdf(thinkbayes2.Cdf(sample_min), label='sample')
thinkplot.decorate(xlabel='Attribute', ylabel='CDF',
                   title='Minimum of 6*n attributes')
For efficiency and conciseness, it is helpful to precompute the distributions for the relevant values of n, and store them in dictionaries.
like_min = {}
like_max = {}

for n in range(2, 11):
    cdf_min = compute_cdf_min(cdf_thrice, n*6)
    like_min[n] = cdf_min.MakePmf()

    cdf_max = cdf_thrice.Max(n*6)
    like_max[n] = cdf_max.MakePmf()

    print(like_min[n][5], like_max[n][16])

0.23288163935889017 0.23288163935889017
0.28826338107405935 0.2882633810740594
0.31794402625472684 0.3179440262547268
0.32955796250238156 0.32955796250238156
0.32871475364520075 0.3287147536452008
0.3195146933518256 0.3195146933518255
0.3049352170780888 0.30493521707808885
0.2871203018328896 0.28712030183288956
0.2675970126720095 0.2675970126720096
The output shows that the particular data we saw is symmetric: the chance that 16 is the maximum is the same as the chance that 5 is the minimum.
Finally, we need the probability that the minimum and maximum are held by the same person. If there are n players, there are 6*n attributes.
Let's call the player with the highest attribute Max. What is the chance that Max also has the lowest attribute? Well Max has 5 more attributes, out of a total of 6*n-1 remaining attributes.
So here's prob_same as a function of n.
def prob_same(n):
    return 5 / (6*n-1)

for n in range(2, 11):
    print(n, prob_same(n))

2 0.45454545454545453
3 0.29411764705882354
4 0.21739130434782608
5 0.1724137931034483
6 0.14285714285714285
7 0.12195121951219512
8 0.10638297872340426
9 0.09433962264150944
10 0.0847457627118644
Here's a class that implements this likelihood function.
class Dungeons(Suite):

    def Likelihood(self, data, hypo):
        """Probability of the data given the hypothesis.

        data: lowest attribute, highest attribute,
              boolean (whether the same person has both)
        hypo: number of players

        returns: probability
        """
        lowest, highest, same = data
        n = hypo
        p = prob_same(n)
        like = p if same else 1-p
        like *= like_min[n][lowest]
        like *= like_max[n][highest]
        return like
Here's the prior we computed above.
suite = Dungeons(prior)
thinkplot.Hist(suite)
thinkplot.decorate(xlabel='Number of players', ylabel='PMF')
suite.Mean()
7.000868145040201
And here's the update based on the data in the problem statement.
suite.Update((5, 16, False))
0.08548474490284354
Here's the posterior.
thinkplot.Hist(suite)
thinkplot.decorate(xlabel='Number of players', ylabel='PMF')
suite.Mean()
6.940862784521086
suite.Print()
2 0.0005007044860801902
3 0.006177449626400222
4 0.03402191676923563
5 0.10823000113979393
6 0.21684925990462955
7 0.279837163496623
8 0.22697589141459018
9 0.10574761295351938
10 0.02166000020912791
Based on the data, I am 94% sure there are between 5 and 9 players.
suite.CredibleInterval()
(5, 9)
sum(suite[n] for n in [5,6,7,8,9])
0.9376399289091562
csTextureTrans Class ReferenceThis is a static class which encapsulates a few functions that can transform texture information into a texture matrix/vector.
More...
[Geometry utilities]
#include <csgeom/textrans.h>
Detailed DescriptionThis is a static class which encapsulates a few functions that can transform texture information into a texture matrix/vector.
This class makes it easiers to define textures for polygons given various things.
Definition at line 40 of file textrans.h.
Member Function Documentation
The most general function.
With these you provide the matrix directly.
Similar to the previous function but treat as if the lengths are set to 1.
Use 'v1' and 'len1' for the u-axis and 'v2' and 'len2' for the v-axis.
Otherwise this function is the same as the previous one.
Calculate the matrix using two vertices (which are preferably on the plane of the polygon and are possibly (but not necessarily) two vertices of the polygon).
The first vertex is seen as the origin and the second as the u-axis of the texture space coordinate system. The v-axis is calculated on the plane of the polygon and orthogonal to the given u-axis. The length of the u-axis and the v-axis is given as the 'len1' parameter.
For example, if 'len1' is equal to 2 this means that texture will be tiled exactly two times between vertex 'v_orig' and 'v1'. I hope this explanation is clear since I can't seem to make it any clearer :-)
The documentation for this class was generated from the following file:
- csgeom/textrans.h
Generated for Crystal Space 1.0.2 by doxygen 1.4.7 | http://www.crystalspace3d.org/docs/online/api-1.0/classcsTextureTrans.html | CC-MAIN-2016-18 | refinedweb | 272 | 57.16 |
Generally, for Java-based microservices, we strongly recommend running them in the Open Liberty container, atop Kubernetes. Open Liberty is a modern, cloud-ready, open source Java application server that supports the latest Java Enterprise Edition (EE 7 and 8) and MicroProfile (MP 1, 2, and 3) standards, and is a great fit for creating new cloud-native microservices when your developers have deep Java programming skills. It is also a good target for modernization scenarios, as you migrate your traditional on-premises applications to a Docker/Kubernetes environment (whether that Kube cluster itself runs on-premises, or in the public cloud). That being said, there are a small percentage of applications that are too difficult or time-consuming to migrate straight to Liberty, and for those, there is now an alternative.
IBM recently delivered a containerized version of its traditional WebSphere Application Server, which we often lovingly refer to as tWAS. This is a stand-alone profile of the app server you've known and loved for decades, not the full Network Deployment (ND) version. Kubernetes itself serves many of the roles that ND used to do for us, like clustering, configuring, high availability, scaling, and more. You simply define a Deployment that refers to the tWAS container hosting your application, and then let Kube determine things like how many pods to start, and how to route work to it (from within the cluster and from outside).
Such tWAS-based pods take a bit longer to start and require a bit more memory and CPU than Liberty-based pods. But they offer the full programming model that people have coded to for decades, making it easier to get your complex, legacy applications into an orchestrated containerized environment without code changes. For example, if the Transformation Advisor tool (or the version of it now built in to the tWAS 9.0.5 admin console, that we'll see here later) reports a large number of Severe errors and a large estimate for time to address them (such as if your legacy app was using outdated technologies like JAX-RPC), then the tWAS container may be a good choice.
To get some experience with deploying a microservice to the tWAS container, I decided to back-port my notification-twitter microservice from Liberty to tWAS. I picked this one because it didn't happen to be using any MicroProfile technologies (for example, it uses Twitter4J to send tweets, rather than an mpRestClient). Also, it is an optional microservice that many people don't bother setting up when deploying my IBM Stock Trader sample (many choose the notification-slack version that posts to a Slack channel instead, or don't bother configuring MQ messaging at all).
As a reminder, let's review the Stock Trader architectural diagram, that we just saw in my recent blog entry on using an umbrella helm chart to deploy all of Stock Trader (that helm chart still works with this tWAS-based image, btw - just enter "twas" as the Tag, rather than "latest", for it to grab that flavor of the image off of DockerHub). Usually, this diagram shows all of the Java-based microservices in a light blue color, and those all run on Open Liberty. But now we have an excuse to use WebSphere's traditional dark purple color for this particular microservice.
We should point out that the caller of this microservice (a Liberty-based microservice calling it via an mpRestClient), and the thing it calls (Twitter), are completely unaware that we moved it to tWAS. It still responds to the same REST API call, expecting to be passed the same data and return the same data it always had. Said another way, the OpenAPI for this microservice is completely unchanged due to this alternate choice in app server (although, sadly, tWAS has no mpOpenAPI service, so you can't hit the pod's /openapi/ui endpoint to see its OpenAPI). And it still contains the same war file - mostly (we'll discuss the few minor changes I had to make shortly).
First of all, let's look at the Dockerfile used to construct this image. Just like when working with Liberty, we start from the Universal Base Image (UBI) flavor of tWAS (previously, it was based on Ubuntu, but the UBI, from Red Hat, is a more strategic, and lighter-weight, flavor of Linux, based on a heavily pared-down version of RHEL). Then we copy in our app, and Jython scripts to install it and to load the SSL cert for Twitter into the trust store. See for the full Dockerfile, with comments, etc.
The pattern here is to put your application(s) in the image's /work/app directory, and any configuration scripts that it needs to /work/config. Then when you run configure.sh, it will run those scripts and install and configure your application(s). Note I just copied the installation Jython script from the Hello World sample for tWAS, and changed the name of the .war file to match my app's name. The other script, to register the SSL certificate for Twitter into the app server's trust store, we'll discuss below.
In the spirit of full disclosure, let's discuss a few issues I hit when initially doing a docker build with this. First off, I immediately realized that the tWAS image is so big (about 1.8 GB, compared to Liberty being about 0.2 GB), that I didn't have enough of my disk allocated to the Docker engine on my Mac. I had to click on the Docker whale icon in my upper-right "tray" of system icons and go to the Disk tab of the resulting dialog and move the slider to the right.
With more disk space available to Docker, I got past the FROM statement, but hit problems when the configure.sh script ran. I hit a DeploymentDescriptorLoadException, which pointed to my web.xml and said it was invalid. I looked, and realized it didn't like the (MicroProfile-related) MP-JWT in my login-config stanza. So I had to delete this part from my web.xml:
<login-config>
<auth-method>MP-JWT</auth-method>
<realm-name>MP-JWT</realm-name>
</login-config>
<login-config>
<auth-method>MP-JWT</auth-method>
<realm-name>MP-JWT</realm-name>
</login-config>
Of course, if you were coming from a legacy app that had been running on tWAS for years, you wouldn't have had that in your web.xml, since it wasn't an option there.
The second issue I hit was that I had to add the IBM proprietary deployment descriptor binding and extension files. I don't need those in Open Liberty, but they were pretty much mandatory back in tWAS. Those too, you would already have if working with an app that has run on tWAS for years, so this is mostly an issue with me doing the unusual thing of back-porting from Open Liberty to tWAS. After some quick searching on Stack Overflow, I figured out I needed this in a new ibm-web-bnd.xml in my war's WEB-INF directory:
<?xml version="1.0" encoding="UTF-8"?>
<web-bnd
xmlns=""
xmlns:xsi=""
xsi:
<virtual-host
</web-bnd>
<?xml version="1.0" encoding="UTF-8"?>
<web-bnd
xmlns=""
xmlns:xsi=""
xsi:
<virtual-host
</web-bnd>
And I needed this in my ibm-web-ext.xml:
<web-ext>
<context-root
</web-ext>
<web-ext>
<context-root
</web-ext>
With those added, the Docker image built cleanly, and I was able to run it. Initially I just ran the Docker image directly on my Mac, via docker run -p 9080:9080 -p 9043:9043 notification-twittter:twas, and I was able to see things start cleanly. However, I remembered that my microservice needs four Twitter-specific environment variables, so I added those via -e params, to specify O-Auth things Twitter requires like the consumer key and its secret, and the access token and its secret, for the @IBMStockTrader account I created on Twitter for this sample.
Note the port numbers I specified above (and which will need to appear in the Kube yaml that we'll see soon). I exposed port 9080 (for the default_host virtual host), so I can call my JAX-RS service. And I exposed port 9043 (for the admin_host virtual host), since that's where the admin console lives. As a blast from the past, let's hit that admin console URL () and see tWAS in its full glory, like many of us remember from our "past lives".
The built-in ID (if you haven't run Jython scripts to wire it up to an LDAP or whatever) is wsadmin, and the password - well, it's in a file named /tmp/PASSWORD in the docker image. If you do a docker ps to see what images you have running, you'll see the identifier for your image. Copy that to the clipboard, and then do a docker exec -it {image-id} bash to get into the image, then do a cat /tmp/PASSWORD to see the value (or you could do a docker cp to copy it off the image to your local disk). Enter those on the login page, and you'll see the old-school administrative console.
I personally spent a lot of time here back in the day, working on products that "stacked" on top of tWAS, like IBM BPM and IBM Business Monitor. Expand the Applications section on the left and we'll see our app that our Jython script installed.
Notice the Liberty Advisor in the console. This is a built in version of Transformation Advisor. This actually will analyze your applications and tell you how hard it would be to migrate each from tWAS to Liberty. Interestingly, if you view the report it generated for my app, it actually reports two severe issues, but they are just complaints about stuff used in the Twitter4J jar file in my war, and in this case aren't actually an issue to be concerned about (false positives) - obviously, since I usually run this war file on Liberty!
Let's look at one last thing in the console - how to get an SSL certificate into the trust store. In the past, I've always done this via the console. However, doing that via the console only makes the change in that running copy of the Docker image. If I stop the container and do another docker run of it (or in Kubernetes, kill the pod and let it start a fresh one), everything will be back to the state of just what the Dockerfile had done, with none of the remaining configuration that had been done on that earlier running instance. Therefore, the proper answer here is to write a Jython script that configure.sh will run during the Docker build, to do the import of Twitter's SSL cert into the trust store.
The good news is, the tWAS admin console has the option to turn on recording of a script of what all you do in it. Go to System Administration->Console Preferences and check the "Log command assistance commands" checkbox and hit Apply.
Then just use the console like usual to import the SSL certificate for the api.twitter.com site on port 443.
Now, if you docker exec into your running container, like discussed earlier, you'll find the log of all of the wsadmin commands that actually got run as you clicked on buttons in the console UI. It is under /logs/server1/commandAssistanceJythonCommands_wsadmin.log, which will contain statements like the following, that you can paste into a Jython script that you can have configure.sh run for you when you build your image:
# ()
Now that the server will trust the Twitter SSL certificate, let's actually try out our microservice. Just like when it runs on Liberty, we can test it via curl, passing in the 3-field JSON structure it expects, with values for the owner, old, and new levels (and having to escape each quote with a backslash). Note that I haven't turned on security in the server, so I'm not having to pass any credentials here; if we were really deploying this into production, I'd need to figure out how to get the JWT support enabled in tWAS and would re-enable the Role-Based Access Control (RBAC) in my web app to only allow in properly authenticated users (perhaps as described here:).$
As you can see, we got back a successful result, seeing Scotty reach Dilithium level ("sir, the engines can't take the strain!" - lol). And sure enough, if I log into Twitter, I see the tweet, since I follow the @IBMStockTrader account.
The last thing I'll point out is that the deployment yaml I usually use to deploy this microservice to a Kubernetes environment, such as the OpenShift Container Platform, needed to be updated a bit, since tWAS has higher memory and CPU requirements than Liberty. I also had to remove the stanzas for the readinessProbe and livenessProbe, since I was getting those for free by enabling the mpHealth (MicroProfile Health) feature in Liberty, but there's no equivalent in tWAS (I would have to manually implement the /health endpoint myself). I also don't have the /metrics endpoint available in tWAS, like I do via having the mpMetrics feature enabled in Liberty, so I had to remove the stanza that tells Prometheus to scrape that endpoint. See for the full yaml.
Once we run that deployment yaml (which, to be clear, pulls the notification-twitter:twas image from DockerHub, not from the environment's local Docker image registry), our tWAS-based container will be live and running in our Kubernetes environment. For example, I ran it on our Tribbles2 environment, which is an OpenShift 4.1 atop AWS, and now we can see the results.
As you can see, it is there in our stock-trader namespace, alongside all of the other microservices that comprise this application. And other than the Jenkins pod, it is clearly the largest microservice in terms of memory usage, due to dragging along all of tWAS, whereas the others are all based on Liberty (which only enables features as needed by the apps installed to it). Let's take a closer look at one of the notification-twitter pods, to better understand its resource utilization.
As you can see in the time-based graphs, the pod initially used up a LOT of memory and especially CPU. This is due to the way the tWAS server starts - it briefly consumes essentially all of the CPU that it can get, but then it settles down, once the server is fully up and garbage collection has run. Also, it starts faster if you give it more memory and CPU (if run at the lower settings I usually use for Liberty, it takes like 15 minutes to start, but with these settings, it will start in 2-3 minutes - still not the sub-minute start time of Liberty, but not too bad):
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 250m
memory: 256Mi
To summarize - I'd always recommend Liberty over tWAS if possible. But if you need to start a modernization journey, and need to see some results quickly, with applications that heavily use legacy features of tWAS that aren't in Liberty, then the tWAS container should be a consideration. Just remember to factor into your plans a second stage, where you complete your modernization journey and make it to Liberty (which often means re-coding old stuff like JAX-RPC to JAX-WS, for example, or uplifting old Java EE 6 or earlier to at least Java EE 7). But at least you'll have seen some results of your legacy app in a Kubernetes-orchestrated containerized environment - whether that is in your own private cloud, or hosted in a public cloud.
Thanks again for following our blog, and feel free to leave us some feedback, ask questions, or suggest future blog entries.
John Alcorn (jalcorn@us.ibm.com)
IBM Cloud Engagement Hub
Owner of the IBM Stock Trader sample | https://www.ibm.com/developerworks/community/blogs/5092bd93-e659-4f89-8de2-a7ac980487f0/entry/Experiences_using_the_tWAS_Dockker_container?lang=en | CC-MAIN-2019-39 | refinedweb | 2,712 | 55.68 |
Our universe exhibits many examples of entities that can change form: A butterfly morphs from larva to pupa to imago, its adult form. On Earth, the normal state of water is liquid, but water changes to a solid when frozen, and to a gas when heated to its boiling point. This ability to change form is known as polymorphism. Modeling polymorphism in a programming language lets you create a uniform interface to different kinds of operands, arguments, and objects. The result is code that is more concise and easier to maintain.
Java supports four kinds of polymorphism:
- Coercion is an operation that serves multiple types through implicit-type conversion. For example, you divide an integer by another integer or a floating-point value by another floating-point value. If one operand is an integer and the other operand is a floating-point value, the compiler coerces (implicitly converts) the integer to a floating-point value to prevent a type error. (There is no division operation that supports an integer operand and a floating-point operand.) Another example is passing a subclass object reference to a method's superclass parameter. The compiler coerces the subclass type to the superclass type to restrict operations to those of the superclass.
- Overloading refers to using the same operator symbol or method name in different contexts. For example, you might use
+to perform integer addition, floating-point addition, or string concatenation, depending on the types of its operands. Also, multiple methods having the same name can appear in a class (through declaration and/or inheritance).
- Parametric polymorphism stipulates that within a class declaration, a field name can associate with different types and a method name can associate with different parameter and return types. The field and method can then take on different types in each class instance (object). For example, a field might be of type
Double(a member of Java's standard class library that wraps a
doublevalue) and a method might return a
Doublein one object, and the same field might be of type
Stringand the same method might return a
Stringin another object. Java supports parametric polymorphism via generics, which I'll discuss in a future article.
- Subtype means that a type can serve as another type's subtype. When a subtype instance appears in a supertype context, executing a supertype operation on the subtype instance results in the subtype's version of that operation executing. For example, consider a fragment of code that draws arbitrary shapes. You can express this drawing code more concisely by introducing a
Shapeclass with a
draw()method; by introducing
Circle,
Rectangle, and other subclasses that override
draw(); by introducing an array of type
Shapewhose elements store references to
Shapesubclass instances; and by calling
Shape's
draw()method on each instance. When you call
draw(), it's the
Circle's,
Rectangle's or other
Shapeinstance's
draw()method that gets called. We say that there are many forms of
Shape's
draw()method.
Like many developers, I classify coercion and overloading as ad-hoc polymorphism, and parametric and subtype as universal polymorphism. While valuable techniques, I don't believe coercion and overloading are true polymorphism; they're more like type conversions and syntactic sugar.
In this article I'll focus on subtype polymorphism. You'll learn about upcasting and late binding, abstract classes (which cannot be instantiated), and abstract methods (which cannot be called). You'll also learn how to do downcasting and runtime-type identification in your Java programs, and you'll get a first look at covariant return types. I'll introduce parametric polymorphism in a future tutorial.
Upcasting and late binding
Subtype polymorphism relies on upcasting and late binding. Upcasting is a form of casting where you cast up the inheritance hierarchy from a subtype to a supertype. No cast operator is involved because the subtype is a specialization of the supertype. For example,
Shape s = new Circle(); upcasts from
Circle to
Shape. This makes sense because a circle is a kind of shape.
After upcasting
Circle to
Shape, you cannot call
Circle-specific methods, such as a
getRadius() method that returns the circle's radius, because
Circle-specific methods are not part of
Shape's interface. Losing access to subtype features after narrowing a subclass to its superclass seems pointless, but is necessary for achieving subtype polymorphism.
Suppose that
Shape declares a
draw() method, its
Circle subclass overrides this method,
Shape s = new Circle(); has just executed, and the next line specifies
s.draw();. Which
draw() method is called:
Shape's
draw() method or
Circle's
draw() method? The compiler doesn't know which
draw() method to call. All it can do is verify that a method exists in the superclass, and verify that the method call's arguments list and return type match the superclass's method declaration. However, the compiler also inserts an instruction into the compiled code that, at runtime, fetches and uses whatever reference is in
s to call the correct
draw() method. This task is known as late binding.
Upcasting and late binding
I've created an application that demonstrates subtype polymorphism in terms of upcasting and late binding. This application consists of
Shape,
Circle,
Rectangle, and
Shapes classes, where each class is stored in its own source file. Listing 1 presents the first three classes.
Listing 1. Declaring a hierarchy of shapes
class Shape { void draw() { } } class Circle extends Shape { private int x, y, r; Circle(int x, int y, int r) { this.x = x; this.y = y; this.r = r; } // For brevity, I've omitted getX(), getY(), and getRadius() methods. @Override void draw() { System.out.println("Drawing circle (" + x + ", "+ y + ", " + r + ")"); } } class Rectangle extends Shape { private int x, y, w, h; Rectangle(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; } // For brevity, I've omitted getX(), getY(), getWidth(), and getHeight() // methods. @Override void draw() { System.out.println("Drawing rectangle (" + x + ", "+ y + ", " + w + "," + h + ")"); } }
Listing 2 presents the
Shapes application class whose
main() method drives the application.
Listing 2. Upcasting and late binding in subtype polymorphism
class Shapes { public static void main(String[] args) { Shape[] shapes = { new Circle(10, 20, 30), new Rectangle(20, 30, 40, 50) }; for (int i = 0; i < shapes.length; i++) shapes[i].draw(); } }
The declaration of the
shapes array demonstrates upcasting. The
Circle and
Rectangle references are stored in
shapes[0] and
shapes[1] and are upcast to type
Shape. Each of
shapes[0] and
shapes[1] is regarded as a
Shape instance:
shapes[0] isn't regarded as a
Circle;
shapes[1] isn't regarded as a
Rectangle.
Late binding is demonstrated by the
shapes[i].draw(); expression. When
i equals
0, the compiler-generated instruction causes
Circle's
draw() method to be called. When
i equals
1, however, this instruction causes
Rectangle's
draw() method to be called. This is the essence of subtype polymorphism.
Assuming that all four source files (
Shapes.java,
Shape.java,
Rectangle.java, and
Circle.java) are located in the current directory, compile them via either of the following command lines:
javac *.java javac Shapes.java
Run the resulting application:
java Shapes
You should observe the following output:
Drawing circle (10, 20, 30) Drawing rectangle (20, 30, 40, 50)
Abstract classes and methods
When designing class hierarchies, you'll find that classes nearer the top of these hierarchies are more generic than classes that are lower down. For example, a
Vehicle superclass is more generic than a
Truck subclass. Similarly, a
Shape superclass is more generic than a
Circle or a
Rectangle subclass.
It doesn't make sense to instantiate a generic class. After all, what would a
Vehicle object describe? Similarly, what kind of shape is represented by a
Shape object? Rather than code an empty
draw() method in
Shape, we can prevent this method from being called and this class from being instantiated by declaring both entities to be abstract.
Java provides the
abstract reserved word to declare a class that cannot be instantiated. The compiler reports an error when you try to instantiate this class.
abstract is also used to declare a method without a body. The
draw() method doesn't need a body because it is unable to draw an abstract shape. Listing 3 demonstrates.
Listing 3. Abstracting the Shape class and its draw() method
abstract class Shape { abstract void draw(); // semicolon is required }
Declaring fields, constructors, and non-abstract methods
An abstract class can declare fields, constructors, and non-abstract methods in addition to or instead of abstract methods. For example, an abstract
Vehicle class might declare fields describing its make, model, and year. Also, it might declare a constructor to initialize these fields and concrete methods to return their values. Check out Listing 4.
Listing 4. Abstracting a vehicle
abstract class Vehicle { private String make, model; private int year; Vehicle(String make, String model, int year) { this.make = make; this.model = model; this.year = year; } String getMake() { return make; } String getModel() { return model; } int getYear() { return year; } abstract void move(); }
You'll note that
Vehicle declares an abstract
move() method to describe the movement of a vehicle. For example, a car rolls down the road, a boat sails across the water, and a plane flies through the air.
Vehicle's subclasses would override
move() and provide an appropriate description. They would also inherit the methods and their constructors would call
Vehicle's constructor.
Downcasting and RTTI
Moving up the class hierarchy, via upcasting, entails losing access to subtype features. For example, assigning a
Circle object to
Shape variable
s means that you cannot use
s to call
Circle's
getRadius() method. However, it's possible to once again access
Circle's
getRadius() method by performing an explicit cast operation like this one:
Circle c = (Circle) s;.
This assignment is known as downcasting because you are casting down the inheritance hierarchy from a supertype to a subtype (from the
Shape superclass to the
Circle subclass). Although an upcast is always safe (the superclass's interface is a subset of the subclass's interface), a downcast isn't always safe. Listing 5 shows what kind of trouble could ensue if you use downcasting incorrectly.
Listing 5. The problem with downcasting
class Superclass { } class Subclass extends Superclass { void method() { } } public class BadDowncast { public static void main(String[] args) { Superclass superclass = new Superclass(); Subclass subclass = (Subclass) superclass; subclass.method(); } }
Listing 5 presents a class hierarchy consisting of
Superclass and
Subclass, which extends
Superclass. Furthermore,
Subclass declares
method(). A third class named
BadDowncast provides a
main() method that instantiates
Superclass.
BadDowncast then tries to downcast this object to
Subclass and assign the result to variable
subclass.
In this case the compiler will not complain because downcasting from a superclass to a subclass in the same type hierarchy is legal. That said, if the assignment was allowed the application would crash when it tried to execute
subclass.method();. In this case the JVM would be attempting to call a nonexistent method, because
Superclass doesn't declare
method(). Fortunately, the JVM verifies that a cast is legal before performing a cast operation. Detecting that
Superclass doesn't declare
method(), it would throw a
ClassCastException object. (I'll discuss exceptions in a future article.)
Compile Listing 5 as follows:
javac BadDowncast.java | http://www.javaworld.com/article/3033445/learn-java/java-101-polymorphism-in-java.html | CC-MAIN-2016-26 | refinedweb | 1,886 | 55.34 |
What is IPFS?
IPFS is short for InterPlanetary File System. It is a peer-to-peer, distributed file system that aims to make the web faster, safer, and more open. To shift from the present version of the web to a distributed one, we need IPFS. Essentially, the aim is to replace HTTP.
But hey, why replace HTTP?
- Crazy bandwidth cost: The present Web uses HTTP, built on a single client-server model. You always have to approach a central server to download any kind of file. Imagine if you could get bits of the same file from nodes near you: you could download the file faster while using less bandwidth. With video delivery, a P2P approach could save 60% in bandwidth costs.
IPFS makes it possible to distribute high volumes of data with high efficiency. And zero duplication means savings in storage.
2. 404 is so freaking common!: The average lifespan of a web page is 100 days. After that, one can expect to see a 404 message. The present Web is so fragile. Links break all the time. It's as good as burning books.
IPFS provides historic versioning (like git) and makes it simple to set up resilient networks for mirroring of data.
3. Centralised infrastructure, ugh: All the power over our data rests with a central server. If that breaks down, we are done. If Twitter breaks down, we can't tweet anymore. If Facebook breaks down, well, it is already broken haha.
IPFS remains true to the original vision of the open and flat web, but delivers the technology which makes that vision a reality.
4. Offline is the new online: In developing countries, during natural disasters, or on intermittently bad networks, what do we do? Just sit? The networks we're using are so 20th Century. We can do better.
IPFS powers the creation of diversely resilient networks which enable persistent availability with or without Internet backbone connectivity.
How does IPFS work? (In simple terms)
So if you want to retrieve a data structure or a file that is saved on the web using IPFS, you won't hit a central server at all. You'd ask your peers in the network for a path to that file. Your peers identify the file by something known as a 'cryptographic hash': a unique fingerprint of its content.
Suppose you want to get /foo/bar/baz.png and its cryptographic hash is QmXGTaGWTx1uUtfSb2sBAvArMEVLK4rQEc4g5bv7wwdz1U. (Such hashes can be generated using SHA-1, SHA-2, or any other hash algorithm; IPFS uses SHA-256 by default.) You hit the web with this link.
Wikipedia has already started using IPFS:
The URL is in the format of an IPFS gateway path: https://&lt;gateway&gt;/ipfs/&lt;hash&gt;/&lt;path to file&gt;
A bit about Merkle Trees
The research paper on Merkle link can be found here. Ralph Merkle is the brain behind Merkle data structure.
A beautiful illustration of what a Merkle tree is can be found here.
Basic implementation of a Merkle tree in C++
#include <stdio.h>
#include <stdlib.h>
#include <iterator>
#include <vector>

using namespace std;

// Hashing functions.
int multiplyThem(int a, int b) {
    return a * b;
}

int addThem(int a, int b) {
    return a + b;
}

class Merkle {
private:
    vector<int> values;          // leaf values
    int (*hasher)(int, int);     // pairwise hash function

public:
    Merkle(int (*f)(int, int)) {
        this->hasher = f;
    }

    int size() {
        return values.size();
    }

    void add(int value) {
        values.push_back(value);
    }

    // Hash pairs of nodes level by level until a single root remains.
    int root() {
        vector<int> current = getHashedParents(this->values);
        while (current.size() != 1) {
            current = getHashedParents(current);
        }
        return current[0];
    }

private:
    vector<int> getHashedParents(const vector<int> &children) {
        vector<int> result;
        for (size_t i = 0; i < children.size(); ) {
            int a = children[i], b = children[i];  // odd node pairs with itself
            if (++i < children.size()) {
                b = children[i++];
            }
            int hash = this->hasher(a, b);
            printf("hash(%d, %d)=>%d ", a, b, hash);
            result.push_back(hash);
        }
        printf("\n");
        return result;
    }
};

int main(int argc, char** argv) {
    Merkle merkle(multiplyThem);
    merkle.add(1);
    merkle.add(2);
    merkle.add(3);
    merkle.add(4);
    merkle.add(5);
    printf("Merkle Root = %d\n\n", merkle.root());

    merkle = Merkle(addThem);
    merkle.add(1);
    merkle.add(2);
    merkle.add(3);
    merkle.add(4);
    merkle.add(5);
    printf("Merkle Root = %d\n\n", merkle.root());
    return 0;
}
The heart of IPFS is IPLD.
IPLD is short for Inter Planetary Linked Data. The files/data structures are linked to each other using Merkle links.
(What is Merkle DAG? It is a Merkle directed acyclic graph. It is similar to Merkle tree. However, a Merkle DAG need not be balanced and its non-leaf nodes are allowed to contain data.)
In IPFS, links between two nodes are in the form of cryptographic hashes. This is possible because of the Merkle DAG data structure. Merkle DAGs give IPFS many useful properties, including:
- Content Addressing: All content is uniquely identified by its cryptographic hash, including links.
- Tamper proof: All content is verified with its checksum. If data is tampered with or corrupted, IPFS detects it because the hash will change.
- No duplication: All objects that hold the exact same content are equal (i.e. their hash value is equal), and only stored once.
Just by giving away the Merkle root to someone, you can hand over a huge volume of data to that person, because a Merkle root essentially holds the signature of all blocks underneath it.
Interoperability between systems can also persist in a Merkle forest, where each tree is a separate Merkle tree. In a forest, one tree can be Bitcoin, one can be Ethereum, one can be a regular SQL database. To interchange information between these trees, content-based cryptographic hash functions are efficient: rather than sending the entire file over, only the hash is sent. Imagine using Ethereum for some transaction and adding in a Git page within the transaction.
Presently this kind of a system is used by:
- Bitcoin
- Ethereum
- Git
- Bit Torrent
There are more, but these are the major ones.
Getting Started | IPFS Docs (ipfs.io): “If you haven't done so, install IPFS. Install IPFS now. During this tutorial, if you have any questions, feel free to…”
It’s, again, a groundbreaking technology which is pushing the web from Web 2.0 to Web 3.0.
PROBLEM LINK:
DIFFICULTY:
Cakewalk
PREREQUISITES:
Basic math and knowledge of Kaprekar numbers.
PROBLEM:
Given a number, count the number of turns until it gets converted to a number which repeats itself.
QUICK EXPLANATION:
As the length of the numbers can be at most 8, we can apply brute force, keep storing the numbers that we acquire, and count the turns.
EXPLANATION:
As the series always terminates fairly quickly even for large numbers, we can apply a brute force method.
We need to take the number and sort its digits in descending and ascending order to get maxa and mina respectively, which can be done in O(d log(d)) time (where d is the number of digits).
Store this difference in an array and check every time whether the number is repeating or not, which takes O(k) time (where k is the size of the array).
The iterations may vary but it can be proved that they won’t be more than the number n.
So, the overall complexity becomes O(d log(d) + m) (where m < n).
SOLUTIONS:
Setter's Solution
def get_num(N):
    number = list(str(N))
    number.sort()
    smallest = ''
    largest = ''
    for i in range(0, len(number)):
        smallest += number[i]
        largest += number[len(number) - 1 - i]
    small_num = int(smallest)
    large_num = int(largest)
    return [small_num, large_num]

for _ in range(int(input())):
    N = int(input())
    dic = {}
    if N not in dic:
        dic[N] = 1
    count = 0
    flag = 0
    check = N
    while flag != 1:
        ans = get_num(check)
        answer = ans[1] - ans[0]
        if answer in dic:
            flag = 1
        else:
            dic[answer] = 1
            count += 1
            check = answer
    print(count)
I've been having a very strange issue with the energies of a PyRosetta script I have been working on. It seems that I am getting NaN returned as the score value as I make mutations to a pose and upon further inspection it seems that the source of the NaNs is the p_aa_pp term. Specifically it seems that the score function subroutine of "eval_ci_1b" is the one producing the NaNs and "eval_ci_2b" doesn't seem to produce NaNs at all. It also seems to be a random occurrence as reinitializing with a new seed sometimes causes p_aa_pp to return a real number although the real number ends up being a different number each time. Is p_aa_pp supposed to be randomized? I wrote a minimal script to identify the problem and p_aa_pp seems to be randomized even though I'm not changing the pose. All the other score terms remain constant. The really weird thing is that this only happens on the manual build I performed for a BlueGene Q cluster. It never happened on the other two clusters that we've been using (which was using the latest downloadable pre-compiled PyRosetta package). Does anyone have any ideas on how to fix this? I don't know if it's a bug or if it's (probably) just something I messed up during compilation (it was a major headache trying to compile on BGQ). I used the instructions in rosetta3.4/rosetta_source/src/python/bindings/building.txt to build it, although I couldn't get all the protocols to build so I only built the ones I needed.
Obviously we'd like to try to get it to work on BGQ for the supercomputing power since we have a lot of parallel code, but if I disable p_aa_pp will that cause problems?
Here are the specifications of the BGQ if it's useful information:
RHEL 6, PPC64, Big-Endian
Python 2.7.3 (also had to be recompiled for BGQ)
As far as I know, the p_aa_pp term should be deterministic. That's not to say that a different seed wouldn't be an issue, as a different seed would result in a different structure, and thus a different p_aa_pp evaluation. Does the NaN come just from scoring, or is it in the context of packing/minimization?
It might help if you could come up with a reproducible example, preferably one that isn't dependent on the RNG seed. Try dumping the structure giving you NaNs as a binary silent file. (PDBs lose precision on coordinates - a binary silent file will keep the full precision.) You can then read in and rescore the structure both under BGQ and the local cluster - that should tell us if it's an issue with the structure itself, or with the p_aa_pp evaluation under BlueGene. I'm guessing it's some bug that only manifests itself under BlueGene. The BlueGene compiler tends to be more finicky than GCC, and we don't do much testing with it.
Another option you could try, if you're using a recent-ish version of PyRosetta, is to set the -corrections:score:use_bicubic_interpolation option to true. This tells Rosetta to use a slightly different evaluation technique for a number of score terms, including p_aa_pp, and so may avoid the NaN issues.
Sorry for my late response, I got busy with other things yesterday and forgot to respond. I dumped the first pose that generated a NaN in my script (using pose.dump_pdb) and then ran the following simple script:
from rosetta import *
init()
pose = pose_from_pdb("bad_pose.pdb")
scorefxn = create_score_function("standard")
print scorefxn(pose)
Output: nan
Although if I reinitialize and do the same thing, it sometimes gives me a real number. I think you're right, it's most likely something weird with the BG compiler. I might try to recompile in the future but it's a major hassle to compile. I tried using the bicubic interpolation option like you suggested by doing 'init(extra_options="-corrections:score:use_bicubic_interpolation")' and it says that it doesn't know what this option is. But it recognizes the option on other architectures. What if I just turn off p_aa_pp? Would that give unrealistic scores?
Are you using the same PyRosetta release on all the computers? The use_bicubic_interpolation option dates from about a year ago, so if your BG PyRosetta version is older than that it might not have the option.
If it's more recent than that, I'd seriously question the state of the compile. I'd highly recommend recompiling, as a botched compile has the possibility to give you bad/inaccurate results even in cases where there isn't an obvious error.
Regarding turning off the p_aa_pp term (probability of amino acid given phi and psi), if you're doing fixed sequence, fixed backbone protocols, the p_aa_pp won't change on you, and you can safely turn it off. (As long as you're not comparing to energies/thresholds made with the p_aa_pp score on.) If you're doing fixed backbone design, you may be alright in turning it off, although you may get some small position-specific sequence biases. If you're doing flexible backbone protocols (either fixed sequence or design), I'd hesitate in turning it off, as you might not get the correct backbone conformations. You may want to benchmark your particular case to see if it greatly affects things. You may need to up-weight the rama term (the conceptual inverse of p_aa_pp) to compensate.
The PyRosetta version on the non-BG computers is the latest version and the version on BG was compiled using Rosetta 3.4, so I think that bicubic interpolation should be available. I think you're right about the compile being bad. I'll try to recompile it next week and let you know what happens. I can't be sure that the rest of the energy calculation isn't messed up in some way as you said in your post.
Hey, sorry for the really late reply but I got busy with other things for the past month. Anyway, I just wanted to let you know that I tried recompiling PyRosetta on BlueGene Q to get rid of the NaNs that were showing up in the p_aa_pp score term. It took a while to recompile but I ended up getting it to work again. But the NaNs were still there! I got frustrated because I thought this would fix the problem, so I logged out of the BlueGene Q to do something else for a while. Then when I came back and tried running a job again to debug it the NaNs magically disappeared, and this happened for several consecutive runs. I was searching through the Rosetta code for anything related to p_aa_pp and found that it was trying to load data from rosetta_database. I thought one of the files in rosetta_database might have been corrupt so I pointed PyRosetta back to the old rosetta_database and sure enough the NaNs came back. So the problem was with rosetta_database all along! At least it seems highly probable that the database was the problem. When I tried submitting the first fresh job the PYROSETTA_DATABASE environment variable had been sourced from the old PyRosetta build, so the next login updated it to the new database. I don't know what happened exactly because there are the same number of files in old and new rosetta_database. I did learn the hard way that the data partition the sysadmin told me to use (because the normal one is too small) is volatile and files get erased automatically after a certain amount of time passes so maybe that has something to do with corrupting the files. Anyway, I just thought I'd post back here in case this helps someone else. Thanks for your help! | https://rosettacommons.org/node/3438 | CC-MAIN-2021-43 | refinedweb | 1,311 | 60.85 |
Many C and C++ programming beginners tend to confuse between the concept of macros and Inline functions.
Often the difference between the two is also asked in C interviews.
In this tutorial we intend to cover the basics of these two concepts along with working code samples.
Here is an example of a simple macro :
#define MAX_SIZE 10
The above macro (MAX_SIZE) has a value of 10.
Now let’s see an example through which we will confirm that macros are replaced by their values at pre-processing time. Here is a C program :
#include <stdio.h>

#define MAX_SIZE 10

int main(void)
{
    int size = 0;
    size = size + MAX_SIZE;
    printf("\n The value of size is [%d]\n", size);
    return 0;
}
Now let's compile it with the flag -save-temps so that the pre-processing output (a file with extension .i) is produced along with the final executable:
$ gcc -Wall -save-temps macro.c -o macro
The command above will produce all the intermediate files in the gcc compilation process. One of these files will be macro.i. This is the file of our interest. If you open this file and get to the bottom of this file :
...
...
...
int main(void)
{
    int size = 0;
    size = size + 10;
    printf("\n The value of size is [%d]\n", size);
    return 0;
}
Here are some examples that define macros for swapping numbers, square of numbers, logging function, etc.
#define SWAP(a,b)  ({a ^= b; b ^= a; a ^= b;})
#define SQUARE(x)  (x*x)
#define TRACE_LOG(msg)  write_log(TRACE_LEVEL, msg)
Now, we will look at the program below, which uses a macro to define a logging function. It accepts a variable argument list and displays the arguments on standard output as per the specified format.
#include <stdio.h>

#define TRACE_LOG(fmt, args...) fprintf(stdout, fmt, ##args);

int main()
{
    int i = 1;
    TRACE_LOG("%s", "Sample macro\n");
    TRACE_LOG("%d %s", i, "Sample macro\n");
    return 0;
}
Here is the output:
$ ./macro2
Sample macro
1 Sample macro
2. Macros for conditional compilation :
#ifdef PRJ_REL_01
..
.. code of REL 01 ..
..
#else
..
.. code of REL 02 ..
..
#endif
To comment out multiple lines of code, a macro is commonly used in the way given below:
#if 0
..
.. code to be commented ..
..
#endif
Here, we will see the above macro features through the working program given below.
#include <stdio.h>

int main()
{
#if 0
    printf("commented code 1");
    printf("commented code 2");
#endif

#define TEST1 1
#ifdef TEST1
    printf("MACRO TEST1 is defined\n");
#endif

#ifdef TEST3
    printf("MACRO TEST3 is defined\n");
#else
    printf("MACRO TEST3 is NOT defined\n");
#endif

    return 0;
}
Output:
$ ./macro
MACRO TEST1 is defined
MACRO TEST3 is NOT defined
2. To make a function inline irrespective of the optimization level, declare it with the always_inline attribute :
void func_test() __attribute__((always_inline));
#include <stdio.h>

void inline test_inline_func1(int a, int b)
{
    printf("a=%d and b=%d\n", a, b);
}

int inline test_inline_func2(int x)
{
    return x * x;
}

int main()
{
    int tmp;

    test_inline_func1(2, 4);
    tmp = test_inline_func2(5);
    printf("square val=%d\n", tmp);
    return 0;
}
Output:
$ ./inline
a=2 and b=4
square val=25
Cool. Thanks for the valuable information.
Macro SQUARE above is typical of naive macro usage and demonstrates one of the weaknesses of C macros. What happens to the macro
#define SQUARE(x) (x*x)
when you say SQUARE(v + 1)? The macro is expanded to (v + 1*v + 1). The result, which you probably expected to be equal to the arithmetical formula v*v + 2*v + 1 comes out to 2*v + 1.
Fix this by defining SQUARE as follows
#define SQUARE(x) ((x)*(x))
Now SQUARE(v + 1) expands to ((v + 1)*(v + 1)). This is much better, but it won’t save you if you invoke SQUARE with an expression having side effects.
SQUARE(++i) evaluates to ((++i)*(++i)) when the effect you probably wanted was more like (++i, (i*i)). This expression uses the rather obscure C comma operator, about which more could be written.
These difficulties with macros are why C has evolved inline functions.
It is a REAL PLEASURE to read your newsletter!
Very good job!
I hope you get something out of it!
God bless you!
best regards
Giovanni
Hi, all your posts there are very useful on my work! Thanks a lot!
Matias (From Argentina)
Yes, i agree with SeattleC++ please make a change in that particular Macro definition.
>> #define SQUARE(x) ((x)*(x))
Thanks a lot.
It is very useful and very clear to understand.
Need on your opinion why following macro-function doesn’t work:
#define def_1(var) { \
int s[var]; \
}
int main(){
def_1(2);
s[0]=1; s[1]=3;
printf("s[0]=%d\t s[1]=%d\n", s[0], s[1]);
return;
}
notice that no space is allowed between the identifier and the parenthesis, as you wrote:
#define SQUARE(x) (x*x)
also pay attention to the pitfalls of such writing, i.e:
int x = (int)SQUARE(a)
it will be replaced with (int)a*a, means that the casting is applied to the left part only.
you can think of other pitfalls for sure. thus, it shoud be written as SQUARE(x)((x)*(x))
To SeattleC++: There is nothing “obscure” about the comma operator, unless you’re unwilling to read. The comma is simply a precedence operator performing checkpointing.
To bassam: Wrong. The macro expansion along with the typecast will look like this: (int)(a*a). Why would the parenthesis go anywhere? It’s part of the macro. And no, no standard or pre-processor handbook will ever tel you that whitespace isn’t allowed between a macro definition and the identifier. That’s stupid.
abhijit: Whenever working with debugging macros, it’s a good idea to actually see the preprocessor’s output. I’m on a phone that suspiciously doesn’t like c/ping so this is paraphrased, but what you have is kinda like this:
void function() {
{
int s[2]
}
/* do stuff with s */
}
The problem is that the curly braces you wrapped around int s[2]; are that variable’s scope. So its scope ends before you actually do anything with it. Probably, you’ve heard the advice to put curly braces around macros so they behave like functions, but if you define a function local variable in function B, then function A which happens to call function B will never be able to safely access that variable… not that macros ever truly behave like functions anyway.
If you use gcc, I believe -E is the option to print out postprocessed code. Very useful for macro debugging.
hi
is the function be inline at the compile time???
can i see the the replacement in .i file
OMG. I got stuck on the macro but not it is very clear after reading your example. Awesome job. Seems you are a very good teacher. Thank you so much! | http://www.thegeekstuff.com/2013/04/c-macros-inline-functions/ | CC-MAIN-2016-18 | refinedweb | 1,126 | 73.78 |
Objects in Perl are my nemesis. It is the one place in Perl where there seem to be too many options, and, in the past, I've just avoided dealing with it.
While learning Perl, I went through the relevant chapters in Programming Perl, perldoc perlboot, perltoot, and perlobj, and tutorials like Tutorial: Introduction to Object Oriented Programming here. All was well and good, I knew how to use the object interfaces of other modules, and my scripts were so small in size and scope that I didn't need to go OO myself. I was writing all of my "heavy" stuff in Java.
I'm currently starting to build a large-ish web application in Perl that is a great candidate for an OO style, and I've been looking at my options. The Cookbook pointed me towards various neat-looking CPAN helper modules, and that spurred me to go poke around CPAN on my own.
There's too much there! From "the whole package" object systems to little mix-and-match helpers in the Class namespace, there seem to be millions of choices.
Based on my search, I'm leaning towards Moose. It's simple, I like the features it supports, and in many cases it makes objects work more like I expect them to work.
The incredible array of options made me wonder what other people choose, both out of curiosity and to hear about modules I've not yet explored. So, when you are creating an OO style application in Perl, what tools do you reach for to make the job easier?
I have been a fan of Class::Std but if I was to look at another module I would consider Class::InsideOut.
The more I worked with objects, the more I believe in using inside-out objects. It keeps you and your co-workers from being able to do stupid stuff(like directly accessing object data).
I also like to work with get_* and set_* methods rather than one method that can do both. A number of method generators don't have that option and don't play well with inside-out objects, relying more on blessed hashes.
Another issue is whether your object is going to represent data from a database. There are quite a few distributions for doing that, but I have not been happy with most of them and ended up writing my own. I have recently gone back and taken a look at them again and one that is striking my fancy is Rose::DB::Object. The documentation and syntax are reasonably clear so I will probably start looking at that seriously.
use Test::More;
You are deliberating over what tool to use, but I would invite you to postpone that question and get busy writing a test suite.
What do you want your class to do? Have your test suite cover every interface method (including the constructor and the accessors), but above all convince yourself that a class that passes your tests will accomplish your goals.
I confess I find it hard to write tests when I really want to be coding the implementation, but I remind myself that
Now you can safely pick a tool at random from among your favorite answers in this thread. You can even change your mind later and convert your classes to a different implementation. If your design is properly encapsulated, you'll be fine.
That said, I do have my own preference for inside-out or flyweight implementations. I think building classes in this way is like use strict. It catches dangerous practices (in this case, accessing your object's internals directly) at an earlier stage of development.
I may sound insane, but I like Perl 5's object model as it allows me to choose the right way for the design I'm interested in, while keeping everything Perlish enough. (Seeing that "Pure OO" languages tend to force their OO way-of-thinking on you). On the other hand, I also like JavaScript's prototype system as well, so I might not be the best person to refer to.
Since you wrote that your web-application is a "great candidate for an OO style", I would suggest that you should pursue this idea before writing any code. Find out what style you wish to utilise and the object model that you'll seek, and then find the tools that satisfy your needs. Another way of doing this is actually finding the right tools (which components your web-application will use) and the API that goes with them. Most of CPAN's modules are already displaying a heavy OO API, which may suggest a way for you to work.
While Java (et al.)'s "Everything's an object (usually called Object)" state of mind isn't the only option out there, it seems that most people prefer their OO this way, making Moose a good choice, with its "Everything's a Moose::Object" style.
UPDATE: Just checked and found that in JavaScript Everything is, in fact, an Object.
Software speaks in tongues of man; I debug, therefore I code.
Stop saying 'script'. Stop saying 'line-noise'.
While Java (et al.)'s "Everything's an object (usually called Object)" state of mind isn't the only option out there, it seems that most people prefer their OO this way, making Moose a good choice, with its "Everything's a Moose::Object" style.
Moose is more than just "Everything's a Moose::Object". We can also do prototype style OO, and if you are so inclined inside-out objects with MooseX::InsideOut. And honestly plain Moose is not nearly as "everything is an object" as when you use Moose::Autobox.
Moose is more than just "Everything's a Moose::Object".
I was over-symplifying, of course, in the process of trying to cater for the OP's Java-based background.
We can also do prototype style OO
Now *that* would be interesting, although the URL you link to seems to be broken.
In the rare case where I'm creating a truly generic class I'll usually reach for Class::Accessor::Fast to do the busy work. It's quick, light and gets the job done. I used to like Class::MethodMaker but I'm not a fan of where the module has gone since v2 came out.
-sam
I really).
One day I will likely need something more sophisticated, but until then I'm happy being able to roll up my own automatic accessor method generator if I need those.
-- | http://www.perlmonks.org/?node_id=665591 | CC-MAIN-2015-14 | refinedweb | 1,096 | 70.02 |
This project introduces a Windows Explorer clone in an early state. It supports browsing through all files and folders on your computer, including virtual folders. It uses the same ContextMenus as Windows Explorer and includes drag and drop support.

I created this project with Visual Studio 2005 (.NET 2.0) and haven't tried it with .NET 1.1. I'm pretty sure it will work with .NET 1.1, but some changes need to be made. For example, I used the ToolBarMenuStrip, which isn't available in 1.1. If anyone wants a 1.1 version and isn't able to convert it, I'm willing to convert it myself and provide the code.
Quite a lot of different new things in this update. The most important new feature is plug-ins. You can now add your own plug-ins to this application, which will be used to add columns to the details view and to add special views. More on this in the "Plug-ins" section. Another important update is an addition to this article which explains how to use this Control in your own application and how to use its functions; see the "Using the Control" section for this update.
Furthermore there are some small updates, additions and bug fixes. See the "History" section for these updates.
A bit of a small update really, but quite a handy one. I added the "New" menu to the standard ContextMenu of the ListView. So now it's possible to add new folders and files from within the program.

A few bug fixes and another change to the update thread have also been made, which also brings some speed improvement.
This is quite a large improvement since the first version. It doesn't include a lot of new features, but has a lot of fixes and speed improvement. The most important fix is the memory leak fix, which was caused by the update thread. When I was solving this problem I also added another update method. This method uses the SHChangeNotifyRegister function to retrieve Shell notify messages. These messages are used to make some more updates, like changing icons, inserting media and renaming. So now when you insert a disc into your disc-drive the icon and text of the drive will change to the ones from the disc.
One new feature which needs some attention is the rename function. You can now rename items by selecting the rename item from their ContextMenu or by pressing F2. Be aware though that this will also change the extension of the file, but it will warn you if you do this. When renaming multiple items it will take the name you entered and add a number to it, different for any item, somewhat like Windows Explorer. I recommend trying the rename function on some test files to see exactly what it does, before using it on other files.
For any other changes made, see the "History" section.
I was looking for something nice to program, when I got the idea of making my own Windows Explorer. I started this project with the idea of making an enhanced version of Windows Explorer with plug-in support. But before being able to enhance the Windows Explorer, you got to have a program which works like Windows Explorer. So I started searching the Internet for solutions.
While searching the Internet I found a lot of programming around Windows Explorer and Shell extensions. But none really had everything I needed and most programs were written in C++ while I really wanted one in C#. Finally I found the article I needed to start my program: An All VB.NET Explorer Tree Control with ImageList Management. Although it was written in VB I could get a really great deal of information from it and the largest part of this project relies on that article. So for more information or for a VB version see that article.
The only problem now was that I had never worked with the Windows Shell before. So first things first, I searched for articles explaining about how the Shell works and what you can do with it. Well, you can do a lot with it, too much to explain here. If you never worked with the Shell before or don't really know how the Shell works, I recommend this article: C# does Shell. This article also provides you with some links to MSDN articles. It takes some time to read them, but it definitely helped me a lot in making this program. You can also find a lot of info about all the Shell methods, structures and enumerations used in this program on MSDN.
After the base of the program was created I started implementing things like the Shell ContextMenu, drag/drop support and a Windows Explorer like ComboBox. I didn't really find a nice article on CodeProject for this, but a whole bunch were available on the Internet. I programmed everything with a wrapper around the Shell functions from the Windows API and was surprised how well it worked.
To use this control in your own program, add a reference to the dll to your project. After that you can add the Browser control to the toolbox and add it to your own project. I've created some properties which allow you to alter the behaviour and look of the control (at design time):
StartUpDirectory is an enumeration of special folders which are used to determine the startup location of the Browser. If you want to provide your own location, you must set this value to "Other" and provide your own location in the StartUpDirectoryOther property.
The ShellBrowser and PluginWrapper properties are used when you want to add more than one Browser control to your program. You can link those Browser controls by setting the ShellBrowser and PluginWrapper to the same object. This will make the program run a lot faster and more efficiently than using a different ShellBrowser and PluginWrapper for the controls. In the demo project you'll see an example of how to add two Browser controls to a project.
There are also a few properties which you can only use at run-time:
Lastly, there are methods to programmatically do some actions for the Browser:
SelectPath takes 3 different Objects to set the current directory: either the ShellItem of the folder to select, a string of the path to the directory (this can also be like "My Documents\My Music"), or a value of the SpecialFolders enumeration.
These are the classes which provide the actual control.
These classes are quite simple and don't need a lot of explanation. To use my control in your project, you actually only need to use the Browser class. Just add this control to a form and all should be working. For more info, take a look at the comments in my code. Unfortunately at the moment my code doesn't have many comments, but I will try to add more shortly.
These classes provide easy access to Shell functions.
ShellAPI and ShellImageList are very much like the classes in Jim Parsells' project I mentioned earlier. They are similar to ShellDll and SystemImageListManager respectively. For more info about these classes, first try his article.

ShellItem comes from his CShItem class, but I've completely rewritten it to match my needs. I'm not going to go through the details of this class, but if many people really need more info about it, I might write an article about it.
These classes provide a wrapper around the drag/drop operations and the ContextMenus for the control.
These classes are the most important and they are the ones I will explain in the rest of this article.
The first thing you'll notice when trying to find a nice article about getting the Shell ContextMenu in your program is that almost all articles are about making extensions to the menu and not about retrieving it for your own program. Luckily I found one blog that did explain this very thoroughly: How to host an IContextMenu. It was all in C++, so I had to translate it to C#. As this is an article consisting of 11 parts, I'll try to explain everything here from a C# point of view. I'm going to assume you are familiar with the Shell namespace and pidls, as it would take a lot of time to explain these here and this article is meant to cover the ContextMenu. So if you are not familiar with these terms, look for an article on those things first. The one I mentioned at the start of this article was all I needed (C# does Shell).

Before I explain the procedure for showing the ContextMenu, I'll give a short description of the interfaces we are going to use:
So know let's get started with the
ContextMenu stuff. To retrieve the menu you want, you'll need a few things:
IShellFolderinterface from the parent directory
ContextMenufor
IContextMenufrom the same items
Not much has to be done to obtain the
IShellFolder interface. The
ShellItem class provides the
IShellFolder for each directory, so you just have to get the
ShellItem class for the parent directory and then you'll have the
IShellFolder interface. The pidls can also be retrieved from the
ShellItem class. In my control each
TreeNode and
ListViewItem has their own
ShellItem in their
Tag property, so it is also quite easy to get the pidls you need. After this has been done you have everything you to get the
IContextMenu interface. The
IShellFolder interface has a method which will provide a lot of different interfaces for its children, these interfaces include the
IContextMenu. We need to make a call to the
GetUIObjectOf method from the
IShellFolder, like in the following example:
public static bool GetIContextMenu( IShellFolder parent, IntPtr[] pidls, out IntPtr icontextMenuPtr, out IContextMenu iContextMenu) { if (parent.GetUIObjectOf( IntPtr.Zero, (uint)pidls.Length, pidls, ref ShellAPI.IID_IContextMenu, IntPtr.Zero, out icontextMenuPtr) == ShellAPI.S_OK) { iContextMenu = (IContextMenu)Marshal.GetTypedObjectForIUnknown( icontextMenuPtr, typeof(IContextMenu)); return true; } else { icontextMenuPtr = IntPtr.Zero; iContextMenu = null; return false; } }
As you can see, you need an array of
IntPtr. This array includes the pidls of the items for which to retrieve the
IContextMenu. This can be any number, in our program this number depends on how many items are selected. With
GetUIObjectOf you'll get a pointer to the
IContextMenu and to obtain the real interface you need to use the
Marshal class.
Now we need a
ContextMenu and because we are calling only Windows API methods, all we need is a
Handle to a
ContextMenu. To make a new
ContextMenu the Windows API way, we just need to call
ShellAPI.CreatePopupMenu(), which will return a pointer to the new
ContextMenu. You can now add all the menu items from the Shell
ContextMenu by calling the
QueryContextMenu method from the
IContextMenu interface.
contextMenu = ShellAPI.CreatePopupMenu(); iContextMenu.QueryContextMenu( contextMenu, 0, ShellAPI.CMD_FIRST, ShellAPI.CMD_LAST, ShellAPI.CMF.EXPLORE | ShellAPI.CMF.CANRENAME | ((Control.ModifierKeys & Keys.Shift) != 0 ? ShellAPI.CMF.EXTENDEDVERBS : 0));
Now the
contextMenu pointer points to the
ContextMenu we need. After this call you can change the menu in any way you want. To change the menu you can use the API functions
AppendMenu and
InsertMenu from the
ShellAPI class. After that it's time to show our menu to the user. We do this by calling
ShellAPI.TrackPopupMenuEx. This method will wait for the user to select an item and will return the id of the selected item. This id is not just the index of the item in the list, but it's a special id. To execute the command that goes with the selected item we need a
CMINVOKECOMMANDINFOEX structure. We can use this with the
InvokeCommand method from the
IContextMenu to execute the selected command. For more info about this structure see MSDN.
ShellAPI.CMINVOKECOMMANDINFOEX invoke = new ShellAPI.CMINVOKECOMMANDINFOEX(); invoke.cbSize = ShellAPI.cbInvokeCommand; invoke.lpVerb = (IntPtr)cmd; invoke.lpDirectory = parentDir; invoke.lpVerbW = (IntPtr)cmd; invoke.lpDirectoryW = parentDir; invoke.fMask = ShellAPI.CMIC.UNICODE | ShellAPI.CMIC.PTINVOKE | ((Control.ModifierKeys & Keys.Control) != 0 ? ShellAPI.CMIC.CONTROL_DOWN : 0) | ((Control.ModifierKeys & Keys.Shift) != 0 ? ShellAPI.CMIC.SHIFT_DOWN : 0); invoke.ptInvoke = new ShellAPI.POINT(ptInvoke.X, ptInvoke.Y); invoke.nShow = ShellAPI.SW.SHOWNORMAL; iContextMenu.InvokeCommand(ref invoke);
In the previous example the cmd variable is the selected index. All we need to do is cast this to a pointer and the Shell functions know what to do with it. As you can see I also included some code for
ModfierKeys. As you might know, when you delete a file using Windows Explorer, there are two ways to do it: moving it to the recycle bin, or deleting it permanently. When you just press delete, the selected file will be moved into the recycle bin, but when you hold shift and press delete, the file will be deleted permanently. That is why you have to add the
ModifierKeys to the structure.
Another thing to notice is that we add a
POINT to the structure. This
POINT represents the place on the screen where you pressed the right mouse button. Have you ever noticed that when you clicked Properties on the
ContextMenu of Windows Explorer, that the Properties window will be shown on the point where you right clicked your mouse button? Well it does and to have the same effect in your program you will have to set this
POINT.
When all of this worked I was really happy, but soon I found something strange. When you select the "Open With" or the "Send To" submenus, you don't see other menu items in it. As we also want this menus to work, we need to get some more interfaces. The
IContextMenu has two child classes which are needed to get the menu's to work:
IContextMenu2 and
IContextMenu3. To get these interfaces we simply use the
Marshal class like this:
Marshal.QueryInterface( icontextMenuPtr, ref ShellAPI.IContextMenu2_IID, out context2Ptr); Marshal.QueryInterface( icontextMenuPtr, ref ShellAPI.IContextMenu3_IID, out context3Ptr); iContextMenu2 = (IContextMenu2) Marshal.GetTypedObjectForIUnknown(context2Ptr, typeof(IContextMenu2)); iContextMenu3 = (IContextMenu3) Marshal.GetTypedObjectForIUnknown(context3Ptr, typeof(IContextMenu3));
These interfaces will draw the menus for us, but they need to know when to do this. For this we need to override the
WndProc method and check the messages that are being send to it. When these messages are about creating, measuring or drawing the
ContextMenu items we will call the
HandleMenuMsg and
HandleMenuMsg2 methods from the
IContextMenu2 and
IContextMenu3 interfaces respectively, these methods will do the rest of the necessary work.
As you can read on MSDN, the
IContextMenu2 interface will process the
WM_INITMENUPOPUP,
WM_MEASUREITEM and
WM_DRAWITEM messages and the
IContextMenu3 interface will process the
WM_MENUCHAR message. So if you encounter one of these messages while showing the
ContextMenu call the
HandleMenuMsg and
HandleMenuMsg2 methods to handle the specific messages.
protected override void WndProc(ref Message m) { if (iContextMenu2 != null && (m.Msg == (int)ShellAPI.WM.INITMENUPOPUP || m.Msg == (int)ShellAPI.WM.MEASUREITEM || m.Msg == (int)ShellAPI.WM.DRAWITEM)) { if (iContextMenu2.HandleMenuMsg( (uint)m.Msg, m.WParam, m.LParam) == ShellAPI.S_OK) return; } if (iContextMenu3 != null && m.Msg == (int)ShellAPI.WM.MENUCHAR) { if (iContextMenu3.HandleMenuMsg2( (uint)m.Msg, m.WParam, m.LParam, IntPtr.Zero) == ShellAPI.S_OK) return; } base.WmdProc(ref Message m); }
Once you implemented this, you will see that now the submenu's will also work the way they are supposed to.
This is the main idea to get the
ContextMenus to work. My program also adds the Collapse and Expand
MenuItems on a
TreeNode
ContextMenu, like Windows Explorer does. It will also raise an event for showing the
ContextMenuItems help
String when the item is being hovered over. Just check my code if you need to know how to do this.
Before I started implementing the drag/drop support to my program I read the following article by Jim Parsells: Adding Drag and Drop to an Explorer Tree Control and another one by Michael Dunn: How to Implement Drag and Drop Between Your Program and Explorer. These let me in the right way and also made it a bit more challenging for me. At the end of Jim Parsells article he mentions some problems when implementing it the way he did. I think I solved these problems, by implementing it without using the .Net drag/drop methods. So forget all the nice implementations of .Net, we are going to use the Windows API to get this to work.
Three new interfaces are needed for drag and drop support:
The nice thing of the Shell namespace is that it will do all the dirty work for you, the only problem is to figure out how to let the Shell do his job. Once you are working with the Shell and get more familiar with it however, this will get a lot easier and you will find solutions to your problem quite fast. Once I got the hang of the
ContextMenu stuff, it was actually quite an easy job for me to implement drag/drop.
To register your program for drop operations you need a class which implements the
IDropTarget interface. In my program the
BrowserTVDropWrapper and the
BrowserLVDropWrapper are both
IDropTargets. Before we get the necessary events raised in our own classes we need to register them. You can do this by calling the
ShellAPI.RegisterDragDrop method, this method takes two arguments. One argument being the handle of the control to register the drag operation for and the other being the
IDropTarget to receive messages about the drag. You also need to revoke your registration once you program finishes using the
ShellAPI.RevokeDragDrop method. Once you registered your
IDropTarget, your class will receive 4 different messages which need some more attention.
The first message being
DragEnter, which will be called when someone drags an object and enters you control. You will receive a pointer to the
IDataObject being dragged, the current state of the modifier keys and mouse buttons, the location of the mouse pointer and a reference to an instance of the
DragDropEffects enumeration. This is quite a lot of info, but we don't really need to use it all. The Shell provides us with an
IDropTarget from specific Shell objects which will do all the work for us. The only thing we need to do is check which item is being dragged over, obtain the
IDropTarget for that item and pass all the info to that interface. To get the
IDropTarget from an item, we have to call the
GetUIObjectOf method from the parent
IShellFolder interface again (as with the
IContextMenu interface). So the basic idea in code form looks like this:
private ShellDll.IDropTarget GetIDropTarget(ShellItem item, out IntPtr dropTargetPtr) { ShellItem parent = item.ParentItem != null ? item.ParentItem : item; if (parent.ShellFolder.GetUIObjectOf( IntPtr.Zero, 1, new IntPtr[] { item.PIDLRel.Ptr }, ref ShellAPI.IID_IDropTarget, IntPtr.Zero, out dropTargetPtr) == ShellAPI.S_OK) { ShellDll.IDropTarget target = (ShellDll.IDropTarget)Marshal.GetTypedObjectForIUnknown( dropTargetPtr, typeof(ShellDll.IDropTarget)); return target; } else { dropTargetPtr = IntPtr.Zero; return null; } } public int DragEnter( IntPtr pDataObj, ShellAPI.MK grfKeyState, ShellAPI.POINT pt, ref DragDropEffects pdwEffect) { Point point = br.FolderView.PointToClient(new Point(pt.x, pt.y)); TreeViewHitTestInfo hitTest = br.FolderView.HitTest(point); dropNode = hitTest.Node; if (dropNode != null) { ShellItem item = (ShellItem)dropNode.Tag; parentDropItem = item; dropTarget = GetIDropTarget(item, out dropTargetPtr); if (dropTarget != null) { dropTarget.DragEnter(pDataObj, grfKeyState, pt, ref pdwEffect); } } return ShellAPI.S_OK; }
When the
DragEnter method has been called, the
DragOver method will be called many times while the dragged item is over your control. This way you can give specific information on where the dragged item can be dropped and where it can't be. We can once again let the Shell do all the dirty work, just like with the
DragEnter method.
Now there are two methods left, either the drag operation on your control will be canceled, or it will succeed and the item is dropped on your control. For when the operation is cancelled there is the
DragLeave method. There is no additional info given, just a notice that the drag has ended on your control. Nothing much has to be done now, except for preparing your class to receive another drag operation.
If the drop succeeds the
DragDrop method will be called providing you with pretty much the same info as the
DragEnter and
DragOver methods. The only difference being that some action has to be taken. Once again this action will be performed by the Windows Shell. When we call the
DragDrop method from the
IDropTarget which we retrieved earlier, the Shell does all the work for us. All the same notifications and process windows are shown like when you are using Windows Explorer.
Well that's pretty much it for the drop operations. In my classes a bit more has been done to give it all a nice look. This includes selecting the node over which you are dragging an object, and showing the nice ghost image from the content you are dragging which Windows Explorer shows. But these things aren't really necessary to get it all working.
Once the drop part is out of the way, it's time to implement the drag operations. This is the part where it's getting a bit different from the VB explorer. In Jim Parsell's explorer, he uses a special class to create
IDataObjects for the items being dragged. He's doing it the .NET way by making use of .NET's
IDataObject interface, but actually you do not really need to do anything with the
IDataObject interface other than passing it on. That is, if you are doing it the Shell way, which is in my opinion a lot easier than the .Net way.
Because we are going to drag items using API methods, we are going to need an
IDropSource interface. This interface will take care of any drawing or canceling while dragging your item. The
BrowserTVDragWrapper and
BrowserLVDragWrapper classes implement this interface, and they will make sure dragging will be supported.
The first thing we need is to get a notification when an item is being dragged. Both the
TreeView and
ListView have an event for this (
ItemDrag), so we just register to it. Once a drag has been initialized we need to make a call to an API method, to register the wrapper as an
IDropSource and to trigger the drag. The method to call is
ShellAPI.DoDragDrop, it has two input arguments and one output. The two input arguments are the
IDataObject from the item being dragged and an instance of the
DragDropEffects enumeration telling the method which drag/drop effects are allowed. The output argument is also an instance of the
DragDropEffects enumeration specifying which effect has been executed.
The
DragDropEffects are easy to provide, but the
IDataObject needs a bit more work. Fortunately we've already seen the procedure to get this interface twice. We can once again use the
GetUIObjectOf method (this method is really very useful). Notice that when you are dragging multiple items the
ItemDrag event will only be raised once, so you'll have to check which items are selected to get the right
IDataObject.
public ShellDll.IDataObject GetIDataObject(ShellItem[] items, out IntPtr dataObjectPtr) { ShellItem parent = items[0].ParentItem != null ? items[0].ParentItem : items[0]; IntPtr[] pidls = new IntPtr[items.Length]; for (int i = 0; i < items.Length; i++) pidls[i] = items[i].PIDLRel.Ptr; if (parent.ShellFolder.GetUIObjectOf( IntPtr.Zero, (uint)pidls.Length, pidls, ref ShellAPI.IID_IDataObject, IntPtr.Zero, out dataObjectPtr) == ShellAPI.S_OK) { ShellDll.IDataObject dataObj = (ShellDll.IDataObject) Marshal.GetTypedObjectForIUnknown( dataObjectPtr, typeof(ShellDll.IDataObject)); return dataObj; } else { dataObjectPtr = IntPtr.Zero; return null; } }
Once the drag has been initialized, your
IDropSource interface will receive two messages concerning the drag. The first one is
QueryContinueDrag, which asks what to do with a certain situation, either perform the drop, cancel it, or continue dragging. You get some information to determine what to do. You'll get a bool whether the escape key has been pressed, if so the operation has to be cancelled. You also get the state of the modifier keys and the mouse buttons. This is where you have to check whether to continue the drag or perform the drop. If the mouse button which initialized the drag is still pressed, continue the drag, otherwise perform the drop. Then this is what the method is going to look like.
public int QueryContinueDrag(bool fEscapePressed, ShellAPI.MK grfKeyState) { if (fEscapePressed) return ShellAPI.DRAGDROP_S_CANCEL; else { if ((startButton & MouseButtons.Left) != 0 && (grfKeyState & ShellAPI.MK.LBUTTON) == 0) return ShellAPI.DRAGDROP_S_DROP; else if ((startButton & MouseButtons.Right) != 0 && (grfKeyState & ShellAPI.MK.RBUTTON) == 0) return ShellAPI.DRAGDROP_S_DROP; else return ShellAPI.S_OK; } }
The other message your interface will receive is
GiveFeedback. You will get the current
DragDropEffect which applies to the dragged object. This message will allow you to change the
Cursor to match this specific
DragDropEffect. Because the normal
Cursors are the ones we need, this method will only have one line in it. The Shell provides us with an option to just use the standard
Cursors for drag/drop operations, which is exactly what we want. So the method will look like this.
public int GiveFeedback(DragDropEffects dwEffect) { return ShellAPI.DRAGDROP_S_USEDEFAULTCURSORS; }
Well, we're done. That was all you have to do for the drag operations. I haven't said anything about the browsing part of my control just because it would take too much time and make this article too long. If anyone really wants to know, I might write another article about this.
With the 1.3 update I added the option to add plug-ins to the program. These plug-ins are to gain extra information about files and folders. At this moment I have two different plug-ins and one is in the making. The first of the two is a plug-ins is a plug-in to retrieve extra columns for the details view of the
ListView. Without plug-ins you only have the "Name" column which is just to little for a details view. With the demo project I have added a plug-in of this kind which addes the "size", "date created" and "date modified" columns. The second plug-in is a bit more advanced, it is a special view for the
ListView. In the demo project I have added a demo plug-in of this kind which will add the "Image View" to the
ListViews view options. If you select this view, you'll get to see a preview of images once you select them. See the following picture to get a better idea of what I mean.
To make your own plug-ins, you'll have to make a project with a reference to the FileBrowser.dll. Once you've done this there are two interface for the plug-ins I mentioned. One is the
IColumnPlugin, the other is the
IViewPlugin. You have to make a public class which implement one or even both of these interfaces. After you've done that build your project as a Class Library which will create a DLL-file. Add this dll-file to a folder named "plugins" in the folder where you start your program and start your program. Now your plug-in should be loaded and you can use it. For a demo project see the plug-in demo project you can download above.
Before you can build your own plug-in however, you obviously need to know what every method of both interfaces do and when they are called. So I'll give a short explanation of what they do. Both interfaces implement the basic
IBrowserPlugin interface which I will explain first.
IBrowserPlugin
These properties aren't used at the moment, but I will use them later to list the loaded plug-ins and to allow the user to select which plug-in to use.
IColumnPlugin
The
GetFolderInfo and
GetFileInfo methods are called when the current directory changes, they return the info which will be put in the columns for the plug-in. The plug-in will get two arguments when this method is called. Either an
IDirInfoProvider if the item is a directory or
IFileInfoProvider when the item is a file. These interface will provide a structure with info about the file or folder and for files it will also provide a
Stream to that file. The second argument is the
ShellItem of the specific item to provide the info for. With these two arguments the plug-in should retrieve the needed info and provide a
string for the column. To get a better idea of the possibilities see the demo project.
IViewPlugin
The
ViewControl can be about anything you like, so you can make quite a variety of view plug-ins. Just make sure the
Control is initialized when the plug-ins constructor is called or you'll get cross-thread problems. The
FolderSelected and
FileSelected methods will be called when an item is selected and have the same arguments as the
GetFolderInfo and
GetFileInfo methods. In my demo plug-in I use the
Stream I got from the
IFileInfoProvider to read a picture and show it on the
Control.
If you need any more info on the plug-ins please post a message below and I will try to answer as soon as possible. In the next update I hope to include a third plug-in with which you can add commands to the
ContextMenu. For now you can experiment with these two.
While I was writing my program I used a lot of sources on the Internet, because there is just so much written for this subject, so I can't give you all the sites which contributed to my work, but I'll give you the main articles which are the ones I couldn't have done without.
TextBoxpart of the
ComboBox: ImageComboBoxControl
ContextMenu: How to host an IContextMenu
There is definitely room for improvement in my program. The first thing I need to do is add much more comment to my code, because it almost hasn't got any, after that a few main things need to be polished:
ContextMenuof the
ListView
08/23/2006: V1.3.3
SHNotifyRegisterand
SHNotifyDeregisterin order to prevent any problems when calling them
08/22/2006: V1.3.2
PIDLClass, Added IL-functions
TreeViewto browse folders
08/21/2006: V1.3
Browser
Browser
Tooltips for
ListViewItems (only files for now)
ShellHelperclass with methods which are used often
Browserstartup
08/14/2006: V1.2
ListView's
ContextMenu
08/11/2006: V1.1
PIDLclass to use Shell32 imported methods
ListView
ContextMenu
08/05/2006: V1.0
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/miscctrl/FileBrowser.aspx | crawl-002 | refinedweb | 5,117 | 64.81 |
The DFS Namespaces team is interested in learning more about how our customers are using namespaces. The answers to these questions will help them when considering future changes to DFS Namespaces. You can send your response via the “Email” link on this blog.
- What do you use DFS (namespace) for?
– User home directory/drive
– Software distribution
– Other (explain)
- Approximately how many domain-based and standalone DFS namespaces do you have?
- What are the factors influencing your decision on whether you want a domain-based or standalone DFS namespace, and/or using MSCS (clustering) with DFS namespaces?
- In your biggest DFS namespace (standalone or domain-based):
– How many links do you have? Is the maximum decided on the basis of necessity or other factors (please explain)
– What’s the maximum number of link targets do you have for any DFS link? Is the maximum decided on basis of necessity or other factors (explain)?
– If domain-based, how many root targets do you have? How distributed are these (Across the globe etc)?
- How frequently do you
– Create or delete links?
– add/remove link targets?
– Add/remote root targets?
- How do you manage your namespaces? Why?
– Built-in Windows DFS Management UI
– DFSUTIL/DFSCMD(Would you like it to be scriptable using PowerShell and/or .NET)
– Home-grown or free community utilities and scripts. What do they do?
– Commercial tools. What were the critical features they provided?
- What is your wish list related to scalability/performance/manageability/features
- What are the manageability requirements related to:
– Moving shares from one server to another:
– Splitting shares or namespaces
– Adding new storage into the namespace
– Detection and clean-up of stale links
– Detection and clean-up of non-existent users
– ACL delegation at different parts of the namespace
– Quota control at the leaf of the namespace
- What current considerations (if any) related to namespace management, performance or functionality is preventing you from deploying more servers?
- What functionality would let you expand the usage of global, geo-distributed namespaces?
Join the conversationAdd Comment | https://blogs.technet.microsoft.com/filecab/2007/07/23/a-questionnaire-for-our-dfs-namespaces-users/ | CC-MAIN-2016-30 | refinedweb | 336 | 56.96 |
Biggest Changes In C++11 (and Why You Should Care) 385
Esther Schindler writes "It's been 13 years since the first iteration of the C++ language. Danny Kalev, a former member of the C++ standards committee, explains how the programming language has been improved and how it can help you write better code."
13 years? (Score:2, Interesting)
Re: (Score:3)
A post under you already explained it: it refers to the first ISO standard for C++. And yeah, I thought the same thing: "Really? Only 13 years?"
Re:13 years? (Score:5, Funny)
What struck me was that C++ got lambda expressions before Java did!
Re: (Score:3)
C++ have had lambda expressions for more than a decade, they were just really inconvenient to declare and use. Now it is easier.
Re: (Score:3)
i've never heard of such a thing. Then again I'm not (primarily) a c++ guy. Care to provide an example of lambdas before C++ 2011?
Re:13 years? (Score:4, Informative)
The previous C++ standard, C++98, is 13 years old, as the name implies.
Re: (Score:3)
RAII is not important to me and does not implement anything I can't do myself, with extremely fine grained control. To "express" this functionality, I check my allocations, I check my return codes and states, I make sure I provide same, I only do new things based on the known state of things I've already done, and I set up state-testing, abort-capable monitors for operations that might conceivably go open-ended. I try really,
13 years? (Score:5, Informative)
The article is probably referring to the first finished C++ ISO standard, 14882:1998. Hardly the "first iteration" of the language.
Re:13 years? (Score:4, Informative)
After years of development, the C++ programming language standard was ratified in 1998 as ISO/IEC 14882:1998
C++ didn't exist as a standardized language till 13 years ago. It was in development before then.
Re: (Score:3)
It was developed by Bjarne Stroustrup starting in 1979
the C++ programming language standard was ratified in 1998
So you're saying the "first iteration" took 19 years. You must use the word "iteration" differently at your shop than we do at mine.
Re: (Score:2)
Re: (Score:3)
No. He used templates and built it statically. It finished building on Unix in 81. The compiler didn't finish on windows until 98 because of all of the constant win api changes.
Re:13 years? (Score:5, Insightful)
I have to lol at the "constant win api changes" statement. The Win API bends over backwards for backwards compatibility. In Unix, especially Linux, outside of POSIX (which is fairly limited in functionality), backwards compatibility is almost a 4 letter word.
Re:13 years? (Score:4, Funny)
Re: (Score:3)
Re:13 years? (Score:4, Funny)
Now that's waterfall development!
Re: (Score:2)
Nope, the zeroth iteration.
Nice but... (Score:3, Interesting)
Would love to use these features in the new C++, but unfortunately none of the major compilers support the new for-syntax, in-class initialization, deleted members, and explicit specification of base class methods.
Also I totally don't understand why enum class no longer casts to ints... it totally makes using binary flags impossible unless I revert to using the old-style enums. But then I need to do the ugly namespace myenums { enum myenum { foo = 4, bar = 8, ... }; } hack which makes nesting inside classes impossible -_-
Re: (Score:2, Insightful)
If you are putting binary flags in enums then you are using them wrong. And that's why you are no longer allowed to do so. It's a GOOD thing.
You want bitfields: [microsoft.com]
Re:Nice but... (Score:4, Insightful)
Bitfields are not as flexible when you want to change several different bits (belonging to different fields) in a word using a single write.
Re: (Score:2)
Re:Nice but... (Score:4, Interesting)
What's so wrong with "const int SOME_BINARY_FLAG = 0xff00ff"?
Re: (Score:2)
Agreed. Low level language features exist for a reason, but their use (outside of a few specialized fields where bit-level optimization is preferred to code-readability-and-use optimization) is limited. Making it slightly less convenient (ie: use an int like we always have) for a (relative) few, while encouraging the many to use the more comprehensible variant, is not a bad idea.
Re: (Score:3)
C++11 has fixed that with the inclusion of the constexpr keyword, which allows you to define compile time constants, functions, and expressions.
For example:
constexpr uint32_t some_binary_flag = 0x8000000u;
constexpr int add(int x, int y) {
    return x + y;
}
Re: (Score:3)
Speaking as someone who works on a C++ compiler - you're entirely wrong, for several reasons. The const qualifier means that the compiler can assume that it doesn't change. It can, in the first constant propagation pass, replace all loads of the constant with its value, and can then remove the memory allocation if it's static qualified, because it has no users. In fact, if it's static qualified and not const then, at a higher optimisation level, most compilers will work out that it never has its address
Re: (Score:2)
It is the most used way, not the best one. It is an inappropriate way. Even if bitfields have drawbacks, simple constants defined in a namespace remain a better choice.
It depends on what you mean by the word "best." There must be some reason that doing it this way is the most used way. Evidently, for many, it seems that it is the "best" way to do it. Since "best" is such an ambiguous word, maybe it would be better to elaborate on why it is inferior? Does it use more memory? Is it slower to execute? Is it more difficult to maintain? Depending on one's needs, the answer to any of those questions might influence what "best" means.
Re: (Score:2)
"There must be some reason that doing it this way is the most used way. "
Probably laziness. The fact that a lot of people use it does not make it good: see: Access.
Best as in: It's poor engineering to use them. If you don't understand that, please stick to VB.
Re: (Score:2)
Not if the bitfields are volatile, which is common for hardware registers.
Bitfield layout portability or lack thereof (Score:2)
I was under the impression that bitfield layout was implementation-defined, and that's undesirable if I want my project to compile on different architectures. The page you linked includes the phrase "Microsoft Specific" around anything mentioning layout, which illustrates my point.
I was also under the impression that code generated by compilers for reading and writing bitfields was still dog slow. Has this changed?
Re: (Score:2)
You are basically saying. "I don't want C++, I want C!"
Re: (Score:2)
Re: (Score:2, Interesting)
Re:Nice but... (Score:5, Insightful)
Re: (Score:3)
This is definitely true. Concepts were supposed to fix this problem, but they were booted out at the last minute. Like most compiler errors though, you start associating those walls of text with common errors after a few times. STL isn't the worst offender by far in my opinion though; Boost is. Most of Boost is hard to really consider C++ anymore, it's template metaprogramming. C++ is there in the core, but if you show a circa 1998 (IE pre-templates) developer most boost libraries, they probably wouldn't re
Re: (Score:3)
Strong typing without type inference is just a stupid idea that should never have been allowed in a production language.
Re: (Score:2)
You're right, that is very cool.
Re: (Score:2)
they decided to make the name Y2K compliant this time
Yes, because a lot of people might be confused, thinking the standard came out in 1911.
;-)
Regarding the type inference, I always found that to be a glaring deficiency in STL. I always thought Borland's style of implementation was much better to use, although I can understand it wasn't as flexible or fast as templates. I did the same thing in my own class library back before I used templates (and they were standard), so I could do for-loops like this:
Re: (Score:3)
If your collection type has
.begin() and .end() you can do
for (auto& v : collection) {
... }
...and 'v' will be a reference to the value_type of the collection that refers to the element currently being iterated over. It's basically short for
for (auto i = collection.begin(); i != collection.end(); ++i) {
    auto& v = *i;
    ...
}
You can of course also use it without the &, but then you make a local copy of every element.
Re: (Score:2)
As a long time C++ guy (Borland C++ days), I look at some of these features and think "so what?" (Lambda functions, please.) [...] the STL and made my life much easier - after I got the hang of the way the STL implemented things such as "iterators" and the gotchas associated with them.
When you use iterators, then you probably use algorithms like transform or for_each. for these, lambdas are a perfect supplement.
Re:Nice but... but nothing. They are useful. (Score:3)... I stopped reading at this point...
Let me stop you right there for a second. Proper utilization of lambdas and closures pretty much make a lot of design patterns (template, strategy, visitor, for example) unnecessary in many contexts. This obviously will have an impact in how you write/use templates. Furthermore, lambdas and closures help with shrinking class hierarchies even further than delegation alone.
Yes, we had functors before, but now you neither need to define functors classes/structs defining 'operator()', nor have to invoke new o
Re:Nice but... (Score:5, Informative)
Re: (Score:3)
You can do this in C++ pretty easily by overloading the bitwise OR operator on the enum. Thus, when you have an enum type that you know is safe to manipulate as a set of orthogonal (or mostly orthogonal) bitfield values (such as the macros that define flags for open(), for example), you can add the appropriate "operator|" to work on that type.
Example:
#include <iostream>
using namespace std;
enum foo
{
WILMA = 2,
BARNEY = 4,
BETTY = 8
};
foo operator | (foo a, foo b)
{
    return static_cast<foo>(static_cast<int>(a) | static_cast<int>(b));
}
Cruft removed? (Score:5, Insightful)
I really like that they added new stuff to the language but
...
Have they *removed* anything at all from it? That's the only way I could get interested in that language again.
See wikipedia (Score:3, Informative) [wikipedia.org]
Re: (Score:2)
Thanks. I wish that list was longer, but it's better than nothing.
Re: (Score:2)
They can't because it would break people's code and most people get upset when that happens.
In a Google Talk about the Go language, Rob Pike made a snide remark about how they made the recent iteration of Go smaller unlike some other languages. The only reason they can get away with that is because there's very little Go code out there.
In contrast, there's tons of C++ and Java code out there. Both languages have cruft that I'd like to see removed too, but it
Time for a flag day (Score:2)
They can't because it would break people's code
Then perhaps C++ and Java are overdue for a flag day [catb.org]. Visual Basic had one from VB6 to VB.NET, and Python more recently had one from Python 2 to Python 3.
Re: (Score:2)
Adding new features makes the code non-backwards compatible. Removing them does the same thing.
I've seen it happen in other languages (ruby, for example) and it wasn't the end of the world. If you really, really need to keep compiling old code, you just use an older compiler.
Looks cool... (Score:2)
I am glad that C++ is still evolving. That last major improvement I remember was the addition of the string class. Then shortly after that my professional focus moved away from C and C++ and towards higher level languages (.NET, PHP, Python, Java, etc...). I just recently started my own personal project so I decided to relearn C++ again, and I noticed there is a fair amount of new stuff that wasn't there before (or I was never taught)
Re: (Score:2)
I am really impatient to try this new language, but previous experience suggests I will have to wait at least 3 years before seeing a standard compliant compiler.
Re: (Score:3)
GCC has it mostly implemented (the new concurrency features remain MIA along with a few other bits) already, including the variadic templates the GP mentions. [gnu.org]
Oh Snap. (Score:2)
So... (Score:2)
Looks like C# from a few years ago. Honestly, it's really good that they're moving C++ forward, it's been lacking these features when other languages have embraced them for some time. I see they still use a plethora of ugly ass underbars, though.
Re: (Score:2)
There's no reason for all languages to look the same.
Re: (Score:2)
It makes me feel like I'm still programming in the 1980s. It's old and ugly. It's not a required "look", it's just an ancient custom. The compiler works just fine without underbars in names.
Re: (Score:2)
..Different languages are different for a reason, they do different jobs
...
C# is trying to be the universal very high level language that embraces all paradigms and can be used for everything
...
C is still a low level language allowing to program low level
C++ is stuck in the middle and so should cover the ground where you want to abstract away from the machine, but don't want a VM getting in the way; if it moves too far towards C# it will start to be ignored
...
Both C# and C++ seem to be falling into the t
Re: (Score:2)
I don't think there is any danger of that. You can make a C++ program just as low level as one in C, if you want. You can still compile any C code with a C++ compiler in C++ mode with generally only very minor tweaks to the expressions, and none of the tweaks changing the binary code generated. You can use as much of the added C++ features as you want; anywhere from none of them to all of them. In an ideal world there would be no reason to use a straight C compiler any more. The ways real life departs
Re: (Score:2)
Except when the C code uses C++ keywords. Words like new, this and delete are common enough that there are plenty of C programs using them.
Another good reason to use a C compiler to compile C code is that the compiler is much simpler, and most likely has fewer bugs.
Re: (Score:2)
C# is trying to be the universal very high level language that embraces all paradigms and can be used for everything
...except non-PC, non-Microsoft platforms. (For example, MonoTouch and Mono for Android are still priced prohibitively for hobbyists.)
But... (Score:5, Insightful)
This is news for nerds. Stuff that matters. I thought
/. abandoned this stuff ages ago...
Re: (Score:2)
No, not at all. There are just as many articles like this as there were 12 years ago... there's just a lot more political crap, so the ratio has changed.
Biggest Change? (Score:5, Funny)
Alternative syntax (Score:3)
Re: (Score:3)
Re: (Score:3)
Re:Alternative syntax (Score:4, Informative)
Would it be that painful to add a lambda keyword?
Yes, actually. Adding keywords to a language is problematic, because lots of existing code will use them as identifier names. If you add a lambda keyword then you break any existing code that contains a variable, function, or type called lambda. C99 had some ugly hacks to get around this for bool: the language adds a __bool type, and the stdbool.h header adds macros that define bool, true, and false in terms of __bool.
Re: (Score:2)
Even worse, it's _Bool with one underscore and uppercase B. If you #include<stdbool.h> you get a #define bool _Bool which you can undefine if you really need the bool name for your own code.
See: stdbool.h [opengroup.org]
Re: (Score:2)
Congrats you just broke every C++ program that used "lambda" (or whatever keyword you chose) as a variable/class/whatever name.
And yes, while it may seem minor to you and something that can be fixed with search-n-replace, that's something to be avoided at almost any cost.
Re: (Score:2)
Anyone who used 'lambda' as a variable, class, or method name should be drummed out of the business. Seriously.
Re: (Score:2)
That's "std::string" to you, bud. Library names get put into a namespace like "std", but you can't do that with keywords.
If your program uses "string" as an identifier, then don't have "using namespace std;" or "using std::string;" in your program, and refer to the C++ library string as "std::string" everywhere you use it. It's a little awkward, but you don't have to figure out which identifier is which. Using "lambda" or "function" as a keyword means these are interpreted as keywords by the parser, a
Design by Committee (Score:3)
Re: (Score:2)
Huge indeed, but judging by the results, not particularly good.
A doc that needs to be read by more people is the C++ Frequently Questioned Answers [yosefk.com].
Re: (Score:2)
A doc that needs to be ignored is the FQA. The author seems to have some sort of thing about denigrating C++, and either doesn't know what he's talking about or lies a lot.
I went through one random section carefully. (If necessary, I can go through another one carefully and report.) It's bad.
Does TFA actually explain things? (Score:4, Insightful)
It's been a while since I've had to do any C++, so maybe I'm just missing something, but it seems like either there's a lot of retarded functionality here, or there's a lot of TFA which introduces a feature, even motivates it, but doesn't actually explain what the new version looks like. For example, with "Rvalue References":
Ok, first, what? I thought standard library string implementations were supposed to be efficient, and include some sort of copy-on-write semantics, which would (I would hope) make the above a shuffle-pointer-around instruction instead of a copy-data-around instruction.
Second, here's the newer, better syntax:
Regarding the first comment, no, I really don't, unless the point is that this is what the code for "moving" would look like if implemented in older versions of C++. But also:
Ok, cool... But where is this used in the "moveswapstr" example? Does this make the "naiveswap" example automagically faster? Or is there some other syntax? It doesn't really say:
The C++11 Standard Library uses move semantics extensively. Many algorithms and containers are now move-optimized.
...right... Still, unless I actually know what this means, it's useless.
It looks like there's a lot of good stuff here, and the article is decently organized, but the actual writing leaves me balanced between "Did I miss something?" like the above, and enough confusion that I'm actually confident the author screwed up. For example:
Great! Awesome! Of course, this arguably should've been there to begin with, and the 'auto' in front of these variables is still annoying, coming from dynamically-typed languages. But hey, maybe I can write this:
Instead of:
It's almost like C++ wanted to deliberately discourage abstraction by making it as obnoxious as possible to use constructs like the above. Anyway, that's what I expected the article to say, but instead, it says this:
Re: (Score:3)
The article messed up in a number of places as you surmise. You can use auto in for-loops. I don't know why the example didn't show that properly (I was scratching my head).
As for the swap example, what ends up happening is that with move semantics, you can go back to using the naive version, but it will actually be efficient because under the hood, it uses the rvalue references instead of copying. It will behave as if it were written using all that "pseudo-code" (don't know why he called actual code "ps
Re: (Score:3)
Ok, first, what? I thought standard library string implementations were supposed to be efficient, and include some sort of copy-on-write semantics, which would (I would hope) make the above a shuffle-pointer-around instruction instead of a copy-data-around instruction.
The string implementation is just an example. The current problem is that it is difficult to implement move semantics without having a special case for every class. The introduction of rvalue references enables a class to implement both copy and move semantics in a way that lets other classes transparently move the objects around. The move semantics are that the content is copied but the original need not be preserved (since often the original is a temporary). It enables other code (template libraries for c
Re: (Score:3)
Second, here's the newer, better syntax:
Regarding the first comment, no, I really don't, unless the point is that this is what the code for "moving" would look like if implemented in older versions of C++.
Yes, that is what the code looks like without move constructors. That has nothing to do with C++11, and yes, it's really ugly.
This that follows is the declaration (not implementation) of a move constructor.
Ok, cool... But where is this used in the "moveswapstr" example? Does this make the "naiveswap" example automagically faster? Or is there some other syntax? It doesn't really say:
Now you can do
Movable foo = bar();
and it'll call the move constructor rather than the copy constructor when constructing foo. The advantage is important. For example, in the case of a string or a vector you don't copy the contents with the copy constructor just to delete the original when the rvalue goes
Re: (Score:3)
std::vector<int> vi;
for (int& i : vi)
    ++i;   // e.g. modify each element in place
Shouldn't they call this... (Score:2)
C+=10
?
Ignore this article (Score:2)
Do not read this article, it makes C++0x look bad by giving terrible examples of the new features. Even features I've been excited about look stupid after reading this. The article shows how *not* to use a lambda expression. A regular for loop would be better here. Using "auto" on int and long does work, but defeats the purpose. The second example of auto doesn't even make sense since it doesn't actually include the word auto. It should be something more like: auto ci=vi.begin();
Just ignore this and g
Why is a garbage collector even needed? (Score:4, Insightful)
What is the big fuss about getting a garbage collector anyway? Why does it even matter? Good C++ code shouldn't need a garbage collector. If memory was allocated within an object then the destructor should be taking care of it. And with shared_ptr (which people should start using) it's taken care of within there anyway. Is this wanted so everyone can start coding sloppy C++ and forget about the delete calls? I suppose for those using some 3rd party library that behaves poorly and is totally out of your control it could be nice to stop that from leaking all over. Still, it should have been done right in the first place.
I suppose there might be some argument for preventing excessive memory fragmentation. Is there some other benefit to having one?
So, why should I care... (Score:2)
...if I'm not a C++ programmer today?
I started as a C programmer in the 1980s. I've used three different object-oriented extensions to C, and C++ was neither the first nor the best of them. I'm not in an industry (like video gaming) that pushes me toward C++ with any pressure at all. Every few years I take a look at C++ and conclude that it's safe for me to continue ignoring it.
Is there anything different this time around that would change this, that's easy to explain to someone who's not already a C++ pr
I stopped caring about C++... (Score:3)
Re: (Score:3)
Yeah but this one goes to eleven!
Re:Still playing catch-up to C#. (Score:5, Interesting)
Your comment caught some flack, but I couldn't help but make a similar observation as I read the spec. It seems that they are adding a lot of stuff to C++ that exists in C# (lambda expressions, delegated constructors, automatic deduction, initialization syntax, a dedicated null keyword, etc).
Of course, they added a bunch of stuff that's also NOT in C# (since it's not necessary in a high-level language like C#), but I am glad that they are revamping C++ to incorporate some higher-level functions. Now we just have to wait for compilers to start adopting the new spec...
Re: (Score:2)
Pretty much everything new in C++ 0x was around in the Boost libraries before showing up in C#. (A lot of the Boost stuff is written by C++ committee members). Standards usually follow years behind use. (The SCSI standard is my favorite for that- you know a particular SCSI rev is obsolete when the standard finally gets ratified).
These days you don't even need Boost: most of the stuff in the standard is already available in most compilers, if you set the right switches. Heck, even Visual Studio has a lot
Re:Still playing catch-up to C#. (Score:5, Insightful)
The saddest part about this whole C++0x ordeal is that they're still just playing catch-up to C#.
True. In particular, C++ is light years behind C# in patent FUD. And C++ hasn't even started work on requirements for a 100MB "managed environment" for users to install before running their apps. Nor have C++ developers chosen a monkey species after which to name its 2nd-class-citizen cross-platform implementation.
Re: (Score:2)
What's the patent FUD, specifically? I'm not talking about some obscure part of the Winforms API, I mean in the core language itself.
Any core language is almost useless without its standard and/or de facto libraries. Microsoft's patent policies on the libraries associated with C# (not just Winforms) are intentionally vague and confusing. Nobody really knows what conditions the patents may be used under, and by whom. This passive-aggressive stance on patents is a standard Microsoft strategy, and it's been pretty effective in general.
And you forget that C++ has a giant environment to install as well, but due to age, that is generally part of the OS as is.
Doesn't matter to the average user. If they have to worry about downloading and maintaining a huge pile of
Re: (Score:2)
You're worried about hassle for the "average user" who runs Sudoku apps. These users are on Windows. For those who are on Mac, Macs have a good enough package installation system that it's not too big of a deal for packages to have prerequisites.
But anyway, why are you citing external factors to somehow prove that C# is an inferior language? Its "goodness" is in no way diminished by the fact that few platforms have implementations written for them.
Re: (Score:3)
Re: (Score:3)
Cause I really enjoyed the latest
.net Framework 3.5 and 4 security updates that were nearly 400 MB... Thank you MS!
I also enjoyed the way it spent over an hour pre-compiling assemblies during the update. Granted, it was on an old machine, but it still would have been ridiculously long on a modern one.
Re: (Score:3, Insightful)
C++0x is C++11. C++0x was a placeholder name until they actually knew what year it would be finalised.
Re: (Score:3)
You know, like how Clash at Demonhead and Mega Man 2 took place in "200X!"
And it looks like they missed their worst-case deadline by 2 years!
Re: (Score:2)
Re: (Score:2)
And it shows they should never again make assumptions on finish dates. They never did get it done during the decade they targeted or assumed.
Re: (Score:2)
In hindsight (I posted the same thing), it occurs to me they can still claim to hit the target range if they go in hex...
But then it should have been C++ 0x0x
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Does it have a 64-bit compiler yet?
Windows Help
windows 7 and outlook 2010
Posted September 10, 2011 at 1:40PM
I have installed MS Office 2010 on a new Windows 7 laptop and set up Outlook using the same info as on my old XP PC. I seem to be able to send emails but am not receiving them. I have also set up Outlook Connector for my Hotmail account and this is working fine. Any ideas why, please?
Posted September 10, 2011 at 6:44PM?
Posted September 11, 2011 at 11:24AM
def no error in entered server details, basic POP3 address and SMTP address; it gives a green tick when send and receive is done, so can't see a problem there
Posted September 12, 2011 at 6:41PM
any other ideas anybody?
Posted September 12, 2011 at 9:57PM
Have you tried creating another account with the same POP3 and SMTP details to see if that works? If yes, then you can delete the original account.
Also, have you configured Outlook to leave messages on the server? This can cause problems downloading emails.
Posted September 13, 2011 at 12:27AM
"it gives a green tick when send and receive is done, so cant see a problem there"
What happens when you try to send an email to yourself?
Posted September 13, 2011 at 5:19PM
If I send an email to myself, it appears on my desktop PC, but not on the lappy. I have deleted and reinstalled Outlook a few times, making sure the details are correct; it has got me stumped.
Posted September 13, 2011 at 6:33PM
Well without any further intervention by me it has suddenly decided to | http://www.pcadvisor.co.uk/forums/29/windows-help/4076985/windows-7-and-outlook-2010/ | CC-MAIN-2014-52 | refinedweb | 310 | 69.15 |
scrapli
Documentation:
Source Code:
Examples:
scrapli -- scrap(e c)li -- is a python 3.7+ library focused on connecting to devices, specifically network devices (routers/switches/firewalls/etc.) via Telnet or SSH.
Key Features:
- Easy: It's easy to get going with scrapli -- check out the documentation and example links above, and you'll be connecting to devices in no time.
- Fast: Do you like to go fast? Of course you do! All of scrapli is built with speed in mind, but if you really feel the need for speed, check out the ssh2 transport plugin to take it to the next level!
- Great Developer Experience: scrapli has great editor support thanks to being fully typed; that plus thorough docs make developing with scrapli a breeze.
- Well Tested: Perhaps out of paranoia, but regardless of the reason, scrapli has lots of tests! Unit tests cover the basics, regularly ran functional tests connect to virtual routers to ensure that everything works IRL!
- Pluggable: scrapli provides a pluggable transport system -- don't like the currently available transports, simply extend the base classes and add your own! Need additional device support? Create a simple "platform" in scrapli_community to easily add new device support!
- But wait, there's more!: Have NETCONF devices in your environment, but love the speed and simplicity of scrapli? You're in luck! Check out scrapli_netconf!
- Concurrency on Easy Mode: Nornir's scrapli plugin gives you all the normal benefits of scrapli plus all the great features of Nornir.
- Sounds great, but I am a Gopher: For our Go loving friends out there, check out scrapligo for a similar experience, but in Go!
Requirements
MacOS or *nix [1], Python 3.7+

scrapli "core" has no requirements other than the Python standard library [2].

[1] Although many parts of scrapli do run on Windows, Windows is not officially supported

[2] While Python 3.6 has been dropped, it probably still works, but requires the dataclasses backport as well as the third-party async_generator library; Python 3.7+ has no external dependencies for scrapli "core"
Installation
pip install scrapli
See the docs for other installation methods/details.
A Simple Example
from scrapli import Scrapli

device = {
    "host": "172.18.0.11",
    "auth_username": "scrapli",
    "auth_password": "scrapli",
    "auth_strict_key": False,
    "platform": "cisco_iosxe",
}

conn = Scrapli(**device)
conn.open()
print(conn.get_prompt())
* Bunny artwork by Caroline Montanari, inspired by @egonelbre.
The bunny/rabbit is a nod to/inspired by the white rabbit in
Monty Python and the Holy Grail, because there
are enough snake logos already!
In part 2 I got RasbBMC installed, and worked out how to get vnc server running. The next goal was to get to know how the music player works, how to create playlists and most importantly hack around with the API so that I can work out how to control it from a python script.
My original intention was to run the pi entirely headless, but then I remembered I had already bought a small LCD monitor for a reversing camera, that I never got round to fitting. Hooked up to a 12 volt power supply and to the PI using the RCA socket, I now have a tiny monitor, which is handy for keeping an eye on what is going on with the player.
Most of the text is illegible at that size/ resolution, so I upped all the font sizes for the default skin:-
cd .xbmc-current/xbmc-bin/share/xbmc/addons/skin.confluence/720p
sudo nano Font.xml
Now all the line-spacing is wrong, so is more legible in some places, but less so in others. I don’t want to get side-tracked right now into learning how to skin XBMC. Maybe at a later date, it might be a nice project to create (or find) something specifically for very small screens.
IInitially I couldn’t hear any Audio, so I had to change the audio settings in XBMC to use analog audio out rather than HDMI.
I spent ages reading up on the api, the documentation is pretty confusing and lacking in examples, but eventually I found some simple examples on a forum showing how to interact with the JSON-RPC API.
I was expecting to find some examples of using the python API “directly”, but couldn’t really find anything (or at least anything complete enough to understand real-world use). I eventually found this python xbmc json client.
With that library downloaded to the pi, I was able to speak to xbmc from the python prompt:-
>>> from xbmcjson import XBMC
>>> xbmc = XBMC("")
>>> print xbmc.JSONRPC.Ping()
{u'jsonrpc': u'2.0', u'id': 0, u'result': u'pong'}
To open a saved playlist:-
xbmc.Player.Open([{'file':'/home/pi/.xbmc/userdata/playlists/music/test.m3u'}])
I was then able to play/pause XBMC and move to the next track from the python prompt with:-
xbmc.Player.PlayPause([0])
xbmc.Player.GoTo([0,'next'])
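Each of these calls boils down to an HTTP POST of a small JSON-RPC 2.0 body; a sketch of what gets built per request (the helper name is mine, not part of xbmcjson, which also handles HTTP auth and response parsing):

```python
import json

def jsonrpc_body(method, params=None, request_id=0):
    """Build the JSON-RPC 2.0 body that a call like
    xbmc.Player.PlayPause([0]) ultimately POSTs to /jsonrpc."""
    body = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        body["params"] = params
    return json.dumps(body)

print(jsonrpc_body("Player.PlayPause", [0]))
# {"jsonrpc": "2.0", "method": "Player.PlayPause", "id": 0, "params": [0]}
```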
Before I worked out how to open a saved playlist via the API, I noticed that the above play/pause command doesn’t work if the player hasn’t been opened yet i.e. when the Pi has been rebooted. I found a way of opening a playlist and pausing at XBMC startup, by creating a file called autoexec.py in /home/pi/.xbmc/userdata with the following in it:-
import xbmc
xbmc.executebuiltin("PlayMedia(/home/pi/.xbmc/userdata/playlists/music/all.m3u)")
xbmc.executebuiltin( "XBMC.Action(Pause)" )
This is run by XBMC at startup, and doing so ensures that the player has a playlist loaded and ready to play at startup. Currently not sure where this script imports xbmc from or how I could have this available to my own application scripts and managed by XBMC, but there must be a way. Either way I seem to be able to do most of what I need with the JSON-RPC API currently, with the exception of shuffle. The following command toggles shuffle on and off:-
xbmc.Player.SetShuffle({"playerid":0, "shuffle":"toggle"})
The JSON response says "OK", but annoyingly it has no effect on the XBMC player. When operating XBMC manually you can open a saved playlist, and at any point click the shuffle button to toggle shuffle on and off, and the next track will either be in order, or "random" from a shuffled list. I was hoping that this toggle on the API would have the same effect, but apparently not. I think I may have to consult the XBMC forum, once I've put together a precise list of commands that I've tried, and checked exactly which version of XBMC and API I am using (you know how forums can be if you go asking questions without providing enough background info!).
If it turns out that this is a bug and there is no solution for shuffling the current playlist with the API, I can envisage a workaround involving dynamically creating a shuffled version of each of the saved playlists. As I intend to create some kind of playlist management app for this (outside of XBMC), it would be a simple step on from that.
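That workaround could be sketched like this (my code, just an illustration): shuffle the entries of a saved .m3u and write the result to a new file that XBMC is then pointed at, instead of relying on the shuffle toggle.

```python
import random

def shuffled_playlist(m3u_text, seed=None):
    """Return a shuffled copy of an .m3u playlist's entries.

    Comment lines (#EXTM3U etc.) are kept at the top; only the track
    entries are shuffled. In practice the result would be written to a
    new .m3u file for XBMC to open.
    """
    lines = m3u_text.splitlines()
    comments = [l for l in lines if l.startswith("#")]
    entries = [l for l in lines if l and not l.startswith("#")]
    random.Random(seed).shuffle(entries)
    return "\n".join(comments + entries)

original = "#EXTM3U\n/music/a.mp3\n/music/b.mp3\n/music/c.mp3"
print(shuffled_playlist(original, seed=1))
```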
So, besides working out a solution for shuffle, the next step is to connect the Piface digital I/O expansion board, and work out how to trigger API stuff in response to button presses.