Here we will write a Java program to check whether the input year is a leap year or not. Before we see the program, let's see how to determine mathematically whether a year is a leap year:
A year is a leap year if it is evenly divisible by 4, unless it is a century year (evenly divisible by 100), in which case it must also be evenly divisible by 400. For example, 2000 and 2004 are leap years, while 1900 and 2001 are not.
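Expressed in code, the rule is a single boolean test. A minimal Java sketch (class and method names are mine):

```java
// A year is a leap year if it is divisible by 4, except for century
// years (divisible by 100), which must also be divisible by 400.
public class LeapYear {
    static boolean isLeap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    public static void main(String[] args) {
        System.out.println(isLeap(2000)); // true  (divisible by 400)
        System.out.println(isLeap(1900)); // false (century year, not divisible by 400)
        System.out.println(isLeap(2004)); // true
        System.out.println(isLeap(2001)); // false
    }
}
```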
Example: Program to check whether the input year is leap or not
Here we are using the Scanner class to get the input from the user, and then we are using if-else statements to write the logic that checks for a leap year. To understand this program, you should have knowledge of the following concepts from the Core Java Tutorial:
→ If-else statement
→ Read input number in Java program
import java.util.Scanner;

public class Demo {
    public static void main(String[] args) {
        int year;
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter any Year:");
        year = scan.nextInt();
        scan.close();
        boolean isLeap = (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
        if (isLeap) {
            System.out.println(year + " is a Leap Year.");
        } else {
            System.out.println(year + " is not a Leap Year.");
        }
    }
}
Output:
Enter any Year:
2001
2001 is not a Leap Year.
In our project, we have the Authorize.NET payment provider, which is configured to process a payment as "Sale". After the user places an order, the customer should be able to cancel the order and receive a refund. But canceling an order does not perform any refund operations.
I found in the documentation that it is possible to create a return for a completed order, but there is no way to create a refund for a canceled order.
Is there any other way to make a "refund" for a canceled order?
If I remember correctly, the Cancel button will trigger the refund when the payment(s) are processed. Payment methods will be called with the Credit type so they know they should refund.
The reason for that was that in the case of canceled orders the full amount should be refunded, unlike returns (where there might be a partial return).
I'll have to check for sure, but that's from my memory.
That's what it should do but I tried multiple times and it did not. Nothing happens.
I did check with our QAs and they confirmed that in Commerce Manager, when you cancel an order the payment will not be automatically refunded - you will have to do it manually in the payment gateway.
Yes, that sounds a little weird, but it has been that way since long ago. We are unlikely to change the default behavior, as it involves new development in Commerce Manager. But you can of course listen to the order events and act accordingly.
Ok, thanks for the info!
SpellCheck.NET is a free online spell-checking site. Whenever I need to check my spelling I visit this site, so I decided to write a parser for it. I wrote the parser in C# and wrapped it up in a DLL file called Word.dll. In this article I will show you how to parse an HTML page using regular expressions. I will not explain all of the source code since it is available for download; my main purpose in this project is to demonstrate how to parse an HTML page using regular expressions.
Before this project I had never worked with regular expressions seriously, so I decided to use them here. Along the way I learned a lot about C# regular expressions and the .NET Framework. The difficult part of this project was writing the regular expression patterns, so I referred to different sites and books to get the patterns right.
Here are some useful sites to check out.
Word.dll has one public class and two public methods.
Add a reference to Word.dll and include "using Word;" at the top of your file.
SpellCheck word = new SpellCheck();
This method checks the word and returns true if the spelling is correct, false otherwise.
bool status = false;
status = word.CheckSpelling("a word");
This method will return the collection of suggested words.
foreach(string suggestion in word.GetSpellingSuggestions("a word"))
{
System.Console.WriteLine( suggestion );
}
The parser uses three regular expression patterns:
- To detect the result: @"(correctly.)|(misspelled.)"
- To locate the suggestions section: @"(suggestions:)"
- To capture the suggested words: @"<blockquote>(?:\s*([^<]+) \s*)+ </blockquote>"
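To make the parsing approach concrete, here is a small self-contained sketch in Java of the third pattern's job: extracting the words inside a <blockquote> element. The pattern is simplified and the sample HTML is invented for the demo; it is not the site's actual markup:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SuggestionParser {
    // Pull the suggested words out of the <blockquote>...</blockquote>
    // section of a spell-checker results page. The pattern is a
    // simplified adaptation of the article's C# version.
    static List<String> suggestions(String html) {
        List<String> out = new ArrayList<>();
        Matcher block = Pattern.compile("<blockquote>(.*?)</blockquote>",
                                        Pattern.DOTALL).matcher(html);
        if (block.find()) {
            for (String word : block.group(1).trim().split("\\s+"))
                out.add(word);
        }
        return out;
    }

    public static void main(String[] args) {
        String html = "The word is misspelled. suggestions:"
                    + "<blockquote>yours youse yes</blockquote>";
        System.out.println(suggestions(html)); // [yours, youse, yes]
    }
}
```

The real page's markup may differ; the point is the two-step approach of first locating the result marker, then capturing the suggestion block.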
Source file is included in zip format for download.
Calling Word.dll wrapper class:
This is how you would call this wrapper class in your application.
using System;
//Word.dll
using Word;
/// <summary>
/// Test Harness for SpellCheck Class
/// </summary>
class TestHarness
{
/// <summary>
/// testing Word Class
/// </summary>
[STAThread]
static void Main(string[] args)
{
SpellCheck word = new SpellCheck();
bool status = false;
string s = "youes";
Console.WriteLine("Checking for word : " + s );
// check to see if the word is not correct
// return the bool (true|false)
status = word.CheckSpelling(s);
if (status == false)
{
Console.WriteLine("This word is misspelled : " + s);
Console.WriteLine("Here are some suggestions");
Console.WriteLine("-------------------------");
foreach( string suggestion in word.GetSpellingSuggestions(s) )
{
System.Console.WriteLine( suggestion );
}
}
else if (status == true)
{
Console.WriteLine("This word is correct : " + s );
}
}
}
Run the "compile.bat" file at the DOS prompt; it will create the necessary files.
This is how your screen would look after you execute TestHarness.exe.
Thread: Multipleton Pattern (maybe?)
Originally Posted by OfficeOfTheLaw
PHP Code:
<?php
function getDatabaseConnection()
{
static $instance;
if (!isset($instance)) {
$instance = new Database;
}
return $instance;
}
?>
Anyhow, it seems to me he simply wants a Singleton (i.e., only one instance at a time available, globally) that can have the instance in question changed. Maybe I just don't understand what he is doing.
No matter how you slice it, you can still call functions statically in PHP4, and as such something like this doesn't give you anything that just calling said functions statically doesn't do, aside from some debatably useful CS purity (because you couldn't modify the variable pointing to the most recently created instance as easily).
Of course, any argument that way is shot in the foot by PHP4's most glaring flaw--namely in that objects are more or less just arrays, and I can change the value of any given element at any time I want. Singletons (and their variants, like what you're trying to do here) exist for the sole purpose of making sure that there's only one instance of some particular item, because if there were more than one, it would cause problems.
Generally speaking, Singletons have limited usefulness in this regard, as a class that actually has problems when there is more than one around is either badly programmed, or hardware-limited (semaphores, hardware drivers, and the like are candidates for this sort of thing, although it's extremely rare to encounter those in OOP anyway).
PHP singletons are slightly more useful because they can be used as a hack to solve the lack of namespaces. (Class::Function() instead of Class_Function or ClassFunction or whatever).
Ultimately, the question is this: What tangible, useful advantage would a construct like this actually add to your code? What situations would you encounter where this would be useful in real life?
>> Singletons (and their variants, like what you're trying to do here) exist for the sole purpose of making sure that there's only one instance of some particular item
They also exist to make that single instance "global" without resorting to $GLOBAL['object'] or global $object.
>> A Registry simply makes more sense to me
>> Sorry, I read the thread, but don't really get what you are meaning to do that isn't done by the registry pattern
As I've said several times in this thread, I believe a Registry is a much better approach when one is using multiple objects. Just as I believe a "Singleton Registry" is better than having multiple Singletons.
However, IMO a Registry is overkill for cases of just a single object that can carry multiple instances (dropping the old one when a new one is created), just as IMO, it would be overkill for a case where one needs a single object with a single instance. A singleton is better suited in the latter case, IMO, unless one just likes to needlessly wrap their classes around other classes just because it makes them feel smarter.
As I see it:
- single object, single instance = singleton
- multiple objects, single instance of each = "singleton registry"
- multiple objects, single or multiple instances = registry
- single object, multiple instances = .... nothing?
This is what I came up with to fill the void in a case where I needed it, and nothing already exists to fill it (a registry makes it overflow). That is all it is and nothing more.
>> What tangible, useful advantage would a construct like this actually add to your code?
That is up for you to decide.
>> What situations would you encounter where this would be useful in real life?
I have already described several times the "real life" situation for why I created this. I did not come up with it just for the sake of coming up with it. Whether or not it is useful to you or anyone else, that is up to you and you alone to decide. Some things we have to walk on our own.
>> Would a term more like Writable Singleton make more sense than Multipleton. It isn't holding multiple instance, but rather you can change the instance the singleton is holding and discard the old one (in other words, the Singleton is no longer read-only).
Yes precisely. That is exactly what I needed, and why I came up with this. Finally someone "gets it"... you seem to be one of the few to have actually read the thread. (sorry, but it just gets damn annoying when people continually ask "why not use a registry?", a question to which I have answered several times now).
Singletons limit objects to one instance (once created you cannot change it). Registries are for multiple objects (IMO). I needed a single object whose instance could be changed, and whose current instance could always be returned (to be "global"). Singleton was too limited. Registry was too much. So I did this. Never claimed it would useful to anyone else or to any other circumstances though...
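For comparison, here is the same "writable singleton" shape sketched in Java (all names are mine, not from this thread): a single class whose one instance can be replaced, with the current instance always available globally.

```java
// A singleton whose single instance can be replaced: creating a new
// instance discards the old one, and instance() always returns the
// current one.
public class Current {
    private static Current instance;
    private final String config;

    private Current(String config) { this.config = config; }

    public static Current renew(String config) {
        instance = new Current(config); // the old instance is dropped
        return instance;
    }

    public static Current instance() { return instance; }

    public String config() { return config; }

    public static void main(String[] args) {
        Current.renew("first");
        Current.renew("second");
        System.out.println(Current.instance().config()); // second
    }
}
```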
I always thought this was trivial.
PHP Code:
class Multipleton
{
private static $instance = NULL;
protected function __construct($param) {/*...*/}
public static function instance() { return self::$instance; }
    public static function create($param) // 'new' is a reserved word in PHP and cannot be a method name
{
self::$instance = new self($param);
return self::$instance;
}
}
BTW and IMO, a Singleton that can hold many instances of itself has/is/uses a singleton registry. It's just that this one only holds one at a time - but I don't think that necessarily makes it anything more than a Singleton.
>> And it's essentially the same in PHP4. What's the fuss all about?
Yeah that is how one could do it in PHP 5. As I've said a million times already, like everything else I've said since people can't read I guess, it is trivial and far easier in PHP 5. But PHP 4 doesn't support static class properties, so it becomes trickier to do the same essential thing.
Sorry, having a hard time reading your thoughts.
Class access
Posted on March 1st, 2001
That is, if the name of your library is mylib any client programmer can access Widget by saying
import mylib.Widget;
or
import mylib.*;
However, there’s an extra pair of constraints:
- There can be only one public class per compilation unit (file). The idea is that each compilation unit has a single public interface represented by that public class. It can have as many supporting “friendly” classes as you want. If you have more than one public class inside a compilation unit, the compiler will give you an error message.
- The name of the public class must exactly match the name of the file containing the compilation unit, including capitalization. So for Widget, the name of the file must be Widget.java, not widget.java or WIDGET.java. Again, you’ll get a compile-time error if they don’t agree.
- It is possible, though not typical, to have a compilation unit with no public class at all. In this case, you can name the file whatever you like.[26] [27] Here's an example:
A method can also return a handle to an object, which is what happens here: the method returns a handle to a newly created object of the class. Handles will be discussed later in this book.
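A minimal sketch of that idea (class and method names are mine, not the book's): the constructor is private, so the only way for outside code to obtain an object is through the class's own static method, which returns a handle to a new object.

```java
// A class whose objects can only be created through its own static
// method: the constructor is private, so code in other classes must
// go through create(), which returns a handle to the new object.
public class Restricted {
    private Restricted() {} // no construction from outside this class

    public static Restricted create() {
        return new Restricted();
    }

    public static void main(String[] args) {
        // Restricted r = new Restricted(); // would not compile in another class
        Restricted r = Restricted.create(); // the only way in
        System.out.println(r != null); // true
    }
}
```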
[26] Actually, a Java 1.1 inner class can be private or protected, but that’s a special case. These will be introduced in Chapter 7.
import com.sleepycat.db.*;
public interface DbSecondaryKeyCreate { public abstract Dbt secondary_key_create(Db secondary, Dbt key, Dbt data) throws DbException; } public class Db { ... public int associate(Db secondary, DbSecondaryKeyCreate secondary_key_create, int flags) throws DbException; ... }
The Db.associate method is used to declare one database a secondary index for a primary database.
The associate method should be called on a database handle for the primary database that is to be indexed. The secondary argument should be an open handle for the secondary database. If the secondary database was opened with the Db.DB_THREAD flag, it is safe to use it in multiple threads of control after the Db.associate method has returned. Note also that either secondary keys must be unique or the secondary database must be configured with support for duplicate data items.
The callback argument should refer to a callback function that creates a secondary key from a given primary key and data pair. When called, the first argument will be the secondary Db handle; the second and third arguments will be Dbts containing a primary key and datum respectively; and the fourth argument will be a zeroed DBT in which the callback function should fill in data and size fields that describe the secondary key.
If any key/data pair in the primary yields a null secondary key and should be left out of the secondary index, the callback function may optionally return Db.DB_DONOTINDEX. Otherwise, the callback function should return 0 in case of success or any other integer error code in case of failure; the error code will be returned from the Berkeley DB interface call that initiated the callback. Note that if the callback function returns Db.DB_DONOTINDEX for a key/data pair, that pair is simply omitted from the secondary index.
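The callback's job can be illustrated outside Berkeley DB. This plain-Java sketch (record layout and names invented for the demo) mimics what a secondary_key_create implementation typically does: derive the secondary key from the primary key/datum pair, or signal "do not index", modeled here by returning null in place of Db.DB_DONOTINDEX.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of what a secondary-key callback conceptually does.
// Primary records map an ID to "lastName|firstName"; the secondary
// index maps lastName back to the primary key.
public class SecondaryIndexSketch {
    // Mimics secondary_key_create: derive the secondary key from the
    // primary key/datum pair, or null for "do not index this pair".
    static String secondaryKey(String primaryKey, String datum) {
        String lastName = datum.split("\\|")[0];
        return lastName.isEmpty() ? null : lastName; // null ~ DB_DONOTINDEX
    }

    public static void main(String[] args) {
        Map<String, String> primary = new LinkedHashMap<>();
        primary.put("1", "Knuth|Donald");
        primary.put("2", "|Anonymous"); // yields no secondary key
        Map<String, String> secondary = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : primary.entrySet()) {
            String sk = secondaryKey(e.getKey(), e.getValue());
            if (sk != null) secondary.put(sk, e.getKey());
        }
        System.out.println(secondary); // {Knuth=1}
    }
}
```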
The callback argument may be NULL if and only if both the primary and secondary database handles were opened with the Db.DB_RDONLY flag.
The flags value must be set to 0 or the following value:
If the secondary database has been opened in an environment configured with transactions, each put necessary for its creation will be done in the context of a transaction created for the purpose.
Note that care should be taken not to use a newly-created secondary database in another thread of control until the Db.associate call has returned successfully in the first thread.
The Db.associate method may fail and throw an exception encapsulating a non-zero error for the following conditions:
The secondary database handle has already been associated with this or another database handle.
The secondary database handle is not open.
The primary database has been configured to allow duplicates.
The Db.associate method may fail and throw an exception for errors specified for other Berkeley DB and C library or system methods. If a catastrophic error has occurred, the Db.associate method may fail and throw a DbRunRecoveryException, in which case all subsequent Berkeley DB calls will fail in the same way. | http://doc.gnu-darwin.org/api_java/db_associate.html | CC-MAIN-2018-51 | refinedweb | 467 | 51.58 |
The modules in this section provide a mechanism, based on an expression language, whereby the document author can create more complex logic. The context in which the expressions are evaluated is as follows:
Note that "out of scope" here refers only to the scope of the State Test module, which only involves scalar expressions; the expression context is more fully defined in the Language Profile because it includes the State Test module as well.
This section is informative.
The SMIL 3.0 UserState Module defined in this document is a new module which was not part of the SMIL 2.1 specification.
This section is informative.
This section introduces a data model that document authors can use. The module defines three elements, state, setvalue and newvalue, and four attributes, among them language. The state element holds the data model and may specify when it becomes active. The newvalue element accepts the ref, name and value attributes; which of these are required depends on the expression language. The ref attribute specifies where the new value is inserted. For expression languages where ref can refer to a new item, name can be omitted.
This section is informative.
For the SMIL 3.0 Language Profile the value of the ref attribute is an XPath expression that must evaluate to a node-set. It is an error if the node-set does not refer to exactly one node. The inline content of the state element must form a valid XML document; therefore it should have at most one child element. Alternatively, if the src attribute is used to specify an external XML document, the state element itself should be empty.
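The "must evaluate to a node-set containing exactly one node" requirement can be tried out with the JDK's own XPath API; the document and expression below are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.NodeList;

public class XPathNodeSet {
    // Evaluate an XPath expression against an XML string and return
    // the matched node-set (SMIL's ref attribute requires the set to
    // contain exactly one node).
    static NodeList select(String xml, String expr) throws Exception {
        org.w3c.dom.Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xp = XPathFactory.newInstance().newXPath();
        return (NodeList) xp.evaluate(expr, doc, XPathConstants.NODESET);
    }

    public static void main(String[] args) throws Exception {
        // A tiny stand-in for SMIL instance data.
        String data = "<data><volume>5</volume><muted>false</muted></data>";
        NodeList hits = select(data, "/data/volume");
        System.out.println(hits.getLength());              // 1
        System.out.println(hits.item(0).getTextContent()); // 5
    }
}
```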
This section is informative.
Supported events for event-based timing are formally.
Rationale: Raising the stateChange event on the state element instead of on the data model element itself allows for external data models (which have a distinct xml:id namespace) and for non-XML data models (depending on the expression language).
This section is informative.
The SMIL 3.0 StateSubmission module defines one element, submission, which accepts, among other attributes, the method and replace attributes.
This section is informative
This element was lifted straight from XForms, with the accompanying attributes. We need to follow the XForms lead here. Add some references.
How to serialize and transmit the data. Allowed values are taken from XForms, examples are post and get. We should either refer to the exact list or copy it here.
What to replace with the reply. Allowed values are all for the whole SMIL presentation, instance for the instance data, none for nothing.
This section is informative.
The SMIL 3.0 Language Profile includes the StateSubmission module, and it defines that the submission element must occur in the head section.
This section is informative.
To be provided.
This section is informative.
The SMIL 3.0 StateInterpolation Module defined in this document is a new module which was not part of the SMIL 2.1 specification.
This section is normative.
This section introduces a mechanism whereby document authors can use expressions in attribute values; attributes such as clipEnd and clipBegin can.
/* C preprocessor macro tables for GDB. */ #include "defs.h" #include "gdb_obstack.h" #include "splay-tree.h" #include "symtab.h" #include "symfile.h" #include "objfiles.h" #include "macrotab.h" #include "gdb_assert.h" #include "bcache.h" #include "complaints.h" /* The macro table structure. */ struct macro_table { /* The obstack this table's data should be allocated in, or zero if we should use xmalloc. */ struct obstack *obstack; /* The bcache we should use to hold macro names, argument names, and definitions, or zero if we should use xmalloc. */ struct bcache *bcache; /* The main source file for this compilation unit --- the one whose name was given to the compiler. This is the root of the #inclusion tree; everything else is #included from here. */ struct macro_source_file *main_source; /* The table of macro definitions. This is a splay tree (an ordered binary tree that stays balanced, effectively), sorted by macro name. Where a macro gets defined more than once (presumably with an #undefinition in between), we sort the definitions by the order they would appear in the preprocessor's output. That is, if `a.c' #includes `m.h' and then #includes `n.h', and both header files #define X (with an #undef somewhere in between), then the definition from `m.h' appears in our splay tree before the one from `n.h'. The splay tree's keys are `struct macro_key' pointers; the values are `struct macro_definition' pointers. The splay tree, its nodes, and the keys and values are allocated in obstack, if it's non-zero, or with xmalloc otherwise. The macro names, argument names, argument name arrays, and definition strings are all allocated in bcache, if non-zero, or with xmalloc otherwise. */ splay_tree definitions; }; /* Allocation and freeing functions. */ /* Allocate SIZE bytes of memory appropriately for the macro table T. This just checks whether T has an obstack, or whether its pieces should be allocated with xmalloc.
*/ static void * macro_alloc (int size, struct macro_table *t) { if (t->obstack) return obstack_alloc (t->obstack, size); else return xmalloc (size); } static void macro_free (void *object, struct macro_table *t) { gdb_assert (! t->obstack); xfree (object); } /* If the macro table T has a bcache, then cache the LEN bytes at ADDR there, and return the cached copy. Otherwise, just xmalloc a copy of the bytes, and return a pointer to that. */ static const void * macro_bcache (struct macro_table *t, const void *addr, int len) { if (t->bcache) return bcache (addr, len, t->bcache); else { void *copy = xmalloc (len); memcpy (copy, addr, len); return copy; } } /* If the macro table T has a bcache, cache the null-terminated string S there, and return a pointer to the cached copy. Otherwise, xmalloc a copy and return that. */ static const char * macro_bcache_str (struct macro_table *t, const char *s) { return (char *) macro_bcache (t, s, strlen (s) + 1); } /* Free a possibly bcached object OBJ. That is, if the macro table T has a bcache, it's an error; otherwise, xfree OBJ. */ static void macro_bcache_free (struct macro_table *t, void *obj) { gdb_assert (! t->bcache); xfree (obj); } /* Macro tree keys, w/their comparison, allocation, and freeing functions. */ /* A key in the splay tree. */ struct macro_key { /* The table we're in. We only need this in order to free it, since the splay tree library's key and value freeing functions require that the key or value contain all the information needed to free themselves. */ struct macro_table *table; /* The name of the macro. This is in the table's bcache, if it has one. */ const char *name; /* The source file and line number where the definition's scope begins. This is also the line of the definition itself. */ struct macro_source_file *start_file; int start_line; /* The first source file and line after the definition's scope. (That is, the scope does not include this endpoint.) 
If end_file is zero, then the definition extends to the end of the compilation unit. */ struct macro_source_file *end_file; int end_line; }; /* Return the #inclusion depth of the source file FILE. This is the number of #inclusions it took to reach this file. For the main source file, the #inclusion depth is zero; for a file it #includes directly, the depth would be one; and so on. */ static int inclusion_depth (struct macro_source_file *file) { int depth; for (depth = 0; file->included_by; depth++) file = file->included_by; return depth; } /* Compare two source locations (from the same compilation unit). This is part of the comparison function for the tree of definitions. LINE1 and LINE2 are line numbers in the source files FILE1 and FILE2. Return a value: - less than zero if {LINE,FILE}1 comes before {LINE,FILE}2, - greater than zero if {LINE,FILE}1 comes after {LINE,FILE}2, or - zero if they are equal. When the two locations are in different source files --- perhaps one is in a header, while another is in the main source file --- we order them by where they would appear in the fully pre-processed sources, where all the #included files have been substituted into their places. */ static int compare_locations (struct macro_source_file *file1, int line1, struct macro_source_file *file2, int line2) { /* We want to treat positions in an #included file as coming *after* the line containing the #include, but *before* the line after the include. As we walk up the #inclusion tree toward the main source file, we update fileX and lineX as we go; includedX indicates whether the original position was from the #included file. */ int included1 = 0; int included2 = 0; /* If a file is zero, that means "end of compilation unit." Handle that specially. */ if (! file1) { if (! file2) return 0; else return 1; } else if (! file2) return -1; /* If the two files are not the same, find their common ancestor in the #inclusion tree. 
*/ if (file1 != file2) { /* If one file is deeper than the other, walk up the #inclusion chain until the two files are at least at the same *depth*. Then, walk up both files in synchrony until they're the same file. That file is the common ancestor. */ int depth1 = inclusion_depth (file1); int depth2 = inclusion_depth (file2); /* Only one of these while loops will ever execute in any given case. */ while (depth1 > depth2) { line1 = file1->included_at_line; file1 = file1->included_by; included1 = 1; depth1--; } while (depth2 > depth1) { line2 = file2->included_at_line; file2 = file2->included_by; included2 = 1; depth2--; } /* Now both file1 and file2 are at the same depth. Walk toward the root of the tree until we find where the branches meet. */ while (file1 != file2) { line1 = file1->included_at_line; file1 = file1->included_by; /* At this point, we know that the case the includedX flags are trying to deal with won't come up, but we'll just maintain them anyway. */ included1 = 1; line2 = file2->included_at_line; file2 = file2->included_by; included2 = 1; /* Sanity check. If file1 and file2 are really from the same compilation unit, then they should both be part of the same tree, and this shouldn't happen. */ gdb_assert (file1 && file2); } } /* Now we've got two line numbers in the same file. */ if (line1 == line2) { /* They can't both be from #included files. Then we shouldn't have walked up this far. */ gdb_assert (! included1 || ! included2); /* Any #included position comes after a non-#included position with the same line number in the #including file. */ if (included1) return 1; else if (included2) return -1; else return 0; } else return line1 - line2; } /* Compare a macro key KEY against NAME, the source file FILE, and line number LINE. Sort definitions by name; for two definitions with the same name, place the one whose definition comes earlier before the one whose definition comes later. 
Return -1, 0, or 1 if key comes before, is identical to, or comes after NAME, FILE, and LINE. */ static int key_compare (struct macro_key *key, const char *name, struct macro_source_file *file, int line) { int names = strcmp (key->name, name); if (names) return names; return compare_locations (key->start_file, key->start_line, file, line); } /* The macro tree comparison function, typed for the splay tree library's happiness. */ static int macro_tree_compare (splay_tree_key untyped_key1, splay_tree_key untyped_key2) { struct macro_key *key1 = (struct macro_key *) untyped_key1; struct macro_key *key2 = (struct macro_key *) untyped_key2; return key_compare (key1, key2->name, key2->start_file, key2->start_line); } /* Construct a new macro key node for a macro in table T whose name is NAME, and whose scope starts at LINE in FILE; register the name in the bcache. */ static struct macro_key * new_macro_key (struct macro_table *t, const char *name, struct macro_source_file *file, int line) { struct macro_key *k = macro_alloc (sizeof (*k), t); memset (k, 0, sizeof (*k)); k->table = t; k->name = macro_bcache_str (t, name); k->start_file = file; k->start_line = line; k->end_file = 0; return k; } static void macro_tree_delete_key (void *untyped_key) { struct macro_key *key = (struct macro_key *) untyped_key; macro_bcache_free (key->table, (char *) key->name); macro_free (key, key->table); } /* Building and querying the tree of #included files. */ /* Allocate and initialize a new source file structure. */ static struct macro_source_file * new_source_file (struct macro_table *t, const char *filename) { /* Get space for the source file structure itself. */ struct macro_source_file *f = macro_alloc (sizeof (*f), t); memset (f, 0, sizeof (*f)); f->table = t; f->filename = macro_bcache_str (t, filename); f->includes = 0; return f; } /* Free a source file, and all the source files it #included. 
   */
static void
free_macro_source_file (struct macro_source_file *src)
{
  struct macro_source_file *child, *next_child;

  /* Free this file's children.  */
  for (child = src->includes; child; child = next_child)
    {
      next_child = child->next_included;
      free_macro_source_file (child);
    }

  macro_bcache_free (src->table, (char *) src->filename);
  macro_free (src, src->table);
}


struct macro_source_file *
macro_set_main (struct macro_table *t, const char *filename)
{
  /* You can't change a table's main source file.  What would that do
     to the tree?  */
  gdb_assert (! t->main_source);

  t->main_source = new_source_file (t, filename);
  return t->main_source;
}


struct macro_source_file *
macro_main (struct macro_table *t)
{
  gdb_assert (t->main_source);

  return t->main_source;
}


struct macro_source_file *
macro_include (struct macro_source_file *source, int line, const char *included)
{
  struct macro_source_file *new;
  struct macro_source_file **link;

  /* Find the right position in SOURCE's `includes' list for the new
     file.  Skip inclusions at earlier lines, until we find one at the
     same line or later --- or until the end of the list.  */
  for (link = &source->includes;
       *link && (*link)->included_at_line < line;
       link = &(*link)->next_included)
    ;

  /* Did we find another file already #included at the same line as
     the new one?  */
  if (*link && line == (*link)->included_at_line)
    {
      /* This means the compiler is emitting bogus debug info.  (GCC
         circa March 2002 did this.)  It also means that the splay
         tree ordering function, macro_tree_compare, will abort,
         because it can't tell which #inclusion came first.  But GDB
         should tolerate bad debug info.  So:

         First, squawk.  */
      complaint (&symfile_complaints,
                 _("both `%s' and `%s' allegedly #included at %s:%d"),
                 included, (*link)->filename, source->filename, line);

      /* Now, choose a new, unoccupied line number for this
         #inclusion, after the alleged #inclusion line.  */
      while (*link && line == (*link)->included_at_line)
        {
          /* This line number is taken, so try the next line.  */
          line++;
          link = &(*link)->next_included;
        }
    }

  /* At this point, we know that LINE is an unused line number, and
     *LINK points to the entry an #inclusion at that line should
     precede.  */
  new = new_source_file (source->table, included);
  new->included_by = source;
  new->included_at_line = line;
  new->next_included = *link;
  *link = new;

  return new;
}


struct macro_source_file *
macro_lookup_inclusion (struct macro_source_file *source, const char *name)
{
  /* Is SOURCE itself named NAME?  */
  if (strcmp (name, source->filename) == 0)
    return source;

  /* The filename in the source structure is probably a full path, but
     NAME could be just the final component of the name.  */
  {
    int name_len = strlen (name);
    int src_name_len = strlen (source->filename);

    /* We do mean < here, and not <=; if the lengths are the same,
       then the strcmp above should have triggered, and we need to
       check for a slash here.  */
    if (name_len < src_name_len
        && source->filename[src_name_len - name_len - 1] == '/'
        && strcmp (name, source->filename + src_name_len - name_len) == 0)
      return source;
  }

  /* It's not us.  Try all our children, and return the lowest.  */
  {
    struct macro_source_file *child;
    struct macro_source_file *best = NULL;
    int best_depth = 0;

    for (child = source->includes; child; child = child->next_included)
      {
        struct macro_source_file *result
          = macro_lookup_inclusion (child, name);

        if (result)
          {
            int result_depth = inclusion_depth (result);

            if (! best || result_depth < best_depth)
              {
                best = result;
                best_depth = result_depth;
              }
          }
      }

    return best;
  }
}



/* Registering and looking up macro definitions.  */


/* Construct a definition for a macro in table T.  Cache all strings,
   and the macro_definition structure itself, in T's bcache.  */
static struct macro_definition *
new_macro_definition (struct macro_table *t,
                      enum macro_kind kind,
                      int argc, const char **argv,
                      const char *replacement)
{
  struct macro_definition *d = macro_alloc (sizeof (*d), t);

  memset (d, 0, sizeof (*d));
  d->table = t;
  d->kind = kind;
  d->replacement = macro_bcache_str (t, replacement);

  if (kind == macro_function_like)
    {
      int i;
      const char **cached_argv;
      int cached_argv_size = argc * sizeof (*cached_argv);

      /* Bcache all the arguments.  */
      cached_argv = alloca (cached_argv_size);
      for (i = 0; i < argc; i++)
        cached_argv[i] = macro_bcache_str (t, argv[i]);

      /* Now bcache the array of argument pointers itself.  */
      d->argv = macro_bcache (t, cached_argv, cached_argv_size);
      d->argc = argc;
    }

  /* We don't bcache the entire definition structure because it's got
     a pointer to the macro table in it; since each compilation unit
     has its own macro table, you'd only get bcache hits for identical
     definitions within a compilation unit, which seems unlikely.

     "So, why do macro definitions have pointers to their macro tables
     at all?"  Well, when the splay tree library wants to free a
     node's value, it calls the value freeing function with nothing
     but the value itself.  It makes the (apparently reasonable)
     assumption that the value carries enough information to free
     itself.  But not all macro tables have bcaches, so not all macro
     definitions would be bcached.  There's no way to tell whether a
     given definition is bcached without knowing which table the
     definition belongs to.  ...  blah.  The thing's only sixteen
     bytes anyway, and we can still bcache the name, args, and
     definition, so we just don't bother bcaching the definition
     structure itself.  */
  return d;
}


/* Free a macro definition.  */
static void
macro_tree_delete_value (void *untyped_definition)
{
  struct macro_definition *d = (struct macro_definition *) untyped_definition;
  struct macro_table *t = d->table;

  if (d->kind == macro_function_like)
    {
      int i;

      for (i = 0; i < d->argc; i++)
        macro_bcache_free (t, (char *) d->argv[i]);
      macro_bcache_free (t, (char **) d->argv);
    }

  macro_bcache_free (t, (char *) d->replacement);
  macro_free (d, t);
}


/* Find the splay tree node for the definition of NAME at LINE in
   SOURCE, or zero if there is none.  */
static splay_tree_node
find_definition (const char *name,
                 struct macro_source_file *file,
                 int line)
{
  struct macro_table *t = file->table;
  splay_tree_node n;

  /* Construct a macro_key object, just for the query.  */
  struct macro_key query;

  query.name = name;
  query.start_file = file;
  query.start_line = line;
  query.end_file = NULL;

  n = splay_tree_lookup (t->definitions, (splay_tree_key) &query);

  if (! n)
    {
      /* It's okay for us to do two queries like this: the real work
         of the searching is done when we splay, and splaying the tree
         a second time at the same key is a constant time operation.
         If this still bugs you, you could always just extend the
         splay tree library with a predecessor-or-equal operation, and
         use that.  */
      splay_tree_node pred = splay_tree_predecessor (t->definitions,
                                                     (splay_tree_key) &query);

      if (pred)
        {
          /* Make sure this predecessor actually has the right name.
             We just want to search within a given name's
             definitions.  */
          struct macro_key *found = (struct macro_key *) pred->key;

          if (strcmp (found->name, name) == 0)
            n = pred;
        }
    }

  if (n)
    {
      struct macro_key *found = (struct macro_key *) n->key;

      /* Okay, so this definition has the right name, and its scope
         begins before the given source location.  But does its scope
         end after the given source location?  */
      if (compare_locations (file, line, found->end_file, found->end_line) < 0)
        return n;
      else
        return 0;
    }
  else
    return 0;
}


/* If NAME already has a definition in scope at LINE in SOURCE, return
   the key.  If the old definition is different from the definition
   given by KIND, ARGC, ARGV, and REPLACEMENT, complain, too.
   Otherwise, return zero.  (ARGC and ARGV are meaningless unless KIND
   is `macro_function_like'.)  */
static struct macro_key *
check_for_redefinition (struct macro_source_file *source, int line,
                        const char *name, enum macro_kind kind,
                        int argc, const char **argv,
                        const char *replacement)
{
  splay_tree_node n = find_definition (name, source, line);

  if (n)
    {
      struct macro_key *found_key = (struct macro_key *) n->key;
      struct macro_definition *found_def
        = (struct macro_definition *) n->value;
      int same = 1;

      /* Is this definition the same as the existing one?
         According to the standard, this comparison needs to be done
         on lists of tokens, not byte-by-byte, as we do here.  But
         that's too hard for us at the moment, and comparing
         byte-by-byte will only yield false negatives (i.e., extra
         warning messages), not false positives (i.e., unnoticed
         definition changes).  */
      if (kind != found_def->kind)
        same = 0;
      else if (strcmp (replacement, found_def->replacement))
        same = 0;
      else if (kind == macro_function_like)
        {
          if (argc != found_def->argc)
            same = 0;
          else
            {
              int i;

              for (i = 0; i < argc; i++)
                if (strcmp (argv[i], found_def->argv[i]))
                  same = 0;
            }
        }

      if (! same)
        {
          complaint (&symfile_complaints,
                     _("macro `%s' redefined at %s:%d; original definition at %s:%d"),
                     name, source->filename, line,
                     found_key->start_file->filename,
                     found_key->start_line);
        }

      return found_key;
    }
  else
    return 0;
}


void
macro_define_object (struct macro_source_file *source, int line,
                     const char *name, const char *replacement)
{
  struct macro_table *t = source->table;
  struct macro_key *k;
  struct macro_definition *d;

  k = check_for_redefinition (source, line,
                              name, macro_object_like,
                              0, 0,
                              replacement);

  /* If we're redefining a symbol, and the existing key would be
     identical to our new key, then the splay_tree_insert function
     will try to delete the old definition.  When the definition is
     living on an obstack, this isn't a happy thing.

     Since this only happens in the presence of questionable debug
     info, we just ignore all definitions after the first.  The only
     case I know of where this arises is in GCC's output for
     predefined macros, and all the definitions are the same in that
     case.  */
  if (k && ! key_compare (k, name, source, line))
    return;

  k = new_macro_key (t, name, source, line);
  d = new_macro_definition (t, macro_object_like, 0, 0, replacement);
  splay_tree_insert (t->definitions, (splay_tree_key) k, (splay_tree_value) d);
}


void
macro_define_function (struct macro_source_file *source, int line,
                       const char *name, int argc, const char **argv,
                       const char *replacement)
{
  struct macro_table *t = source->table;
  struct macro_key *k;
  struct macro_definition *d;

  k = check_for_redefinition (source, line,
                              name, macro_function_like,
                              argc, argv,
                              replacement);

  /* See comments about duplicate keys in macro_define_object.  */
  if (k && ! key_compare (k, name, source, line))
    return;

  /* We should also check here that all the argument names in ARGV are
     distinct.  */

  k = new_macro_key (t, name, source, line);
  d = new_macro_definition (t, macro_function_like, argc, argv, replacement);
  splay_tree_insert (t->definitions, (splay_tree_key) k, (splay_tree_value) d);
}


void
macro_undef (struct macro_source_file *source, int line,
             const char *name)
{
  splay_tree_node n = find_definition (name, source, line);

  if (n)
    {
      /* This function is the only place a macro's end-of-scope
         location gets set to anything other than "end of the
         compilation unit" (i.e., end_file is zero).  So if this macro
         already has its end-of-scope set, then we're probably seeing
         a second #undefinition for the same #definition.  */
      struct macro_key *key = (struct macro_key *) n->key;

      if (key->end_file)
        {
          complaint (&symfile_complaints,
                     _("macro '%s' is #undefined twice, at %s:%d and %s:%d"),
                     name, source->filename, line,
                     key->end_file->filename, key->end_line);
        }

      /* Whatever the case, wipe out the old ending point, and make
         this the ending point.  */
      key->end_file = source;
      key->end_line = line;
    }
  else
    {
      /* According to the ISO C standard, an #undef for a symbol that
         has no macro definition in scope is ignored.  So we should
         ignore it too.  */
#if 0
      complaint (&symfile_complaints,
                 _("no definition for macro `%s' in scope to #undef at %s:%d"),
                 name, source->filename, line);
#endif
    }
}


struct macro_definition *
macro_lookup_definition (struct macro_source_file *source,
                         int line, const char *name)
{
  splay_tree_node n = find_definition (name, source, line);

  if (n)
    return (struct macro_definition *) n->value;
  else
    return 0;
}


struct macro_source_file *
macro_definition_location (struct macro_source_file *source,
                           int line,
                           const char *name,
                           int *definition_line)
{
  splay_tree_node n = find_definition (name, source, line);

  if (n)
    {
      struct macro_key *key = (struct macro_key *) n->key;

      *definition_line = key->start_line;
      return key->start_file;
    }
  else
    return 0;
}



/* Creating and freeing macro tables.  */


struct macro_table *
new_macro_table (struct obstack *obstack, struct bcache *b)
{
  struct macro_table *t;

  /* First, get storage for the `struct macro_table' itself.  */
  if (obstack)
    t = obstack_alloc (obstack, sizeof (*t));
  else
    t = xmalloc (sizeof (*t));

  memset (t, 0, sizeof (*t));
  t->obstack = obstack;
  t->bcache = b;
  t->main_source = NULL;
  t->definitions = (splay_tree_new_with_allocator
                    (macro_tree_compare,
                     ((splay_tree_delete_key_fn) macro_tree_delete_key),
                     ((splay_tree_delete_value_fn) macro_tree_delete_value),
                     ((splay_tree_allocate_fn) macro_alloc),
                     ((splay_tree_deallocate_fn) macro_free),
                     t));

  return t;
}


void
free_macro_table (struct macro_table *table)
{
  /* Free the source file tree.  */
  free_macro_source_file (table->main_source);

  /* Free the table of macro definitions.  */
  splay_tree_delete (table->definitions);
}
In this post I will be rather wordy, as it marks a turning point – the realisation of a dream 😉 If you want to avoid some of the excessive chatter I’ve divided this into sections so you can easily get past the BS – just scroll down to “The Problem”, or if you want to avoid explanations, scroll down to “And the Result” to see the script itself – extensively commented.
The Background
Sometimes when a business project is of sufficient size, snap decisions get made which have long-term impacts – sadly, with our implementation there were a number of wants in sensitive areas that required mods which were authorised, and which we are still paying for over seven years later.
There are always holes in functionality – even with tailored software you’ll discover a perceived need in a certain piece of functionality that was overlooked or was only uncovered after extensive use of a product. People tend to think differently, and some work in rather unusual chaotic ways – so your efficient way may not be the efficient way for someone else.
For those of you that haven’t met me, I have been quite an advocate of modification removal through the use of jscripts – I’ve tried to raise awareness of some of the benefits of jscripts instead of mods in the NZ M3 community – even made a few jabs at Lawson to try to get them to get the word out to the consultants. In fact, one of the selling points of upgrading to M3 10.1 here was that we would have access to two very kewl technologies – Smart Office (yay!) and WebServices – which should allow us to remove almost all of our modifications.
To keep timeframes short and keep our staff focused, our upgrade was in place – all mods were uplifted as-is, with some consulting time spared to identify areas where we could look at process improvement/mod removal. We would then quietly start working on modification removal through process change or jscripts throughout the 2011 year. This didn’t pan out due to spending the better part of 2011 scrambling to get our network running properly after some substantial earthquakes.
Why this long post? Well, if you’ve been reading this blog for a while, you’ll notice I like the sound of my own words 😉 And well, to be honest, I am quite excited to finally be able to move forward and clean up a lot of loose ends.
The Problem
We are predominantly an export company, ~80 – 85% of our product gets exported in shipping containers. A single container could have a single SKU of product, or it could have a dozen different SKUs. However, one thing that we have to do is ensure containers are packed to capacity. A container that isn’t packed to capacity increases the chances of product getting damaged considerably and isn’t very efficient.
We had previous systems where we could key in an order and it would calculate the approximate percentage of a 20′ container they would consume – these systems varied from a spreadsheet to a sprawling database in FileMaker. Our staff could determine how many containers needed to be booked or if an order needed to be increased or decreased.
In the efforts to remove peripheral repositories of data, a modification was made to OIS101 so it would take the Free Cap Units in MMS001 and calculate a container percentage of each individual line. It would then also provide a total percentage. The Free Cap Units in MMS001 were expressed as a percentage of a 20′ container that a single unit would consume.
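As a rough sketch of the arithmetic the mod performed (Python rather than the jscript used later in this post; the SKU names and numbers are invented, and the field names mirror the M3 fields MMFCU1 and ORQT referred to elsewhere in the post):

```python
# Free Cap Units (MMFCU1) express the share of a 20' container that one
# unit of an item consumes, so a line's container percentage is simply
# units-per-container times the ordered quantity (ORQT).
def line_container_percentage(free_cap_units, order_qty):
    return free_cap_units * order_qty

# Invented example lines.
lines = [
    {"ITNO": "SKU-A", "MMFCU1": 0.5,  "ORQT": 100},  # 0.5% of a container per unit
    {"ITNO": "SKU-B", "MMFCU1": 1.25, "ORQT": 20},
]

per_line = [line_container_percentage(l["MMFCU1"], l["ORQT"]) for l in lines]
total = sum(per_line)

print(per_line, total)  # [50.0, 25.0] 75.0 -> roughly 3/4 of one 20' container
```

With numbers like these, staff can see at a glance whether an order needs to be increased or decreased to fill a whole container.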
Given the modification is in a rather sensitive area – which has been proven to create the occasional headache – and given the modification is purely for display, it would be nice to have it removed. I also knew it would be challenging, as there would be a need for several ‘new’ techniques to achieve success.
Sometimes the Simplest Things…
…can be insanely complicated. 🙂
The concept was that I’d add a new column to the ListView in the Init(), then spin up a BackgroundWorker thread to retrieve the Free Cap units through webservices – this WebService would actually use the SQL component to request all of the lines and do the actual math to calculate the freecap, returning each line with the appropriately calculated free cap value. I’d iterate through the lines, extract the ITNO and the ORQT and search for them in the data returned from the webservice so I could extract the container percentage. Then I would populate my column with the data and finally create a total – simply adding all of the container percentages together – and write it out to a TextBox I added.
Adding columns in Smart Office wasn’t hard, as I previously posted. Actually getting them to work properly however was non-trivial – so the entire project got shelved due to frustration and time commitments to disaster recovery.
But then I spied a posting by Thibaud which described how to do it quite nicely. I did some work based on the posting and got it working quite happily, so I was nearly ready to spend some time on this again.
Over and above this, as I mentioned above I had wanted to look into spinning up a background thread – I had been toying with the concept, but as part of it wanted to figure out how to use delegates in jscript to get around updating content on controls from a different thread (only the UI thread can update controls). Eventually I re-read the .Net documentation and realised that the OnWorkerCompleted event was called on the UI thread – I swear the MSDN documentation didn’t say that when I first started playing with BackgroundWorker, but then I am prone to skim reading :-). Shortly after that, and much to my delight, Karin posted an article on the BackgroundWorker thread – I did have to laugh at the Simpsons picture.
So, all of the pieces were falling into place…
The WebService
I won’t go through the creation of the WebService as I have done so in a previous post – and this particular posting is long enough as is. However, here is a snapshot of the completed WebService.
WebService Tweaks from Previous Code
The previous code that I had used to process webservice results wasn’t up to the task of what I wanted to achieve. I needed to take the stream that was returned and then read it in a controlled fashion, creating objects from a new class which would be added to an array that we could search.
To this end, I made significant changes to the decodeSOAPResult() function, including needing to orientate my code – getting it to the correct depth to start processing the OrderLineCapItem elements. To do this I created this wee loop.
//; } } } }
Once we were at our starting point we could extract useful information. I would process an element at a time, extracting the value and populating an OrderLineCapItem object with the values until we had populated all of our values. Then it would be pushed into an array.
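The same idea in outline, reading each repeated element into a small object and collecting the objects into a searchable list, can be sketched in Python (not the jscript the script itself uses; the payload below is a cut-down, invented stand-in for the web service response):

```python
import xml.etree.ElementTree as ET

# Invented, simplified stand-in for the SOAP body returned by the service.
payload = """
<root>
  <OrderLineCapItem>
    <OBORNO>0000001</OBORNO><OBITNO>SKU-A</OBITNO>
    <MMFCU1>0.5</MMFCU1><OBORQT>100</OBORQT>
    <CONTAINERPERCENTAGE>50.0</CONTAINERPERCENTAGE>
  </OrderLineCapItem>
  <OrderLineCapItem>
    <OBORNO>0000001</OBORNO><OBITNO>SKU-B</OBITNO>
    <MMFCU1>1.25</MMFCU1><OBORQT>20</OBORQT>
    <CONTAINERPERCENTAGE>25.0</CONTAINERPERCENTAGE>
  </OrderLineCapItem>
</root>
"""

class OrderLineCapItem:
    """One parsed line, mirroring the element names in the payload."""
    def __init__(self, elem):
        self.itno = elem.findtext("OBITNO")
        self.orqt = float(elem.findtext("OBORQT"))
        self.container_percentage = float(elem.findtext("CONTAINERPERCENTAGE"))

items = [OrderLineCapItem(e)
         for e in ET.fromstring(payload).iter("OrderLineCapItem")]

def find_percentage(items, itno, orqt):
    # Mirror the lookup: match the line on item number and quantity.
    for item in items:
        if item.itno == itno and item.orqt == orqt:
            return item.container_percentage
    return None

print(find_percentage(items, "SKU-B", 20))  # 25.0
```

The jscript version does the equivalent with XmlReader's ReadStartElement/ReadString/ReadEndElement calls, as shown in the full script further down.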
Not All ListViews are Created Equal
I knew that not all of our views had the Item Number and the Quantity in the same column, so this meant I needed to be pretty dynamic in the way that I determined their location. After a little bit of fumbling around I noticed that the ListControl.Columns had the four digit name of the column, so I created a little loop that would look for ITNO and ORQT. As it turns out, one of the Views used ORQA instead of ORQT – so we also look for ORQA.
I also had my memory refreshed :-[ that there are events that will remove controls from the panel and call the Init(); this meant that I needed to dynamically determine where the percentage column that I added was. It should have been trivial, but it wasn’t quite as easy as I hoped. I ended up creating a function which would loop through the ListView columns and search for the text that I added.
And the Result…
And finally we get to the code itself. I still have user validation to occur (mainly around pulling out the modification itself rather than the script), but we’re looking pretty good. Don’t forget to change the username and password for logging in to the webservices in createSOAPRequest()
As always, code is presented as is without any warranty of any sort. It may or may not work for you, create world peace or put an end to reality tv…
import System;
import System.Windows;
import System.Windows.Controls;
import MForms;
import System.IO;
import System.Net;
// import System.Collections.Generic;
//import System.Collections.ObjectModel;
import System.ComponentModel;
import Mango.UI.Core;
import System.Xml;
import System.Xml.Serialization;
import System.Web.Services.Protocols;
import Mango.UI.Services.Lists;

package MForms.JScript {
    class ColumnTest_V007 {
        var gdebug;
        var giITNOPos : Int32 = -1;   // item number position
        var giORQTPos : Int32 = -1;   // order quantity position
        var gtbTotalPercentage : TextBox = null;
        var gController = null;
        var glvListView = null;
        var gobjArray : Array = null;
        var giItemsPopulatedUpTo : Int32 = 0; // this keeps track of the number of rows that we populated in our column
        var giColumnIndex : Int32 = -1;       // this is where we have created our column
        var gbwBackgroundWorker : BackgroundWorker = new BackgroundWorker();
        var
<soapenv:Header><mws2:mws><mws2:user>potatoit</mws2:user><mws2:password>potatoit </mws2:password><mws2:company>100</mws2:company><mws2:division>IFL</mws2:division></mws2:mws></soapenv:Header><soapenv:Body><get:GetFreeCaps><get:OrderNumber>" + astrOrderNumber + "</get:OrderNumber></get:GetFreeCaps></soapenv:Body></soapenv:Envelope>");
        }

        // in previous situations when using WebServices I didn't need to
        // worry so much about a list of data being returned, this meant
        // that tweaks are needed to make this work with a list of data
        public function doRequest(astrXMLRequest : String) : Array {
            var result : Array;

            // we are going to use the HttpWebRequest object
            //
            // and we want to connect to the ItemFreeCaps2 service
            var hwrRequest : HttpWebRequest = WebRequest.Create("");

            // ensure we actually managed to create something
            if(null != hwrRequest) {
                // here we're defining our actions and content types
                hwrRequest.Headers.Add("SOAPAction","\"\"");
                hwrRequest.ContentType = "text/xml;charset=\"utf-8\"";
                hwrRequest.Method = "POST";
                hwrRequest.Accept = "text/xml";
                hwrRequest.Proxy = GlobalProxySelection.GetEmptyWebProxy();

                // we are going to use a stream to write out our request (and also read it later)
                var strmStream : Stream = hwrRequest.GetRequestStream();
                if(null != strmStream) {
                    // SOAP is basically just xml, so we are going to use the XML framework
                    // to make our lives easier.
                    // Create an XML Document
                    var xmdDocument : XmlDocument = new XmlDocument();
                    if(null != xmdDocument) {
                        // we then add the String to our XML document
                        xmdDocument.LoadXml(astrXMLRequest);
                        // the save of the document to our stream actually sends the request
                        // to the server
                        xmdDocument.Save(strmStream);
                        // close our stream
                        strmStream.Close();

                        // this section is wrapped in a try .. catch()
                        // block because I had a lot of problems getting
                        // this running initially.
                        try {
                            // now we want to get a response
                            var wresponse : WebResponse = hwrRequest.GetResponse();
                            if(null != wresponse) {
                                // we like using streams, so get the stream
                                // connection
                                strmStream = wresponse.GetResponseStream();
                                if(null != strmStream) {
                                    // create a streamreader to retrieve the data
                                    var srStreamReader : StreamReader = new StreamReader(strmStream);
                                    if(null != srStreamReader) {
                                        // and finally we pass the stream handle to the decode
                                        // function
                                        var objResult = decodeSOAPResult(srStreamReader);
                                        if(null != objResult) {
                                            result = objResult;
                                        } else {
                                            gdebug.WriteLine("no result from request");
                                        }
                                        // close the response
                                        wresponse.Close();
                                        // close the stream reader
                                        srStreamReader.Close();
                                    }
                                }
                            } else {
                                gdebug.WriteLine("No Response was returned");
                            }
                        } catch(e) {
                            gdebug.WriteLine("Exception: " + e.message);
                        }
                    }
                }
            } else {
                gdebug.WriteLine("doRequest() unable to create");
            }

            return(result);
        }

        public function decodeSOAPResult(asrStream : StreamReader) {
            var result;

            try {
                // create an XmlReader from the stream handle
                var xmrReader = XmlReader.Create(asrStream);

                //
                ; } } } }

                // this is the array where we will be storing our objects
                var olciArray = new Array();

                // continue reading until we hit the end of file (EOF)
                while(false == xmrReader.EOF) {
                    var olciCurrent : OrderLineCapItem = new OrderLineCapItem();

                    xmrReader.ReadStartElement("OrderLineCapItem");

                    xmrReader.ReadStartElement("OBORNO");
                    olciCurrent.OBORNO = xmrReader.ReadString();
                    xmrReader.ReadEndElement();

                    xmrReader.ReadStartElement("OBITNO");
                    olciCurrent.OBITNO = xmrReader.ReadString();
                    xmrReader.ReadEndElement();

                    xmrReader.ReadStartElement("MMFCU1");
                    olciCurrent.MMFCU1 = xmrReader.ReadString();
                    xmrReader.ReadEndElement();

                    xmrReader.ReadStartElement("OBORQT");
                    olciCurrent.OBORQT = xmrReader.ReadString();
                    xmrReader.ReadEndElement();

                    xmrReader.ReadStartElement("CONTAINERPERCENTAGE");
                    olciCurrent.CONTAINERPERCENTAGE = xmrReader.ReadString();
                    xmrReader.ReadEndElement();

                    xmrReader.ReadEndElement();

                    // add our newly populate object to the array
                    olciArray.push(olciCurrent);

                    if(false == String.IsNullOrEmpty(xmrReader.Name)) {
                        if(0 != String.Compare("OrderLineCapItem",xmrReader.Name)) {
                            // exit if the next element isn't an OrderLineCapItem element
                            break;
                        }
                    }
                }

                // close the reader
                xmrReader.Close();

                // return our array
                result = olciArray;
            } catch(ex) {
                gdebug.WriteLine("Exception decoding the SOAP request " + ex.message);
            }

            return(result);
        }

        public function OnRequestCompleted(sender: Object, e: RequestEventArgs) {
            // verify that these are the events we are looking for!
            if(e.CommandType == MNEProtocol.CommandTypePage) {
                // we had a page event, this means that we may have new data
                // that needs its percentages to be updated
                populateNewColumn(glvListView, gobjArray);
            }
        }
    }

    public class OrderLineCapItem {
        var OBORNO : String;
        var OBITNO : String;
        var MMFCU1 : double;
        var OBORQT : double;
        var CONTAINERPERCENTAGE : double;
    }
}
Happy coding!
Hi Scott,
Have you ever tried to add a column in a list that contains editable cells ? (like MMS424/B1 for example). The command “CopyTo” fails because the editable cells do not have the type System.String. You cannot remove and insert the new rows so easely.
On smartofficeblog.com, Norp said that editable cells values could be retrieved by MForms.MFormsUtil.GetCellText(lviItem,k) but i wonder how to create the new rows… ? When you manually pack in MMS424 for example, the packed quantity is filled by default but with the script i have written it is blank. Any idea ?
Hi Maxric,
I haven’t personally done it, but I would probably recommend asking over on the Smart Office blog.
If you dig in to the ItemsSource it requires a fair amount to unwind how it’s built. For example, the first item in the ItemsSource is an observable collection of Mango.UI.Services.Lists.ListRow which represents each row. The ListRow has an Items array which stores each cell in the row. If you do a GetType() on each of those cells in a ListView that has an editable column, then you’ll see the read only columns return “System.String”, the Editable cells are Mango.UI.Services.Lists.EditableCell
So you could reverse engineer it…but…
Ok, I did some digging and experimentation. If you look at the example from the Smart Office Blog here:
line 43 is
var newItems = new String[columnCount];
change it so it is var newItems = new Object[columnCount];
and it should work.
Cheers,
Scott
Thank you for your answer, it seems to work now!
Error TF248001 when uploading process template
I have a work item type in my process template that is setup like this:
<FIELD name="Project Name" refname="MyCompany.Common.ProjectName" type="String">
<READONLY not="[project]\Project Administrators" />
</FIELD>
But when I try to upload the process template I get the following error in the log:
Exception Message: TF248001: You have specified a project-scoped group, '[project]\Project Administrators', which is not allowed for a global list or a global workflow that is defined for a project collection. Remove the reference to the specified group. (type ProvisionValidationException)
My field is not a global list or part of a global workflow, so I'm not sure why I'm getting this error. If I remove the READONLY field the process template uploads just fine. But, I can load that specific work item type manually without any problems into a team project.
Any idea why it's a problem loading when done as part of the process template upload?
This is for TFS2010.
Thanks.
- Edited by Anthony Hunter Monday, October 01, 2012 4:39 PM
Hi Anthony,
Thank you for your post.
I tested the issue on my side: I added a field the same as in your post to a process template, and I could save the process template and upload it to the TFS server. I could also create a new team project with that process template.
I hope you can provide the following information to help narrow down the issue: the detailed steps you performed on the process template, including which WIT you added the field to, and how you added that field (did you edit an existing field, or create a new WIT?).
Regards,
Lily Wu [MSFT]
MSDN Community Support | Feedback to us
I'm working with the Sprint WIT from the Scrum template. I have used the Process Editor from the Power Tools to make the change. I then export the WIT and add it to my process template. When I export it, I don't save the global lists.
I'll try making the change manually to the xml file.
Hi Anthony,
I have pretty much run into the same problem and it just does not seem to go away with SP1 (which I already have). I am basically restricting the values in certain fields using groups:
<FIELD name="Assigned To" refname="System.AssignedTo" type="String" syncnamechanges="true"> <ALLOWEDVALUES filteritems="excludegroups"> <LISTITEM value="[Project]\People" /> </ALLOWEDVALUES> <ALLOWEXISTINGVALUE /> <HELPTEXT>The person currently working on this story</HELPTEXT> </FIELD>
Unfortunately, I can't avoid having restrictions using groups. Any ideas?
Paritosh
Hello
I have the same problem even after TFS SP1. My list is defined as
<FIELD name="Assigned To" refname="System.AssignedTo" type="String" syncnamechanges="true" reportable="dimension">
<ALLOWEXISTINGVALUE />
<HELPTEXT>The person currently working on this bug (Uses team defined values)</HELPTEXT>
<!-- if {USER_RULES} -->
<ALLOWEDVALUES expanditems="true" filteritems="excludegroups"><LISTITEM value="[project]\Contributors" /></ALLOWEDVALUES>
<!-- endif {USER_RULES} -->
</FIELD>
It fails with the TF248001 error saying that "You have specified a project-scoped group, '[project]\Contributors', which is not allowed for a global list or a global workflow that is defined for a project collection.".
I did apply TFS SP1.
Any suggestions?
Thx
Prakash
Hi Prakash,
The key here is to use the [GLOBAL] keyword instead of the project- or collection-scoped one. I used [GLOBAL]\People instead of [PROJECT]\People, which worked for me.
Try using [GLOBAL]\CONTRIBUTORS. It should work.
Paritosh Arya
- Proposed as answer by Paritosh Arya Thursday, December 06, 2012 9:07 AM
- Unproposed as answer by Paritosh Arya Thursday, December 06, 2012 9:07 AM
If your project is in a different collection this will work, but if all projects reside under the same collection, you will have problems.
You can refer to this link | http://social.msdn.microsoft.com/Forums/vstudio/en-US/a5ed68b5-913e-461d-88f5-53456693b9ad/error-tf248001-when-uploading-process-template?forum=tfsprocess | CC-MAIN-2014-15 | refinedweb | 625 | 54.22 |
I want to make a program that accesses images from files, encodes them, and sends them to an server.
Than the server is supposed to decode the image, and save it to file.
I tested the image encoding itself, and it worked, so the problem lies in the server and client connection.
Here is the server:
import socket
import errno
import base64
from PIL import Image
import StringIO

def connect(c):
    try:
        image = c.recv(8192)
        return image
    except IOError as e:
        if e.errno == errno.EWOULDBLOCK:
            connect(c)

def Main():
    host = '138.106.180.21'
    port = 12345

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(1)

    while True:
        c, addr = s.accept()
        c.setblocking(0)
        print "Connection from: " + str(addr)
        image = c.recv(8192)#connect(c)
        imgname = 'test.png'
        fh = open(imgname, "wb")
        if image == 'cusdom_image':
            with open('images.png', "rb") as imageFile:
                image = ''
                image = base64.b64encode(imageFile.read())
        print image
        fh.write(image.decode('base64'))
        fh.close()

if __name__ == '__main__':
    Main()
And here is the client:

import socket
import base64
from PIL import Image
import StringIO
import os, sys

ip = '138.106.180.21'
port = 12345

print 'Add event executed'

s = socket.socket()
s.connect((ip, port))

image_path = '/home/gilgamesch/Bilder/Bildschirmfoto.png'

print os.getcwd()
olddir = os.getcwd()
os.chdir('/')
print os.getcwd()

if image_path != '':
    with open(image_path, "rb") as imageFile:
        image_data = base64.b64encode(imageFile.read())
    print 'open worked'
else:
    image_data = 'cusdom_image'

os.chdir(olddir)

s.send(image_data)
s.close()
Traceback (most recent call last):
  File "imgserv.py", line 49, in <module>
    Main()
  File "imgserv.py", line 34, in Main
    image = c.recv(8192)#connect(c)
socket.error: [Errno 11] Resource temporarily unavailable
In the server you are setting the remote socket (the one returned by accept()) to non-blocking mode, which means that I/O on that socket will fail immediately with an exception if there is no data to read.

There will usually be a period of time between establishing the connection with the server and the image data being sent by the client. The server attempts to read data immediately once the connection is accepted; however, there might not be any data to read yet, so c.recv() raises a socket.error: [Errno 11] Resource temporarily unavailable exception. Errno 11 corresponds to EWOULDBLOCK, so recv() aborted because there was no data ready to read.

Your code does not seem to require non-blocking sockets, because there is an accept() at the top of the while loop and so only one connection can be handled at a time. You can just remove the call to c.setblocking(0) and this problem should go away.
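A minimal sketch of the blocking pattern described above, run end-to-end over the loopback interface (Python 3 here, while the question is Python 2; the payload and the threading scaffolding are invented for the demo, and it shows only the accept/recv pattern, not the image-saving logic):

```python
import socket
import threading

def run_server(ready, result):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))        # ephemeral port for the demo
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()
    conn, addr = srv.accept()          # blocks until a client connects
    # No setblocking(0): recv() now waits for data instead of raising
    # "[Errno 11] Resource temporarily unavailable".
    chunks = []
    while True:
        data = conn.recv(8192)
        if not data:                   # client closed the connection
            break
        chunks.append(data)
    result["data"] = b"".join(chunks)
    conn.close()
    srv.close()

ready = {"event": threading.Event()}
result = {}
t = threading.Thread(target=run_server, args=(ready, result))
t.start()
ready["event"].wait()                  # wait until the server is listening

cli = socket.socket()
cli.connect(("127.0.0.1", ready["port"]))
cli.sendall(b"pretend-base64-image-data")
cli.close()
t.join()

print(result["data"])  # b'pretend-base64-image-data'
```

Because the socket stays in blocking mode, recv() simply waits for the client's bytes to arrive rather than failing the instant the connection is accepted.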
When reviewing issues for react-rails, I see many questions about how to gather JavaScript dependencies with Sprockets. Here is how I use Sprockets to manage JavaScript dependencies.
I’m looking for a few things in a JavaScript bundler:
- Stability: I don’t want any changes to my dependencies unless I explicitly make them.
- Clarity: I want to be able to quickly tell what dependencies I have (library and version).
- Insulation: I don’t want to rely on external services during development, deployment or runtime (except for downloading new dependencies, of course)
- Feature-completeness: I want to concatenate and minify my assets and serve them with cache headers
Using Sprockets
To add a new dependency:
- Find a non-minified, browser-ready version of your dependency
- Add it to app/assets/javascripts/vendor/{library-name}-v{version-number}.js (for example, app/assets/javascripts/vendor/moment-v2.13.0.js)
- Require it in application.js with //= require ./vendor/moment-v2.13.0
- Access the global variable as needed in your app (eg moment)
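Putting those steps together, a manifest might look like this (the underscore entry and the require_tree call are illustrative additions, not from the post):

```js
// app/assets/javascripts/application.js
//
// Vendored dependencies, required explicitly with their versions:
//= require ./vendor/moment-v2.13.0
//= require ./vendor/underscore-v1.8.3
//
// ...then the application's own code:
//= require_tree ./app
```

Sprockets concatenates these in order, so each library's global is defined before your application code runs.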
To update a dependency:
- Find a non-minified, browser-ready version of the updated dependency
- Add it to app/assets/javascripts/vendor/{library-name}-v{version-number}.js and remove the old version from that directory
- Update the //= require directive with the new version number
- Check the dependency's changelog and update your app as needed. (Search your project for the global variable to find usages, eg moment.)
To remove a dependency:
- Remove its file (app/assets/javascripts/vendor/{library-name}-v{version-number}.js)
- Remove the //= require directive
- Search your project for the global variable and remove all usages
Finding a browser-ready file
This got its own page: Finding a browser-ready file.
Adding the file to vendor/
Use an unminified version of the library. It will help in debugging development and viewing diffs when you update the dependency. Have no fear, Sprockets will minify it for you for production.
Include the version number in the file name. This will give you more confidence in updating the library, since you’ll know what version you’re coming from.
Integrating with Sprockets
The //= require ./vendor/{library}-v{version} directive is your friend. Like an entry in package.json, it tells the reader what dependency you have.
Now, your library will be accessible by its global name, such as React, d3 or Immutable.
Consuming a library via global variable is not ideal. But it does help you remember that, at the end of the day, the browser is one giant, mutable namespace, so you must be a good citizen! At least global variables can be grepped like any other dependency.
Consider isolating your dependency. For example, you could wrap Pusher in an application-specific event emitter. This way, when you update Pusher, you only have to check one file for its usages. (Some libraries are poor candidates for isolation. My app will never be isolated from React!)
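As a sketch of what that isolation can look like (createEventBus and the channel/event names here are invented for illustration; Pusher's real client API has more surface than this):

```javascript
// A thin, app-specific event bus. Only this factory touches the
// vendored client's API (subscribe/bind); the rest of the app depends
// on this wrapper, so upgrading the library means rechecking one file.
function createEventBus(client) {
  return {
    subscribe: function (channelName, eventName, callback) {
      var channel = client.subscribe(channelName);
      channel.bind(eventName, callback);
      return channel;
    }
  };
}
```

Application code would build the bus once at boot with the real client and pass the bus around instead of reaching for the global.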
Caveats
There are some things Sprockets doesn’t provide for me, which I wish it did:
- Named imports: I wish there was a good alternative to global namespacing with Sprockets, but there isn't yet. (It's not a deal breaker – it doesn't hurt to be familiar with this constraint because it's the reality of the browser, anyways.)
- Tree shaking: I wish I could only transmit the parts of Underscore.js I actually used!
Perhaps I should read up on Sprockets and submit a patch 😎
Also, there’s one case where copy-pasting isn’t a great solution. Some libraries (like React.js) have separate “development” and “production” builds. The production build has fewer runtime checks than the development build, making it smaller and faster. There are a few solutions to this problem:
- Use a gem which provides the proper file for each environment (like react-rails)
- Add environment-specific folders to the asset pipeline (like react-rails does, I can write more on this if need be)
- Use the development build in production (weigh the costs first: what's the difference in behavior, performance and file size?)
How Are Methods Defined?
Should methods always be declared inside some class or extend block, or just be declared individually using def? There are two basic syntaxes for adding a method to a class. The conventional OOP style is:
class SomeClass
  def someMethod(arg Int ->)
    // ...
  end
end
To add a method to an existing class, just replace class with extend. The other option is Go style, where methods are just declared freestanding, like:
def SomeClass someMethod(arg Int ->)
  // ...
end
Advantages for class style:
- Minimizes duplication when defining a lot of methods. Avoids repeating the class name for each method. With generic methods where the class name is an expression like Dictionary[Key, Value], this can be a bigger deal.
- Familiar to most users.
- If we allow interface declarations to define methods in the main declaration, allowing classes to do the same would be more consistent.
Advantages for Go style:
- Avoids an unneeded level of indentation.
- Emphasizes the openness of classes. Encourages people to add methods to arbitrary classes by making it lightweight to do so.
- Highlights the separation between state (the core class definition) and methods.
Answer: There are advantages both ways. If you're adding a lot of methods to one class, then being able to do that in one block saves a lot of redundant typing, especially with long class names or generic classes:
def AbstractDictionary[Key, Value] blah...
def AbstractDictionary[Key, Value] blah...
def AbstractDictionary[Key, Value] blah...
def AbstractDictionary[Key, Value] blah...
On the other hand, if you're adding a bunch of methods to different classes (i.e. avoiding the visitor pattern), the blocks are tedious:
extend AddExpr
  evaluate(-> Int) left + right
end

extend IntExpr
  evaluate(-> Int) value
end
...
The best solution may be to just support both. | http://magpie.stuffwithstuff.com/design-questions/how-are-methods-defined.html | CC-MAIN-2017-30 | refinedweb | 293 | 58.99 |
Using Vue as an Angular alternative for Ionic: Routing part 1
The basic routing concepts between an Ionic Vue and an Ionic Angular application are the same.
A list of Routes is declared, and they help us navigate to different Views.
In this tutorial, we will see how to create:
- A basic route
- A route with parameters
- A named route
- Some nested views
If you don’t know how to bootstrap an Ionic Vue application, you should go to the first tutorial of the course.
As usual, we start by adding a new library:
npm i vue-router -S
The vue-router plugin is the official Vue router plugin, the equivalent of @angular/router.
Basic example
We will start with a very basic route and pay tribute to Ionic's About section.
We first head to the index.html file:
<div id="app">
  <router-link to="/about">About</router-link>
  <router-view></router-view>
</div>
The router-view Directive will display the content according to the current route; this can be compared to a television.
The router-link helps us navigate to the about view, this is the equivalent of a remote control button.
In order to use those elements, we need to go to the main.ts file where we can link Vue and VueRouter together:
import Vue from "vue";
import VueRouter from "vue-router";

Vue.use(VueRouter);
We can now prepare the routing by creating a simple About Component:
const About = {
  template: "<div>About view</div>",
  mounted: function() {
    console.log("Welcome to the About view");
  },
  destroyed: function() {
    console.log("Thanks for visiting the About view");
  }
};
This Component has a simple template and two lifecycle hooks. Thanks to the mounted and destroyed hooks, we can see the transition when navigating.
We then create an array for the routes:
const routes = [
  { path: "/about", component: About }
];
So far there’s not much difference between the Ionic Vue and the Ionic Angular routing system.
And we finish the configuration:
const router = new VueRouter({ routes });
const app = new Vue({ router }).$mount("#app");
A new VueRouter instance is created then used by the Ionic Vue root instance.
Routes can be more complex. Since the rise of REST, we tend to pass more optional information when navigating. We will take the previous About example and pass a user id.
Routing with params
Going back to the index.html:
<div id="app">
  <router-link to="/about">About</router-link>
  <router-link to="/about/1">About user 1</router-link>
  <router-view></router-view>
</div>
The second route has more information: the user’s id.
We will use this information in the About Component from the main.ts file:
const About = {
  template: "<div>About view</div>",
  mounted: function() {
    console.log("Welcome to the About view");
    const userId = this.$route.params.id;
    if (userId) {
      console.log("You are viewing the user:", userId);
    }
  },
  destroyed: function() {
    console.log("Thanks for visiting the About view");
  }
};
The id is acquired from the context’s $route.params.id property in the mounted hook. If the id is not null, we display the optional information.
We update the routes configuration:
const routes = [
  { path: "/about/:id", component: About }
];
The “/:id” part means that an optional id parameter can follow the “/about” part.
Be careful there, if we want to use a more specific route like “/about/create”, it’s necessary to use this configuration:
const routes = [
  { path: "/about/create", component: About },
  { path: "/about/:id", component: About }
];
The router will first check if the route contains “/create”, if not, it will consider that the rest of the route is an id parameter. If we do the opposite, the “/create” part will always be considered an id and we will never reach the “/create” view.
Always list the routes from the more specific to the more generic (and by alphabetical order if possible). And that’s it, we have our route working!
But wait! That’s not the only way.
We can create a different Contact Component:
const Contact = {
  props: ["id"],
  template: "<div>Contact view</div>",
  mounted: function() {
    console.log("Welcome to the Contact view");
    const userId = this.id;
    if (userId) {
      console.log("You are viewing the user", userId);
    }
  },
  destroyed: function() {
    console.log("Thanks for visiting the Contact view");
  }
};
This one has an id prop and gets the id directly from the component itself.
We only need to specify in the routes configuration that some props are available:
const routes = [
  { path: "/contact/:id", component: Contact, props: true }
];
Named routes
Routing can be quite annoying when the paths are quite long. However, thanks to named routes we can use shorter names as aliases, here is an example:
const routes = [
  { path: "/black", name: "white", component: NamedRoute }
];
This route’s path is “/black”, but it can be called by using the name “white”!
Here is the associated Component:
const NamedRoute = {
  template: "<div>Named route content</div>"
};
And how to use it in the index.html:
<div id="app">
  <router-link :to="{ name: 'white' }">Go to the same named route</router-link>
  <router-view></router-view>
</div>
We need to use an object here with a name property, hence we need to use :to to interpret the content.
Nested views
Sometimes we can end up with just one little part of the view that requires some changes. Creating one new view for each change can be overkill.
Nested views can be useful and performant in this case. They are like a <ng-switch> coupled with an <ng-include>.
The parent’s content stays the same and the sub-views content are shown according to the current route.
Here is how it’s done:
const FirstChild = {
  template: "<div>Hi I'm the first child</div>"
};

const SecondChild = {
  template: "<div>Hi I'm the second child</div>"
};

const ParentComponent = {
  template: `<div>
    Hi I'm the parent, I will stay here
    <router-view></router-view>
    <router-link to="/parent/first-child">First Child</router-link>
    <router-link to="/parent/second-child">Second Child</router-link>
  </div>`
};

const ParentRoute = {
  path: "/parent",
  name: "parent",
  component: ParentComponent,
  children: [
    { path: "first-child", component: FirstChild },
    { path: "second-child", component: SecondChild }
  ]
};

const routes = [
  ParentRoute
];
The Parent Component has a <router-view> Element where its Child Components will be displayed.
We have two Child Components, they are declared in the children property and we navigate between them by using a <router-link> to “/parent/first-child” and “/parent/second-child”.
We can finally add a <router-link> to the parent in our main view:
<div id="app">
  <router-link to="/parent">Go to the nested part</router-link>
  <router-view></router-view>
</div>
Conclusion
Routing is a big part for an Ionic Vue application.
It can be very simple with basic routes; however, the bigger the application gets, the more complex the routing system becomes.
The routes generally follow a functional approach coupled with the REST convention like:
user/:id/delete
It’s wiser to take a step back and set some time aside to create a solid routing system. In the next routing tutorial, we will dive into more advanced concepts like lazy loading, dynamic routes, etc. | https://javascripttuts.com/using-vue-as-an-angular-alternative-for-ionic-routing-part-1/ | CC-MAIN-2019-26 | refinedweb | 1,155 | 61.26 |
This part describes NIS+.
Chapter 3, "Introduction to NIS+"
Chapter 4, "The NIS+ Namespace"
Chapter 5, "NIS+ Tables and Information"
Chapter 6, "Security Overview"
This chapter provides an overview of the Network Information Service Plus (NIS+):
"What NIS+ Can Do for You"
"How NIS+ Differs From NIS"
"NIS+ and the Name Service Switch"
"Solaris 1.x Releases and NIS-Compatibility Mode"
"NIS+ Administration Commands"
Directions for setting up NIS+ and DNS namespaces are contained in Solaris Naming Setup and Configuration Guide. See Glossary for definitions of terms and acronyms you don't recognize.
NIS+ is a network name service similar to NIS but with more features. NIS+ is not an extension of NIS. It is a new software program.
The NIS+ name service is designed to conform to the shape of the organization that installs it, wrapping itself around the bulges and corners of almost any network configuration.
Solaris clients use the name service switch (/etc/nsswitch.conf file) to determine from where a workstation will retrieve network information. Such information may be stored in local /etc files, NIS, DNS, or NIS+. You can specify different sources for different types of information in the name service switch.
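For example, a switch file might tell the client to check local files before NIS+ for users and groups, but to consult NIS+ first (then DNS, then files) for hosts. The entries below are illustrative, not taken from this manual:

```
passwd:    files nisplus
group:     files nisplus
hosts:     nisplus dns files
networks:  nisplus files
```

Each line names an information type followed by the ordered list of sources to consult.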
NIS+ protects the structure of the namespace, and the information it stores, by the complementary processes of authorization and authentication.
Authorization. Every component in the namespace specifies the type of operation it will accept and from whom. This is authorization.
Authentication. NIS+ attempts to authenticate every request for access to the namespace. Requests come from NIS+ principals. An NIS+ principal can be a process, machine, root, or a user. Valid NIS+ principals possess an NIS+ credential. NIS+ authenticates the originator of the request (principal) by checking the principal's credential.
If the principal possesses an authentic (valid) credential, and if the principal's request is one that the principal is authorized to perform, NIS+ carries out the request. If either the credential is missing or invalid, or the request is not one the principal is authorized to perform, NIS+ denies the request for access. An introductory description of the entire NIS+ security system is provided in Chapter 6, Security Overview.
NIS+ provides a full set of commands for administering a namespace. Table 3-2, below, summarizes them.

Table 3-2 NIS+ Namespace Administration Commands
This chapter describes the structure of the NIS+ namespace, the servers that support it, and the clients that use it.
"NIS+ Files and Directories"
"Structure of the NIS+ Namespace"
"NIS+ Clients and Principals"
Table 4-1 lists the UNIX directories used to store NIS+ files.

Table 4-1 Where NIS+ Files are Stored
Do not rename the /var/nis or /var/nis/data directories or any of the files in these directories that were created by nisinit or any of the other NIS+ setup procedures. In Solaris Release 2.4 and earlier versions, the /var/nis directory contained two files named hostname.dict and hostname.log. It also contained a subdirectory named /var/nis/hostname. Starting with Solaris Release 2.5, the two files were named trans.log and data.dict, and the subdirectory was named /var/nis/data. The content of the files was also changed and they are not backward compatible with Solaris Release 2.4 or earlier. Thus, if you rename either the directories or the files to match the Solaris Release 2.4 patterns, the files will not work with either the Solaris 2.4 Release or the current version of rpc.nisd. Therefore, you should not rename either the directories or the files.
With the Solaris operating environment, the NIS+ data dictionary (/var/nis/data.dict) is now machine independent. This allows you to easily change the name of an NIS+ server. You can also now use the NIS+ backup and restore capabilities to transfer NIS+ data from one server to another. See Chapter 16, NIS+ Backup and Restore.
Directory objects are the skeleton of the namespace. When arranged into a tree- like structure, they divide the namespace into separate parts. You may want to visualize a directory hierarchy as an upside-down tree, with the root of the tree at the top and the leaves toward the bottom. The topmost directory in a namespace is the root directory. If a namespace is flat, it has only one directory, but that directory is nevertheless the root directory. The directory objects beneath the root directory are simply called "directories":
A namespace can have several levels of directories:
When identifying the relation of one directory to another, the directory beneath is called the child directory and the directory above is called the parent directory.
Whereas UNIX directories are designed to hold UNIX files, NIS+ directories are designed to hold NIS+ objects: other directories, tables and groups. Each NIS+ domain-level directory contains the following sub-directories:
groups_dir. Stores NIS+ group information.
org_dir. Stores NIS+ system tables.
ctx_dir. This directory is only present if you are using FNS.
Technically, you can arrange directories, tables, and groups into any structure that you like. However, NIS+ directories, tables, and groups in a namespace are normally arranged into configurations called domains. Domains are designed to support separate portions of the namespace. For instance, one domain may support the Sales Division of a company, while another may support the Manufacturing Division.
An NIS+ domain consists of a directory object, its org_dir directory, its groups_dir directory, and a set of NIS+ tables.
NIS+ domains are not tangible components of the namespace. They are simply a convenient way to refer to sections of the namespace that are used to support real-world organizations.
For example, suppose the DOC company has Sales and Manufacturing divisions. To support those divisions, its NIS+ namespace would most likely be arranged into three major directory groups, with a structure that looked like this:
Instead of referring to such a structure as three directories, six subdirectories, and several additional objects, referring to it as three NIS+ domains is more convenient.

Although instructions are provided in Part 2, one thing is important to mention now: when a domain is connected to the servers that support it, the directory object stores the name and IP address of its server. This information is used by clients to send requests for service, as described later in this section.
Any Solaris operating environment based workstation can be an NIS+ server. The software for both NIS+ servers and clients is bundled together into the release. Therefore, any workstation that has the Solaris Release 2 software installed can become a server or a client, or both. What distinguishes a client from a server is the role it is playing. If a workstation is providing NIS+ service, it is acting as an NIS+ server. If it is requesting NIS+ service, it is acting as an NIS+ client.
An NIS+ master server implements updates to its objects immediately; however, it tries to "batch" several updates together before it propagates them to its replicas. When a master server receives an update to an object, whether a directory, group, link, or table, it waits about two minutes for any other updates that may arrive. Once it is finished waiting, it stores the updates in two locations: on disk and in a transaction log (it has already stored the updates in memory).
The transaction log is used by a master server to store changes to the namespace until they can be propagated to replicas. A transaction log has two primary components: updates and time stamps.
An update is an actual copy of a changed object. For instance, if a directory has been changed, the update is a complete copy of the directory object. If a table entry has been changed, the update is a copy of the actual table entry. The time stamp indicates the time at which an update was made by the master server.
After recording the change in the transaction log, the master sends a message to its replicas, telling them that it has updates to send them. Each replica replies with the time stamp of the last update it received from the master. The master then sends each replica the updates it has recorded in the log since the replica's time stamp:
When the master server updates all its replicas, it clears the transaction log. In some cases, such as when a new replica is added to a domain, the master receives a time stamp from a replica that is before its earliest time stamp still recorded in the transaction log. If that happens, the master server performs a full resynchronization, or resync. A resync downloads all the objects and information stored in the master down to the replica. During a resync, both the master and replica are busy. The replica cannot answer requests for information; the master can answer read requests but cannot accept update requests. Both respond to requests with a Server Busy - Try Again or similar message.
NIS+ principals are the entities (clients) that submit requests for NIS+ services.
When a client is initialized, it is given a cold-start file. The cold-start file gives a client a copy of a directory object that it can use as a starting point for contacting servers in the namespace. The directory object contains the address, public keys, and other information about the master and replica servers that support the directory. Normally, the cold-start file contains the directory object of the client's home domain.
A cold-start file is used only to initialize a client's directory cache. The directory cache, managed by an NIS+ facility called the cache manager, stores the directory objects that enable a client to send its requests to the proper servers.
By storing a copy of the namespace's directory objects in its directory cache, a client can know which servers support which domains. (To view the contents of a client's cache, use the nisshowcache command, described in "The nisshowcache Command".) Here is a simplified example:
To keep these copies up-to-date, each directory object has a time-to-live (TTL) field. Its default value is 12 hours. If a client looks in its directory cache for a directory object and finds that it has not been updated in the last 12 hours, the cache manager obtains a new copy of the object. You can change a directory object's time-to-live value with the nischttl command, as described in "The nischttl Command". However, keep in mind that the longer the time-to-live, the higher the likelihood that the copy of the object will be out of date; and the shorter the time to live, the greater the network traffic and server load.
How does the directory cache accumulate these directory objects? As mentioned above, the cold-start file provides the first entry in the cache. Therefore, when the client sends its first request, the request goes to the server specified by the cold-start file. If the request is for access to the domain supported by that server, the server answers the request.
If the request is for access to another domain (for example, sales.doc.com.), the server tries to help the client locate the proper server. If the server has an entry for that domain in its own directory cache, it sends a copy of the domain's directory object to the client. The client loads that information into its directory cache for future reference and sends its request to that server.
In the unlikely event that the server does not have a copy of the directory object the client is trying to access, it sends the client a copy of the directory object for its own home domain, which lists the address of the server's parent. The client repeats the process with the parent server, and keeps trying until it finds the proper server or until it has tried all the servers in the namespace. What the client does after trying all the servers in the domain is determined by the instructions in its name service switch configuration file. See Chapter 2, The Name Service Switch.
Over time, the client accumulates in its cache a copy of all the directory objects in the namespace and thus the IP addresses of the servers that support them. When it needs to send a request for access to another domain, it can usually find the name of its server in its directory cache and send the request directly to that server.
An NIS+ server is also an NIS+ client. In fact, before you can set up a workstation as a server (as described in Part 2), you must first initialize it as an NIS+ client.
Objects in an NIS+ namespace can be identified with two types of names: partially-qualified and fully qualified. A partially qualified name, also called a simple name, is simply the name of the object or any portion of the fully qualified name. If during any administration operation you type the partially qualified name of an object or principal, NIS+ will attempt to expand the name into its fully qualified version. For details, see "NIS+ Name Expansion".
A fully qualified name is the complete name of the object, including all the information necessary to locate it in the namespace, such as its parent directory, if it has one, and its complete domain name, including a trailing dot.
This varies among different types of objects, so the conventions for each type, as well as for NIS+ principals, is described separately. This namespace will be used as an example:
The fully qualified names for all the objects in this namespace, including NIS+ principals, are summarized in Figure 4-3.
A fully qualified domain name is formed from left to right, starting with the local domain and ending with the root domain. The root domain's label is typically one of the Internet organizational identifiers listed in Table 4-2, or a two or three character geographic identifier such as .jp. for Japan.

Table 4-2 Internet Organizational Domains
The second and third lines above show the names of lower-level domains.
Fully qualified table and group names are formed by starting with the object name and appending the directory name, followed by the fully qualified domain name. Remember that all system table objects are stored in an org_dir directory and all group objects are stored in a groups_dir directory. (If you create your own NIS+ tables, you can store them anywhere you like.) Here are some examples of group and table names:
To identify an entry in an NIS+ table, you need to identify the table object and the entry within it. This type of name is called an indexed name. It has the following syntax:

[column=value,column=value,...],tablename
Column is the name of the table column. Value is the actual value of that column. Tablename is the fully qualified name of the table object. Here are a few illustrative examples of entries in a hosts table:

[addr=129.44.2.1],hosts.org_dir.sales.doc.com.
[name=fred],hosts.org_dir.doc.com.
You can use as few column-value pairs inside the brackets as required to uniquely identify the table entry.
Some NIS+ administrative commands accept variations on this syntax. For details, see the nistbladm, nismatch, and nisgrep commands in Part 2.
NIS+ principal names are sometimes confused with Secure RPC netnames. Both types of names are described in the security chapters of Part 2. However, one difference is worth pointing out now because it can cause confusion: NIS+ principal names always end in a dot and Secure RPC netnames never do:

Table 4-3 NIS+ Principal Names
Also, even though credentials for principals are stored in a cred table, neither the name of the cred table nor the name of the org_dir directory is included in the principal name.
You can form namespace names from any printable character in the ISO Latin 1 set. However, the names cannot start with these characters: @ < > + [ ] - / = . , : ;
To use a string, enclose it in double quotes. To use a quote sign in the name, quote the sign too (for example, to use o'henry, type o"'"henry). To include white space (as in John Smith), use double quotes within single quotes, like this:
`"John Smith"`
See "Host Names" for restrictions that apply to host names.
This chapter describes the structure of NIS+ tables and provides a brief overview of how they can be set up.
NIS+ stores a wide variety of network information in tables. NIS+ tables provide several features not found in simple text files or maps. They have a column-entry structure, they accept search paths, they can be linked together, and they can be set up in several different ways. NIS+ provides 16 preconfigured system tables, and you can also create your own tables. Table 5-1 lists the preconfigured NIS+ tables.

Table 5-1 NIS+ Tables
Because it contains only information related to NIS+ security, the cred table is described in Chapter 7, Administering NIS+ Credentials.
These tables store a wide variety of information, ranging from user names to Internet services. Most of this information is generated during a setup or configuration procedure. For instance, an entry in the passwd table is created when a user account is set up. An entry in the hosts table is created when a workstation is added to the network. And an entry in the networks table is created when a new network is set up.
Since this information is generated from such a wide field of operations, much of it is beyond the scope of this manual. However, as a convenience, Appendix C, Information in NIS+ Tables, summarizes the information contained in each column of the tables, providing details only when necessary to keep things from getting confusing, such as when distinguishing groups from NIS+ groups and netgroups. For thorough explanations of the information, consult Solaris system and network administration manuals.
You can create more automounter maps for a domain, but be sure to store them as NIS+ tables and list them in the auto_master table. When creating additional automount maps to supplement auto_master (which is created for you), the column names must be key and value. For more information about the automounter, consult books about the automounter or books that describe the NFS file system.
As a naming service, NIS+ tables are designed to store references to objects, not the objects themselves. For this reason, NIS+ does not support tables with large entries. If a table contains excessively large entries, rpc.nisd may fail.
NIS+ principals can have two types of credential: DES and LOCAL.
#include <DM_MouseHook.h>
A DM_MouseHook creates new DM_MouseEventHook objects when new viewports are created.
Definition at line 91 of file DM_MouseHook.h.
Create a mouse hook which creates mouse event hook instances for specific viewports. Only one mouse hook is ever created, and it is responsible for managing the mouse event hooks for viewports. Each hook requires a name (for error reporting) and a priority level to resolve multiple mouse hook conflicts.
Definition at line 103 of file DM_MouseHook.h.
Definition at line 104 of file DM_MouseHook.h.
Called when a viewport needs to create a new hook.
Each viewport has its own event hook.
Called when a viewport no longer requires the hook.
When a viewport is destroyed, it retires all its hooks. Because a hook could be shared between all viewports, this method gives the mouse hook the opportunity to delete it, dereference it, etc. The viewport doing the retiring is passed in along with the hook it is retiring. | http://www.sidefx.com/docs/hdk/class_d_m___mouse_hook.html | CC-MAIN-2018-43 | refinedweb | 162 | 67.55 |
reinterpret_cast Operator
Allows any pointer to be converted into any other pointer type. Also allows any integral type to be converted into any pointer type and vice versa.
Syntax
reinterpret_cast < type-id > ( expression )
Remarks.
#include <iostream>
using namespace std;

// Returns a hash code based on an address
unsigned short Hash( void *p ) {
   unsigned int val = reinterpret_cast<unsigned int>( p );
   return ( unsigned short )( val ^ (val >> 16));
}

int main() {
   int a[20];
   for ( int i = 0; i < 20; i++ )
      cout << Hash( a + i ) << endl;
}

Output:

64641
64645
64889
64893
64881
64885
64873
64877
64865
64869
64857
64861
64849
64853
64841
64845
64833
64837
64825
64829
See also
Casting Operators
Reading and writing from GPIO ports from Python
This tutorial covers the setup software and hardware to read and write the GPIO pins on a Raspberry Pi running the latest Raspbian operating system.
We will show how to read from a physical push-button from Python code, and control an LED.
Related categories: Tutorial
Step 1: Install Python development tools
Open a terminal on the Raspberry Pi either via the desktop or by SSH’ing in (default credentials are pi/raspberry).
Run the following commands to install some basic Python development tools:
sudo apt-get update
sudo apt-get install python-dev python-pip
sudo pip install --upgrade distribute
sudo pip install ipython
Step 2: Install GPIO library
While the default Raspbian image does include the RPi.GPIO library, we would like to install a newer version to get access to a newer API for callbacks.
sudo pip install --upgrade RPi.GPIO
As of this writing the current version is 0.5.0a, but you may see a more recent version later.
Step 3: Connect the button
For the first part of this we will be using a single push button. Normally the top two pins and bottom two pins are not connected, but when pressing the button a connection is formed, allowing current to flow. We will put a 330 Ohm resistor in-line with the switch to protect the GPIO pin from receiving too much current.
I have used GPIO4 for this example, but any GPIO pin not otherwise in use will work fine, just update the pin number in later code samples.
Important: Never connect GPIO pins to the 5V power supply
The two 5V supply pins on the breakout board are very useful for powering complex chips and sensors, but you must take care to never accidentally use them to directly interface with the GPIO pins. The GPIO system is only designed to handle 3.3V signals and anything higher will most likely damage your Raspberry Pi
Step 4: Read the button from Python
Remember that you must run your interpreter as root (ex. sudo ipython).
First we need to configure the GPIO pins so that the Raspberry Pi knows that they are inputs:
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
The pull_up_down argument controls the state of the internal pull-up/down resistors. Looking back at the circuit diagram above you can see that when the button is not pushed, the GPIO pin is effectively not connected to anything. This is referred to as "floating", and it means that the voltage there can be unpredictable. A pull-down adds an additional resistor between the pin and ground, or, put simply, forces the voltage to be 0 when the button is not pressed.
With the pin configured we can now do a simple read of the button:
print GPIO.input(4)
This should output False if the button is released and True if the button is pressed. Try it a few times each way to make sure your wiring and configuration is correct.
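Polling the button in a loop is the usual next step. The edge-detection logic below is pure Python, so it can be checked without a Pi; the commented lines show where GPIO.input would feed it. This is a sketch for illustration, not part of the RPi.GPIO API:

```python
def detect_presses(samples):
    """Return the indices where polled readings rise from 0 to 1.

    `samples` is any sequence of values read from GPIO.input(4)
    (True/False or 1/0). Pure logic -- no Raspberry Pi required.
    """
    presses = []
    previous = samples[0] if samples else 0
    for i, current in enumerate(samples[1:], start=1):
        if current and not previous:
            presses.append(i)
        previous = current
    return presses

# On the Pi itself you would collect samples in a loop, e.g.:
#   samples.append(GPIO.input(4)); time.sleep(0.01)
print(detect_presses([0, 0, 1, 1, 0, 1]))  # -> [2, 5]: two presses
```

Detecting rising edges this way avoids reporting a single long press as many presses, which a naive `if GPIO.input(4):` loop would do.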
Installing IPython from source is typical for a Python package:

python setup.py install
After interactively entering commands, statements, and so on into the IPython shell like this:
In [1]: a = 1
In [2]: b = 2
In [3]: c = 3
In [4]: d = {}
In [5]: e = []
In [6]: for i in range(20):
   ...:     e.append(i)
   ...:     d[i] = b
   ...:
you can quickly view everything you typed like this:
In [7]: hist
1: a = 1
2: b = 2
3: c = 3
4: d = {}
5: e = []
6: for i in range(20):
    e.append(i)
    d[i] = b
To view the history without the input numbers (here, 1 through 6), use hist -n:
In [8]: hist -n
a = 1
b = 2
c = 3
d = {}
e = []
for i in range(20):
    e.append(i)
    d[i] = b
Using hist -n makes it easier to paste commands into a text editor. To search the history, type a pattern to search for and press Ctrl-P. When something matches, a subsequent Ctrl-P will search backward in your history, and Ctrl-N will search forward.
When testing an idea at a Python prompt, it is sometimes helpful to edit (and, more importantly, to reedit) some lines of source code with a text editor. Type edit from an IPython prompt to bring up the editor defined by the $EDITOR environment variable, or vi on Unix and Notepad on Windows if you don't have $EDITOR defined. To return to the IPython prompt, exit the editor. Saving and exiting will execute, in the current namespace, the code entered into the editor. If you do not want IPython to execute the code automatically, use edit -x. To reedit the last code that you edited, type edit -p. In the previous feature, I mentioned hist -n making it easier to paste code into an editor. An even easier way of putting code into an editor is using edit with Python list slice syntax. Suppose hist yields:
In [29]: hist
1 : a = 1
2 : b = 2
3 : c = 3
4 : d = {}
5 : e = []
6 : for i in range(20):
    e.append(i)
    d[i] = b
7 : %hist
To export lines 4, 5, and 6 into an editor, type:
edit 4:7
Another feature within IPython is its access to the Python debugger. Type the pdb magic word from the IPython shell to toggle automatic debugging upon hitting an exception. With automatic pdb calling enabled, the Python debugger will start automatically when Python encounters an unhandled exception; the current line in the debugger will be the line of code on which the exception occurred. The IPython author states that sometimes when he wants to debug something at a certain line of code, he will put 1/0 at the point he wants to begin debugging, enable pdb, and run the code in IPython. When the interpreter hits the 1/0 line of code, it raises a ZeroDivisionError exception and drops him into a debugging session at that particular line.
Sometimes, when in an interactive shell, it is helpful to execute the contents of a Python source file. Issuing the run magic command followed by a Python source file will run the file in the IPython interpreter (for example, run <options> <python source file>). The following run options are available:

-n prevents the __name__ variable from being set to "__main__" for the Python source file. This prevents the execution of any code in an if __name__ == "__main__": block.

-i runs the file in the current IPython namespace rather than a new one. This is helpful if you want the Python source file to have access to variables defined in the interactive session.

-p runs and profiles the file using the Python profiler module. This option does not bring the executed code into the current namespace.
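The -n flag exists because of Python's standard main-guard idiom. A minimal standalone illustration of the guard itself (ordinary Python, independent of IPython):

```python
# demo.py -- greet() is importable; the guarded block only runs when
# the file is executed directly (python demo.py, or IPython's
# `run demo.py`). With `run -n demo.py`, __name__ is not set to
# "__main__", so the guarded block is skipped.
def greet():
    return "hello from demo"

if __name__ == "__main__":
    print(greet())
```

So `run demo.py` prints the greeting, while `run -n demo.py` only defines greet() without printing anything.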
Macros allow a user to associate a name with a section of Python code so the code can be run later by referring to the name. As with the edit magic word, the list slice notation also works with macro definitions. For example, for a history such as:
In [3]: hist
1: l = []
2: for i in l: print i
you can define a macro with:
In [4]: macro print_l 2
Macro `print_l` created. To execute, type its name (without quotes).
Macro contents:
for i in l: print i
Execute it via:
In [5]: print_l
Out[5]: Executing Macro...
In this case, the list l was empty, so it did not print anything. However, and here is a powerful feature of macros, binding the list l to something and then executing the macro again produces a different result:
In [6]: l = range(5)
In [7]: print_l
Out[7]: Executing Macro...
0
1
2
3
4
It is as if you retyped and executed the code contained in the macro print_l when calling the macro again. It had access to the new binding of the variable l. While macros are absent from Python syntax (and probably always will be), it is certainly a useful feature in an interactive shell.
As mentioned earlier, IPython installs multiple configuration files for several different profiles. The configuration files have a naming convention of ipythonrc-<profile name>. In order to start IPython with a specific profile, execute IPython with:
ipython -p <profile name>
One method of creating your own profile is to create an IPython configuration file in the $HOME/.ipython directory named ipythonrc_<your profile> where <your profile> is the name with which you will refer to your profile. This can be useful if you have several projects you work on and each project requires the use of specific, different libraries. You can create a profile for each project and import the modules you frequently use for that project in the configuration file for each project.
In the default IPython profile, the Unix shell commands (on Unix, of course) cd, pwd, and ls all work like they do from a bash shell. To execute any other shell commands, prepend a ! or !! to them. Use the %sc and %sx magic words to capture the output from shell commands.
The pysh profile is intended as a shell replacement. Starting IPython with a -p pysh flag will cause IPython to accept and execute any commands in the user's $PATH, while at the same time allowing access to all Python modules as well as all Python keywords and built-in functions. For example, to create 500 directories named d_0_d through d_499_d, start IPython with -p pysh and do something like this:
jjones@cerberus[foo]|2> for i in range(500):
                    |.>     mkdir d_${i}_d
                    |.>
This will create 500 directories:
jjones@cerberus[foo]|8> ls -d d* | wc -l
500
Notice the mix of the Python range function and the Unix mkdir command.
Note, however, that while ipython -p pysh can provide a powerful shell replacement, it lacks proper job control. Pressing Ctrl-Z while performing some long-running task will stop the IPython session rather than the running subprocess.
While the Python replacement shell is excellent overall, two things provided a small amount of trouble for me. To the credit of the IPython developers, both items are configurable with clear documentation.
The first item was the coloring. On one of my systems, I use xterms with a white background. When requesting information from an object or module with the ? and ?? operators, the object definition line appeared, but it looked like the arguments were missing. That was because the arguments in the constructor displayed in white by default. I resolved this by entering colors LightBG at the IPython shell.
The second item was the combination of autoindent and pasting code. With autoindent enabled, IPython double-indented a section of code I pasted that already had indentation. For example, the following code:
for i in range(10):
    for j in range(10):
        for k in range(10):
            pass
became:
for i in range(10):
        for j in range(10):
                for k in range(10):
                        pass
which really was not a problem in this case, because the indentation was consistent with itself. In other circumstances (examples of which elude me just now), it may present a real problem. Invoking the autoindent magic word toggles autoindent so it will not add extra indents--similar to set paste in vim.
IPython is not revolutionary, nor is it entirely novel. Tab completion, searchable history, profiles, and config files have existed in other shells for years, and Python has had levels of introspection for quite some time now. However, IPython has unified some of the most powerful features of mature Unix shells, the Python standard shell, and the Python language into one utility. The result is an unbelievably powerful performance-enhancing tool that I will likely use for years to come. To paraphrase Archimedes, give me a powerful and flexible text editor (vim), interactive shell (IPython), and language (Python), and I can move the world.
Jeremy Jones is a software engineer who works for Predictix. His weapon of choice is Python.
I was really glad to find out that my presentation about Scala and Java 8 was retweeted more than other similar content. That’s why I decided to make some notes and tell you about it. We are going to talk about the difference between Scala and Java and say why each of them is important. There are mutual innovations. Each language has borrowed something from each other. The question is: do you have to learn Scala if there is Java available? Definitely! The more languages you know, the more professional you are.
If we ever ask a Scala engineer about the principal differences between Scala and Java, he will not tell you all the nuances of lambda functions and traits. Instead, he will provide the following example:
Java
public class Person {
    private String firstName;
    private String lastName;

    String getFirstName() { return firstName; }
    void setFirstName(String firstName) { this.firstName = firstName; }
    String getLastName() { return lastName; }
    void setLastName(String lastName) { this.lastName = lastName; }

    int hashCode() { .... }
    boolean equals(Object o) { .... }
}
Scala
case class Person(firstName:String, lastName:String)
Thus, one line in Scala corresponds to twenty lines in Java. On the other hand, the lack of compactness is characteristic not only for Java as a language, but also for a culture that has formed in the world of Java developers. Actually, we can write it this way:
public class Person extends DTOBase {
    public String firstName;
    public String lastName;
}
hashCode and equals in the DTO-base are redefined with the help of reflection. Since Scala has proven that compactness is really promising, I can write a field without getters and setters and will not get it in the neck. It means that development of the idiomatic Java language moves towards compactness as well.
Java 8 introduces some innovations to make the functional programming style handy. At first sight, the innovations repeat some corresponding structures of Scala. For example:
- lambda expressions (anonymous functions);
- default methods in interfaces (like traits in scala);
- stream operations on collections.
Let’s review them in details.
Lambda Expressions
Java
list.sort((x,y) -> {
    int cmp = x.lastName.compareTo(y.lastName);
    return cmp != 0 ? cmp : x.firstName.compareTo(y.firstName);
});
Scala
list.sort((x,y) => {
    val cmp = x.lastName.compareTo(y.lastName)
    if (cmp != 0) cmp else x.firstName.compareTo(y.firstName)
})
We can see that the code is really similar, but:
Scala
var (maxFirstLen, maxSecondLen) = (0,0)
list.foreach{ x =>
    maxFirstLen = max(maxFirstLen, x.firstName.length)
    maxSecondLen = max(maxSecondLen, x.secondName.length)
}
Java
[?] (it is impossible to modify the content the lambda expression has been called from).
Thus, lambda expressions in Java are syntactic sugar over anonymous classes that have access to the final objects of a context only. But in Scala they are full-on closures that have the full access to the context.
Default Methods in Interfaces
Another feature borrowed by Java from Scala, is the default methods in interfaces. They correspond to traits in Scala in a way.
Java
interface AsyncInput<T> {
    void onReceive(Acceptor<T> acceptor);

    default Future<T> read() {
        final CompletableFuture<T> promise = new CompletableFuture<>();
        onReceive( x -> promise.complete(x) );
        return promise;
    }
}
Scala
trait AsyncInput[T] {
    def onReceive(acceptor: T => Unit): Unit

    def read: Future[T] = {
        val p = Promise[T]()
        onReceive(p.success(_))
        p.future
    }
}
At first glance, they are the same, but:
Scala
trait LoggedAsyncInput[T] extends AsyncInput[T] {
    abstract override def onReceive(acceptor: T => Unit) =
        super.onReceive(x => {
            println(s"received: ${x}")
            acceptor(x)
        })
}
Java
[?] (Java does not provide such functionality. An aspect approach could be a sort of an analogue here)
Another example (a bit less important):
Scala
trait MyToString {
    override def toString = s"[${super.toString}]"
}
Java
[?] (it is impossible to overload object methods in the interface).
We can see that the structures of trait and default methods are quite different. In Java, it’s the specification of the call dispatching. As for Scala, a trait is a more general structure that specifies the build of a final class with the help of linearization.
Stream Operations on Collections
The third Java 8 innovation is the stream interface to collections, which resembles a standard Scala library in design.
Java
peoples.stream().filter( x -> x.firstName.equals(”Jon”)).collect(Collectors.toList())
Scala
peoples.filter(_.firstName == “Jon”)
They are really similar, but in Java we should first get a stream interface from collection and then convert it to the result interface. The main reason for it is the interface encapsulation.
It means that if Java does already have quite a full non-functional API of collections, it’s inappropriate to add another functional interface into it (with regard to the API design and the simplicity of its modification, use and understanding). So, it’s the price we pay for the slow evolutionary development.
But let’s keep comparing:
Java
persons.parallelStream().filter( x -> x.person==”Jon”).collect(Collectors.toList())
Scala
persons.par.filter(_.person==”Jon”)
Solutions are very similar here. We can create a “parallel” stream in Java. As for Scala, we can create a “parallel” collection.
The access to the SQL databases:
Scala
db.persons.filter(_.firstName === “Jon”).toList
There is an analog in the Java ecosystem. We can write the following:
dbStream(em,Person.class).filter(x -> x.firstName.equals(“Jon”)).toList
It's interesting to take a look how the representation of collections in database tables is implemented in both cases.
In Scala, operations have types of operations on the data. Giving a rough description of types:
persons is of TableQuery[PersonTable] type
In which PersonTable <: Table[Person], that has a structure with firstName and lastName methods.
firstName === lastName is a binary operation === (we can define our own infix operations in Scala ), that is a type similar to Column[X] * Column[Y] => SqlExpr[Boolean].
filter SqlExpr[Boolean] Query[T]
has
filter: SqlExpr[Boolean] => Query[T]
method and some method to generate the SQL. Hence, we can express something as an expression over Table[Person], which is a Person representation.
It’s quite simple and even trivial in a way.
Now, let’s take a look how this functionality is implemented in jinq:
dbStream(em,Person.class).filter(x -> x.firstName.equals(“Jon”)).toList
x type is Person here, and x.firstName is a String. filter method accepts Person -> Boolean function as a parameter. But how do we generate the SQL from it?
filter analyzes the bytecode. There’s a sort of bytecode interpreter that executes instructions “symbolically”. The execution result is the route of calling getters and functions, which helps to build an SQL.
Actually, it’s a really nice idea. However, all of this is executed dynamically during the runtime. Thus, it takes quite a lot of time. If inside of our filter method we try to use a function not from a fixed list (which we do not know how to build an SQL for), we will also realize it during the runtime only.
Thus, the code in Scala is more or less trivial. Meanwhile, we use quite sophisticated technologies in Java.
That’s it about the Scala borrowings in Java. As you can see, Java «versions of features» really differ from Scala.
Scala Borrowings from Java
It’s time to talk about Scala borrowings from Java 8. The process of innovations is bi-directional. There is one Java 8 innovation that has been borrowed by Scala. In 2.11 version, we can turn it on with the help of compiler option. As for 2.12, it is there by default. We are talking about the SAM conversion.
Look at the two code fragments:
Java
interface AsyncInputOutput<T> {
    void onReceive(Acceptor<T> acceptor);
    void onSend(Generator<T> generator);
}
Scala
trait AsyncInputOutput[T] {
    def onReceive(f: T => Unit): Unit
    def onSend(f: Unit => T): Unit
}
As we can see, types and methods parameters in Java are Acceptor and Generator. At the bytecode level they are represented as corresponding classes. As for Scala, they are T=>Unit, and Unit=>T functions, which are represented as Function1.class.
A SAM type (Single Abstract Method) is a class or an interface containing one abstract method. If a method accepts a SAM type as a parameter in Java, we can pass a function. The situation is different in Scala up to version 2.11: a function there is a subclass of Function[A,B].
At first glance, the changes are not really significant. Except for the fact that we will be able to describe the functional API in an object-oriented way. In practice, this feature has a very important application – using SAM interfaces in those parts that are critical to time. Why? Efficiency of the bytecode execution by the JIT interpreter depends on its ability to execute the aggressive inlining.
But if you work with functional interfaces, parameters classes look like Function1 for any function with one parameter, Function2 for all functions with two parameters, etc. They are definitely not easy to inline. That’s why there was a hidden problem. We’d better not use functional interfaces in the critical to time low-level parts of the code, as JIT will not be able to inline them. Using SAM, we can rewrite them via local SAM types, and the compiler can inline them. Thus, the problem will disappear.
Though we will have to change the already existing code. We could rewrite some things (such as interfaces of collections) via SAM, but combine the new and the old interfaces in such a way, so that everything would look together.
We have observed the same problem in Java, when talking about the interfaces of collections. This allows to see how the evolution works. They have improved Java in a way. It is not perfect, but better than it used to be. They have improved Scala in another way, and it is not perfect either. Thus, there are two «bugs» in both languages. They are caused by the slow adaptation. There is a space for another language that can provide a “perfect” interface for some period of time. That’s the way evolution works.
We can separate Scala structures, that are absent in Java, into 2 groups:
- Those to be used in Java-9,10,11,12 (if we ever see those releases and Java is still interesting to anyone). That's the logic of development, just like Fortran-90 has become object-oriented.
- Those showing the difference between Java and Scala ideology.
As for the first group, we can name case classes and automatic type inference. All other things will go to the second one.
At the very beginning, we used the following code fragment:
case class Person(firstName: String, lastName: String)
Why are case classes named case? Because we can use them with a match/case operator:
p match { case Person(“Jon”,”Galt” ) => “Hi, who are you ?” case Person(firstName, lastName) => s”Hi, ${firstName}, ${lastName}” case _ => “You are not person” }
The first case responses to Jon Galt name. The second one will respond to any other Person values. In addition to that, in the second case there are firstName and lastName names introduced. It’s called an ML-style pattern matching. ML-style is because it’s the first structure to be proposed in ML language that was invented in 1973.
Nowadays, most of “new” languages, such as Scala, Kotlin, Ceylon, Apple Swift, support it.
Scala Peculiarities
So, what is the main Scala peculiarity? What capacities, that are absent in Java, does it provide? The answer is the ability of building internal DSL [Domain Specific Language]. Thus, Scala is so structured, that we could build a strictly typed model for every object area, and then express it in language structures.
These structures are built in the statically-typed environment. What are basic features that allow us to build such structures?
- flexible syntax, syntactic sugar
- the syntax of passing parameters by name
- macros
Let’s start with the syntax flexibility. What does it mean in practice?
1. Methods can have any names:
def +++(x:Int, y:Int) = x*x*y*y
2. Infix method calls for methods with one parameter:
1 to 100 == 1.to(100)
3. The only difference between round and curly brackets is that curly brackets may have multiple expressions in it. But still, one parameter calls are the same:
future(1) == future{1}
4. We can define functions with several lists of arguments:
def until(cond: =>Boolean)(body: => Unit) : Unit
5. We can pass a block of code as a function argument, so that it will be called each time, when the corresponding argument is triggered (passing arguments “by name”)
def until(cond: => Boolean)(body: => Unit): Unit = {
    body
    while(!cond) { body }
}

until(x == 10)(x += 1)
Let’s make a DSL for Do/until:
object Do {
    def apply(body: => Unit) = new DoBody(body)
}

class DoBody(body: => Unit) {
    def until(cond: => Boolean): Unit = {
        body
        while(!cond) body
    }
}
Now, we can use something like this:
Do { x += 1 } until ( x == 10 )
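For contrast, a language without by-name parameters has to pass explicit zero-argument callables to build the same DSL. A hypothetical Python rendering (every name here is invented for illustration, and it is far less seamless than the Scala syntax):

```python
def do_until(body, cond):
    """Run body once, then repeat it until cond() becomes true --
    the same do/while shape as the Scala DSL above, except that
    body and cond must be wrapped in callables by hand."""
    body()
    while not cond():
        body()

state = {"x": 0}
do_until(lambda: state.update(x=state["x"] + 1),
         lambda: state["x"] == 10)
print(state["x"])  # -> 10
```

The lambdas play the role that by-name parameters play in Scala: they delay evaluation so the condition and body can be re-run on each iteration.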
Another feature allowing to create DSL is a special syntax for some dedicated functions.
For instance, the following expression:
for(x <- collection){ doSomething }.
is just a syntax to call the method:
collection.foreach(x => doSomething)
So, if we write our own class with foreach method accepting a certain function ( [X] => Unit ), we will be able to use foreach syntax for our own type.
The same is for for/yield (for map) structure, and nested iterations (flatMap) conditional operators in the loop:
Therefore,
for(x <- fun1 if (x.isGood); y <- fun2(x) ) yield z(x,y)
is just another syntax for
fun1.withFilter(_.isGood).flatMap(x => fun2.map(y => z(x,y)))
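Python comprehensions desugar along the same lines — a loose analogue of the for/yield translation, not the Scala mechanism itself (fun1, fun2, is_good, and z are made-up stand-ins):

```python
fun1 = [1, 2, 3, 4]
is_good = lambda x: x % 2 == 0   # plays the role of the `if` guard
fun2 = lambda x: [x, x * 10]     # the nested generator
z = lambda x, y: x + y

# comprehension form (cf. Scala's for ... yield)
a = [z(x, y) for x in fun1 if is_good(x) for y in fun2(x)]

# explicit desugared form (cf. withFilter / flatMap / map)
b = []
for x in fun1:
    if is_good(x):
        for y in fun2(x):
            b.append(z(x, y))

print(a)          # -> [4, 22, 8, 44]
print(a == b)     # -> True
```

In both languages the surface syntax is rewritten into calls on the underlying collection, which is why user-defined types that provide the right methods can participate.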
By the way, there is a Scala extension called Scala-virtualized. It is a stand-alone project. Unfortunately, it will hardly become a part of the Scala standard. All syntactic structures are virtualized there: if, match and others. We could put in absolutely different semantics. The following are examples of applications: generating code for a GPU, a specialized language for machine learning, translation to JavaScript.
At the same time, program compilation to Javascript exist in a current ecosystem. Functionality has been moved to a scala.js Scala compiler that is able to generate JavaScript. You can use it at any time.
Another useful Scala feature is macros. It is a program code conversion during the compilation time. Let’s take a look at a simple example:
object Log {
    def apply(msg: String): Unit = macro applyImpl

    def applyImpl(c: Context)(msg: c.Expr[String]): c.Expr[Unit] = {
        import c.universe._
        val tree = q"""if (Log.enabled) { Log.log(${msg}) }"""
        c.Expr[Unit](tree)
    }
}
Log(message) expression will be replaced with:
if (Log.enabled) { Log.log(message) }
Why are they useful?
First of all, with the help of macros, we can generate the so-called ‘boilerplate’ code, which is quite obvious, but still needs to be written somehow. We can name xml/json converters or case classes mapping to databases. In Java boilerplate, we can also «shorten» the code by using reflections. But this will impose some restrictions in places that are critical to execution speed, as reflections are not free.
Secondly, using macros, we can make more significant changes in programs, rather than just passing functions. We can actually implement our own implementation of structures, or just rewrite them globally.
We can name async interfaces as an example. A copy of C# interface, i.e. in the middle of async block:
async {
    val x = future{ long-running-code-1 }
    val y = future{ long-running-code-2 }
    val z = await(x) + await(y)
}
Reading this code block directly, we will see that x and y will run the calculations. Then, z will be waiting for the calculations completion. In fact, the code in async is rewritten in a way that all context switches are non-blocking.
It’s interesting that async/await API is made as a library of macros. So, we would need to release a new compiler version in C#, whilst in Scala we can just write a library.
jscala is another example. It's a macros converting the subset of the Scala code to JavaScript. So, if you want do some frontend development and do not feel like switching to JavaScript, you can still do it with Scala and the macros will take care of the rest.
Summary
To sum up, we can say that it is more or less reasonable to compare Java and Scala in the sphere of their operation on the existing content, where the level of abstraction is classes and objects. When it is necessary to increase the abstraction level and describe something new, we can think out an internal DSL for Scala. As for Java, we can try to apply such solutions as building an external DSL or the aspect-oriented programming.
It would be wrong to say that some approach is definitely better in all situations. In Java, we should feel the moment when we leave the boundaries of a language and should build some infrastructure. In Scala, we can build the infrastructure “in the language”.
There are plenty of internal problems in Scala. Its facilities are sometimes imbalanced, and it’s quite a long story. There are plenty of experimental solutions, and it would be great to see them in the main vector of development. It seems that we have entered a new world and can see the dimension building abilities, as well as all the problems of the current structure. There is simply no such dimension in Java.
It's yet another convenience (aka write less) taken too far.
Thanks for getting back to me.
I'd love to see similar articles about Groovy, Kotlin, Ceylon and Fantom! :)
Hello Customer
What date did you file your return? Was your return mailed or filed electronically?
Hello again Customer
On your taxes for 2006, did you owe money or penalties?
Have you received a notice from the IRS indicating what your refund was applied to?
There could be several reasons why you did not receive your portion of any refund that was due and your portion of the stimulus rebate. Sometimes even a small error in filling out the Form 8379 can delay the processing or even have it denied.
You may want to check your copy of the tax return itself and the Form 8379 to check for accuracy. According to the IRS instructions, you should have entered "Injured Spouse" in the upper left hand corner on page 1 of your joint return. The information you entered on the Form 8379 should also be in the same order as the information on your tax return (if your husband's name and social security number were listed first on your tax return then they should have also been on the first line of the Form 8379). You had to include copies of all W-2 forms or 1099 forms that you received for each spouse.
If after checking your copies of what was filed you find any errors, you should fill out a corrected form and submit it to the IRS to the same address as where you sent your joint return. If you do not find any errors on your forms, then you should contact the IRS again to find out if they received your Injured Spouse form and, if they did, why it was denied. The child support office would not have that information. All the child support office could tell you (or, I suppose, your husband) is how much money was received from the IRS to offset past due child support amounts.
If this was helpful please press the Accept button. Positive feedback is also appreciated.
Thanks and good luck in getting this resolved.
I assume that when you refer to "other pages" you are talking about your attachments such as Schedule A or Schedule B. On those schedules you would only list the social security number of the person listed first on your Form 1040, so I don't think that is the problem.
If your both your form 1040 and your form 8379 have your husband's name and social security number listed first, and then your name and social security number listed second, then it sounds like you completed that part properly. On the form 8379 next to the names, there are also boxes to check to indicate which one of you is the injured spouse. Make sure you checked the box next to your name and not your husband's.
If everything else appears correct, contact the IRS at (NNN) NNN-NNNN to see if they even acknowledge receiving your Form 8379, and if it has been processed.
Also, I just noticed that you are from the state of California. Keep in mind that is a community property state, so all of your income and expenses will be treated as they were earned or expensed 50/50 by each spouse. That shouldn't keep you from getting a portion of the refund, but it may not be the actual percentage of your earnings since income from community property states is calculated differently. Basically you would be entitled to 50% of any refund that was due, but no more than that amount. Also, California itself does not recognize injured spouse claims, so if you had any state refund coming it would still be kept by the state to offset any child support debt.
Thank you. | http://www.justanswer.com/tax/1cdfz-bread-winner-family-husband-no.html | CC-MAIN-2015-06 | refinedweb | 631 | 67.38 |
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
>>>>> "Joel" == Joel Brobecker <brobecker@adacore.com> writes:

>> +#ifdef PYTHONDIR
>> +  PyModule_AddStringConstant (gdb_module, "pythondir", PYTHONDIR);
>> +#else
>> +  if (gdb_datadir)
>> +    PyModule_AddStringConstant (gdb_module, "datadir", gdb_datadir);
>> +#endif

Joel> Can we change that, for instance by assuming that we always have
Joel> PYTHONDIR, and that by default, configure will set to the datadir?

This code is gone from the latest revision of the patch.  We don't
actually need the pythondir stuff until we have a library of Python code
installed with gdb, and this patch series doesn't add that.  But, yes, I
think we can clean this up when the time comes.

Joel> I was going to comment on the ..._RELOCATABLE macro, but I see that
Joel> we're already using that idiom for at least the system gdbinit file.

Yeah.  I cleaned all this up a bit in the latest patch as well.  That is,
I unified almost all of the cut-and-pasted code.

Tom
How to Catch Exceptions in Java
Whenever you use a statement that might throw an exception in Java, you should write special code to anticipate and catch the exception. That way, your program won’t crash if the exception occurs.
You catch an exception by using a try statement, which has this general form:
try {
    statements that can throw exceptions
} catch (exception-type identifier) {
    statements executed when exception is thrown
}
Here, you place the statements that might throw an exception within a try block. Then you catch the exception with a catch block.
Here are a few things to note about try statements:
You can code more than one catch block. That way, if the statements in the try block might throw more than one type of exception, you can catch each type of exception in a separate catch block.
For scoping purposes, the try block is its own self-contained block, separate from the catch block. As a result, any variables you declare in the try block are not visible to the catch block. If you want them to be, declare them immediately before the try statement.
You can also code a special block (called a finally block) after all the catch blocks. For more information about coding finally blocks.
The various exception classes in the Java API are defined in different packages. If you use an exception class that isn’t defined in the standard java.lang package that’s always available, you need to provide an import statement for the package that defines the exception class.
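Tying those notes together, here is a hedged sketch (the class and method names are illustrative, not from the examples that follow) that declares its result variable before the try so every block can assign it, catches two different exception types, and adds a finally block that runs either way:

```java
public class CatchDemo {
    static String describe(int a, int b, int[] data, int index) {
        String result; // declared before the try so the catch blocks can assign it
        try {
            result = "quotient=" + (a / b) + ", element=" + data[index];
        } catch (ArithmeticException e) {
            result = "arithmetic problem";
        } catch (ArrayIndexOutOfBoundsException e) {
            result = "bad index";
        } finally {
            // a finally block runs whether or not an exception was thrown
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(describe(10, 2, new int[]{7}, 0)); // quotient=5, element=7
        System.out.println(describe(10, 0, new int[]{7}, 0)); // arithmetic problem
        System.out.println(describe(10, 2, new int[]{7}, 5)); // bad index
    }
}
```

Both exception classes used here live in java.lang, so no import statement is needed.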
A simple example
To illustrate how to provide for an exception, here’s a program that divides two numbers and uses a try/catch statement to catch an exception if the second number turns out to be zero:
public class DivideByZero {
    public static void main(String[] args) {
        int a = 5;
        int b = 0; // you know this won’t work
        try {
            int c = a / b; // but you try it anyway
        } catch (ArithmeticException e) {
            System.out.println("Oops, you can’t " + "divide by zero.");
        }
    }
}
Here, the division occurs within a try block, and a catch block handles ArithmeticException. ArithmethicException is defined by java.lang, so an import statement for it isn’t necessary.
When you run this program, the following is displayed on the console:
Oops, you can’t divide by zero.
There’s nothing else to see here.
Another example
Here is a simple example of a program that uses a method to get a valid integer from the user. If the user enters a value that isn’t a valid integer, the catch block catches the error and forces the loop to repeat.
import java.util.*;

public class GetInteger {
    static Scanner sc = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.print("Enter an integer: ");
        int i = GetAnInteger();
        System.out.println("You entered " + i);
    }

    public static int GetAnInteger() {
        while (true) {
            try {
                return sc.nextInt();
            } catch (InputMismatchException e) {
                sc.next();
                System.out.print("That’s not " + "an integer. Try again: ");
            }
        }
    }
}
Here the statement that gets the input from the user and returns it to the methods called is coded within the try block. If the user enters a valid integer, this statement is the only one in this method that gets executed.
If the user enters data that can’t be converted to an integer, however, the nextInt method throws an InputMismatchException. Then this exception is intercepted by the catch block — which disposes of the user’s incorrect input by calling the next method, as well as by displaying an error message. Then the while loop repeats.
Here’s what the console might look like for a typical execution of this program:
Enter an integer: three
That’s not an integer. Try again: 3.001
That’s not an integer. Try again: 3
You entered 3
Here are a couple other things to note about this program:
The import statement specifies java.util.* to import all the classes from the java.util package. That way, the InputMismatchException class is imported.
The next method must be called in the catch block to dispose of the user’s invalid input because the nextInt method leaves the input value in the Scanner’s input stream if an InputMismatchException is thrown. If you omit the statement that calls next, the while loop keeps reading it, throws an exception, and displays an error message in an infinite loop.
This error was found the hard way. (The only way to make it stop is to close the console window.) | https://www.dummies.com/programming/java/how-to-catch-exceptions-in-java/ | CC-MAIN-2019-13 | refinedweb | 754 | 63.59 |
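You can watch that disposal behavior without a console by pointing a Scanner at a fixed string instead of System.in; here the string "three 3" is a stand-in for what a user might type (a sketch, not part of the original program):

```java
import java.util.InputMismatchException;
import java.util.Scanner;

public class NextDisposalDemo {
    static int getAnInteger(Scanner sc) {
        while (true) {
            try {
                return sc.nextInt();
            } catch (InputMismatchException e) {
                sc.next(); // discard the bad token so the loop can make progress
            }
        }
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner("three 3");
        System.out.println(getAnInteger(sc)); // skips "three", prints 3
    }
}
```

Comment out the sc.next() call and the loop never advances past "three", which is exactly the infinite loop described above.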
TL;DR: This post will get you started with the Keras deep learning framework without installation hassles. I will show you how easy it is to run your code on the cloud for free.
I know there are lots of tutorials out there to get you up and running with deep learning in Keras. They normally go with an image classifier for the MNIST handwritten digits or cat/dog classification. Here I wanted to take a different but maybe a more interesting approach by showing you how to build a model that can recommend a place you might be interested in given a source image you like.
Let's get started!
If you come from a programming background in general, you may remember the pain of starting something new for the first time.

Installing an IDE, library dependencies, hardware driver support... these can cost you a lot of time before the first successful run of "Hello World!".

Deep learning is the same: it depends on lots of things to make a model work. For example, in order to train and run a deep learning model faster, you need a graphics card, which could easily take beginners several hours to set up, let alone that you would have to choose and purchase the graphics card itself, which can be quite costly.
Today you can eliminate the initial learning curve of deep learning. It is now possible to run your code entirely in the cloud with all necessary dependencies pre-installed for you. More importantly, you can run your model faster on a graphics card for free.
At this point, I'd like to introduce Google Colab since I found it very useful to share my deep learning code with others where they can reproduce the result in a matter of seconds.
All you need is a Gmail account and an internet connection. The heavy lifting computation will be handled by Google Colab servers.
You will need to get comfortable with Jupyter notebook environment on Colab. Which is quite easy, you tap the play button at the left side of a cell to run the code inside. You can run a cell multiple times if you want.
Let's supercharge the running speed of a deep learning model by activating the GPU on colab.
Click on the "Runtime" menu button, then "Change runtime type", choose GPU in the "Hardware accelerator" dropdown list.
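To double-check that a GPU is actually attached to the runtime, one hedged option is to look for the nvidia-smi tool, which Colab's GPU runtimes typically expose (this is an environment probe, not part of the tutorial's original steps):

```python
import shutil
import subprocess

def gpu_available():
    """Return True when an NVIDIA GPU is visible via the nvidia-smi tool."""
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False  # no NVIDIA driver/tooling on this runtime
    return subprocess.run([smi], capture_output=True).returncode == 0

print("GPU available:", gpu_available())
```

On a CPU-only runtime this simply prints False.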
We are ready for the journey! Now buckle up since we are going to enter the wild west of the deep learning world.
The model we are introducing can tell which places an image contains.
Or described more formally, the input of the model is the image data and the output will be a list of places with different probabilities. The higher the probability, the more likely the image contains the corresponding scene/place.
The model can classify 365 different places, including coffee shop, museum, outdoor etc.
Here is the Colab notebook for this tutorial; you can experiment with it while reading this article: Keras_travel_place_recommendation-part1.ipynb
The most important building block of our model is the convolutional network which will play the role of extracting image features. From more general low-level features like edges/corners to more domain specific high-level features like patterns and parts.
The model will have several blocks of convolutional networks stacked one over another. The deeper the convolutional layer, the more abstract and higher level features it extracts.
Here is an image showing the idea.
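The edge-detection intuition can be reproduced with plain NumPy: convolving a toy image with a hand-made step kernel responds most strongly where brightness changes, which is roughly what the earliest convolutional filters learn (a sketch with made-up values, not the network's actual learned weights):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Minimal 'valid' 2-D convolution (really cross-correlation, as in deep learning)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# 4x4 toy image: dark left half, bright right half
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])  # fires on left-to-right brightness steps
print(conv2d_valid(img, edge_kernel))  # nonzero only at the dark/bright boundary
```

Stacking many learned kernels like this one, layer after layer, is what lets deeper blocks respond to increasingly abstract patterns.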
Now enough with the intuition of convolutional network. Let's get our hands dirty by building a model to make it happen.
It is really easy to build a custom deep learning model with the Keras framework. Keras is designed for human beings, not machines. It is also an official high-level API for the most popular deep learning library, TensorFlow. If you are just getting started and looking for a deep learning framework, Keras is the right choice.
Don't panic if this is your first time seeing Keras model code. Actually, it is quite simple to understand. The model has several blocks of convolutional layers. Each block, as we explained earlier, extracts a different level of image features. For example, "Block 1", being at the input level, extracts entry-level features like edges and corners. The deeper it goes, the more abstract the features each block extracts. You will also notice the final classification block formed by two fully connected Dense layers; they are responsible for making the final prediction.
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import SGD

model = Sequential()

# Block 1
model.add(Conv2D(64, (3, 3), input_shape=(3, 224, 224), activation='relu', padding='same', name='block1_conv1'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

# Block 2
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

# Block 3
model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

# Block 4
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

# Block 5
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

# Classification block
model.add(Flatten())
model.add(Dense(4096, activation='relu', name='fc1'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu', name='fc2'))
model.add(Dropout(0.5))
model.add(Dense(365, activation='softmax'))

# Load pre-trained model weight parameters
model.load_weights('models/places/places_vgg_keras.h5')

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy')
Let's have the model predict labels for an image.
The model expects a fixed shape of image input, which is 224 x 224 pixels with three color channels (RGB). But what if we have another image with a different resolution? Keras has some helper functions that come in handy.
The code below turns an image into the data array, followed by some data value normalization before feeding to the model.
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
import numpy as np  # needed for np.expand_dims below

img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
Then we feed the processed array of shape (3, 224, 224) to the model; the 'preds' variable is a list of 365 floating point numbers corresponding to 365 places/scenes.
Take the top 5 predictions and map their indexes to the actual names of places/scenes.
preds = model.predict(x)[0]
top_preds = np.argsort(preds)[::-1][0:5]
results = []
for x in top_preds:
    # 'labels' maps each index to its place/scene name (defined elsewhere in the notebook)
    results.append(labels[x])
print(results)
And here is the result.
['beach', 'lagoon', 'coast', 'ocean', 'islet']
Feel free to try with other images.
We have learned how easy it is to get a deep learning model that predicts places/scenes up and running quickly with Google Colab. Read the second part of the tutorial, I am going to show you how to extract raw features from images and use that to build a travel recommendation engine.
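As a preview of what that recommendation engine boils down to: once each image is reduced to a feature vector, recommending places is just ranking candidates by vector similarity. Here is a hedged sketch with tiny made-up vectors standing in for real extracted features:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-D "features"; real ones would come from the network's fully connected layers.
query = np.array([1.0, 0.0, 1.0])            # features of the image the user likes
candidates = {
    "beach":  np.array([0.9, 0.1, 1.1]),
    "museum": np.array([0.0, 1.0, 0.0]),
}
ranked = sorted(candidates, key=lambda k: cosine_similarity(query, candidates[k]), reverse=True)
print(ranked)  # ['beach', 'museum']
```

The real engine in part 2 swaps these toy vectors for activations extracted from the model above.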
At the meanwhile check out some resources that might be helpful.
Keras documentation, especially the sequential model API part.
If you want to upload your custom images to Colab, read the section "Predict with Custom Images" in one of my previous posts.
Hello
This year I am taking OOP as a core course and I am starting with C#..
So my first code to accomplish is to make an array of information about different types of cars..
In brief, it consists of 2 forms. The first form is where the user enters the car information accompanied by its index in the array (like 0, 1, 2), with a button to submit it.

In the second form, which is opened from the first via a button, the user enters the car index and presses a button to view the data in a multi-line text box.
I am still learning, so I haven't gone far with the code but this..
Source Codes:
1. Car.cs
Accomplishments : Creating constructors for the array, ReadInfo to read the inputs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Arrays_Objects
{
    class car
    {
        string factory, modelname;
        int year, tanksize;
        double price;

        public car()
        {
        }

        public car(string Factory, string ModelName, int Year, int TankSize, double Price)
        {
            factory = Factory;
            modelname = ModelName;
            year = Year;
            tanksize = TankSize;
            price = Price;
        }

        public void ReadInfo(string Factory, string ModelName, int Year, int TankSize, double Price)
        {
            factory = Factory;
            modelname = ModelName;
            year = Year;
            tanksize = TankSize;
            price = Price;
        }
    }
}
2. Form1.cs
Accomplishments : Creating array and allocating it. Adding some options for the user to select from. Linking Form2 to Form1
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace Arrays_Objects
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            car[] y = new car[10]; // Creating an Array of 10
            for (int k = 0; k < 10; k++) // Allocating the Arrays
            {
                y[k] = new car();
            }
        }

        private void label1_Click(object sender, EventArgs e)
        {
        }

        private void comboBox2_SelectedIndexChanged(object sender, EventArgs e)
        {
            comboBox3.Items.Clear();
            if (comboBox2.Text == "Honda")
            {
                comboBox3.Items.Add("Civic");
                comboBox3.Items.Add("Jazz");
                comboBox3.Items.Add("Accord");
            }
            else if (comboBox2.Text == "Mercedes Benz")
            {
                comboBox3.Items.Add("C180");
                comboBox3.Items.Add("B160");
                comboBox3.Items.Add("E250");
            }
            else if (comboBox2.Text == "BMW")
            {
                comboBox3.Items.Add("116i");
                comboBox3.Items.Add("330i");
                comboBox3.Items.Add("525i");
            }
        }

        private void textBox1_TextChanged(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
        }

        private void button2_Click(object sender, EventArgs e)
        {
            Form2 newForm = new Form2();
            newForm.Show();
        }

        private Form2 newForm;
    }
}
3. Form2.cs
Accomplishments: Just the design so far
What I need help with:
1. The "Submit" button should save the data entered in the textboxes. How can I take the data from the text boxes and save it into the array? Don't worry about the index number, because the array is limited to 10 blocks only.

2. How can I view this info in a multiline textbox in Form 2 when the user supplies the car index? It's no problem either if the number is more than 10...
N.B. For the textbox numbers, please just refer to what will be written in them, such as Tank Size, etc.
They are found in the car class.
If the Form Design will help, please tell me to supply it.
I hope I am organized enough to show what I need help with...
Thanks in advance :) | https://www.daniweb.com/programming/software-development/threads/396613/help-with-arrays-and-forms-still-learning | CC-MAIN-2017-17 | refinedweb | 537 | 59.4 |
Building the Flex UI
To get started, you need to set up a development environment for writing your Flex and ColdFusion source code. For building Flex applications, you can use Adobe Flex Builder 3, which can be installed as a standalone application or used to add Flex development tools to an existing Eclipse environment. Flex Builder 3 provides code editing with full code-assist and documentation for both ActionScript 3.0 and MXML, the two languages used to create classes in Flex.
ActionScript 3.0 is an ECMA language designed for developers accustomed to Java, C#, JavaScript, or C. It uses semicolons at the end of lines and curly braces for blocks. In a Flex RIA, ActionScript 3.0 is used primarily to model application entities and behavior. MXML, on the other hand, provides a simple tag-based language for declaratively creating classes. It is a flavor of XML, and is primarily used to define user interfaces. To handle events and behavior, it can contain embedded ActionScript 3.0 code.
In addition to its code editor, Flex Builder 3 provides a full-featured visual editor for MXML interfaces. You can drag-and-drop controls from tool palettes into desired locations within the interface. Once you place the controls, you can use toolbars to modify control properties, apply Cascading Style Sheet (CSS) attributes to change their look-and-feel, and even bind controls to data or ActionScript 3.0 classes. Because the visual editor is extensible, you can add and manipulate your own developer-defined controls as if they were native Adobe controls.
To examine your application at runtime, Flex Builder 3 provides an interactive debugger built on top of the existing Eclipse debugger framework. This provides a view of breakpoints, variables, and expressions that's almost identical to ColdFusion or Java debuggers. If you need to examine memory or processor utilization, an interactive profiler is also available.
To begin building Task List, install ColdFusion 8, Flex Builder (which includes the Flex SDK), and the CFEclipse Eclipse plug-in for ColdFusion (). Use the Flex Builder 3 New Project wizard to choose a ColdFusion server deployed within a JBoss environment.
When the wizard completes (and if you were creating a Flex project using Java as a back end), you create and edit a number of Flex-specific XML files in the server-side Java application server's configuration directories. The instant the wizard completes, ColdFusion creates a Flex application that is already connected to a ColdFusion server. The Flex application's source code and debugging builds are stored in your Eclipse workspace.
With a client-side Flex project created, you can now create a client-side model of your single domain object. Right-click the Task List project in the Flex Builder 3 Flex Navigator pane and choose New ActionScript Class to get started. Using the wizard, create a Task class and place it within the com.firemoss.tasks package. When the wizard completes, the actual source-code file is in the project's src/com/firemoss/tasks directory. (When you download the code accompanying this article, you'll find the source code for this class in Tasks/src/com/firemoss/tasks.)
package com.firemoss.tasks.model { import mx.utils.UIDUtil; // Alias'd to server-side class / component [RemoteClass(alias="com.firemoss.tasks.model.Task")] // Bindable makes this class a subject for data binding [Bindable] /** Model of a Task. */ public class Task { /** Unique Id. */ public var id:String = UIDUtil.createUID(); /** Name of the task. */ public var task:String; /** Is the task complete? */ public var complete:Boolean; } }
Listing One provides ActionScript 3.0 Javadoc-like comment capabilities for automatically generating documentation. Annotations within square braces let metadata be added to the class. Using the ASDoc utility within the Flex SDK, you can produce API documentation for your ActionScript 3.0 classes. In the Task class, annotations are used to add two key metadata items:
- A Bindable annotation states that an instance of the class may act as a subject for data binding, meaning that an instance of Task dispatches events when its properties change.
- RemoteClass, which states that there is a server-side equivalent to this class, also named com.firemoss.tasks.model.Task.
With the model of a Task created, turn to the UI. When the Flex project wizard completes, it creates the Tasks.mxml file to act as the main interface. Because the application is simple, it employs the Autonomous View pattern for its sole UI, embedding within Task.mxml all knowledge of the model, services, and control code. (In larger Flex applications, you should use more scalable architectures, such as Model View Controller.)
Inside Tasks.mxml, MXML tags define properties and server connections. In Listing Two, Task instances are stored using classes from the Flex SDK's Collections API. An ArrayCollection stores a list of all tasks, and a ListCollectionView subscribes to the master collection and defines an automatically updated subset of tasks that are complete. Both of these collections are "bindable," which means they can serve as a subject for observation by other classes and UI controls. When the subject object changes its properties, observers are notified and may update themselves accordingly. This provides a loosely coupled architecture ideal for creating flexible, engaging UIs.
<!-- MODEL --> <!-- Master list of all tasks --> <mx:ArrayCollection <!-- Filtered list of completed tasks --> <mx:ListCollectionView
Once data is stored in collections, Task List needs to display a list of tasks. It does this in Listing Three with a List component that shows a vertically scrolling list of items. The List instance is bound to the currently selected collection of tasks. As the collection changes, List automatically redraws itself.
<!-- List of tasks --> <mx:List <mx:itemRenderer> <mx:Component> <view:TaskItemRenderer /> </mx:Component> </mx:itemRenderer> </mx:List>
By default, List shows a label for each of its items. However, Task List needs to show both the label and a CheckBox to enable user interaction. Any of Flex's list-based controls can employ a custom item renderer, which is typically an MXML-based class that defines a UI for each item. In Task List, I use a custom renderer that displays a CheckBox and the task's name.
Below the list of tasks, the application must display a form to let users enter new tasks; see Figure 2. MXML includes a rich set of form controls. By combining tags like Form, FormItem, TextInput, and StringValidator, you create a form with complete client-side validation solely through declarative tags.
Last on Task List's set of defined MXML elements are server connections. Flex provides a RemoteObject tag for RPC-style communication with server-side ColdFusion and Java components. In Task List, a RemoteObject tag using the preconfigured "ColdFusion" destination lets the application communicate with a server-side ColdFusion component.
For real-time messaging, Flex provides Producer and Consumer tags. A Consumer lets an application subscribe to real-time messages published by the server. A Producer publishes messages to the server for rebroadcast to other users of the application. In Listing Four, a Consumer tag using the preconfigured "ColdFusionGateway" connection lets ColdFusion send real-time messages to the Task List application running within the user's browser.
<!-- BUSINESS SERVICES --> <!-- Subscribes to realtime messages from the application server --> <mx:Consumer <!-- Allows RPC-style invocations of server-side service methods --> <mx:RemoteObject <mx:method <mx:method </mx:RemoteObject>
To wire all of these MXML components together, Task List uses ActionScript 3.0 functions within Tasks.mxml that act as event handlers. These event handlers define what actions are taken when users indicate task completion or click the "OK" button to add a new task. They also handle real-time task updates received from the server; see Listing Five. With event handlers in place, the client-side Flex application is ready to be used.
/** Invoked when the server completes an operation of taskService */ private function listTasksResultHandler(event:ResultEvent):void { // Server returns an Array of Task instances: update our model. this.tasks.source = event.result as Array; } | http://www.drdobbs.com/architecture-and-design/building-rias-on-j2ee-foundations/209900484?pgno=2 | CC-MAIN-2015-48 | refinedweb | 1,334 | 56.05 |
I want to add a gun to the player when he collides with the gun (the player can have only one gun at a time). I've tried to add a WeaponHolder, which will contain the weapons that the user has (every time I pick up a weapon, I will destroy the first one). But for now my gun is added at the collision point, not at the weapon holder's position. How do I properly implement this kind of thing? Maybe there is another way to do this?
PickUp on collision
public class PickUpGun : MonoBehaviour {

    void Start() {
    }

    void Update() {
    }

    private void OnTriggerEnter(Collider other) {
        if (other.gameObject.tag == "Player") {
            Transform childInParent = other.gameObject.transform.GetChild(0);
            WeaponSwtiching weaponSystem = childInParent.gameObject.GetComponent<WeaponSwtiching>();
            transform.parent = childInParent.transform;
        }
    }
}
Answer by Cooperall · Sep 09, 2019 at 03:42 AM
Hi! Try setting the position of the gun to the position of the gun that is already stored in WeaponHolder.
If that doesn't work, try creating an empty GameObject as a child of the player. Set up that GameObject so it is in the spot where you want the gun to be held. Finally, set the position of every gun the player picks up to that GameObject.
UI Frontiers - Silverlight, Windows Phone 7 and the Multi-Touch Thumb
By Charles Petzold | December 2010
For many Silverlight programmers, the most exciting news about Windows Phone 7 is its support for Silverlight as one of its two programming interfaces. (The other one is XNA.) Not only can Silverlight programmers leverage their existing knowledge and skills in writing new applications for the phone, but they should be able to build Silverlight programs for the Web and the phone that share code.
Of course, sharing code—particularly UI code—is rarely as easy as it first seems. The version of Silverlight used in the phone is called Silverlight for Windows Phone, and it’s mostly a stripped-down implementation of Silverlight 3. When contemplating a shared-code application, you’ll want to take a close look at the documentation: For each Silverlight class, the online documentation indicates which environments support that class. Within each class, lists of properties, methods and events use icons to indicate Windows Phone 7 support.
A Silverlight application for the Web gets user input through the keyboard, mouse and perhaps multi-touch. In a Windows Phone 7 program, multi-touch is the primary means of input. There’s no mouse, and while there might be a hardware keyboard on the phone, Silverlight programs can rely only on the existence of a virtual keyboard—the Software Input Panel, or SIP—and only through the TextBox control.
If your existing Silverlight programs never directly obtain keyboard or mouse input and rely entirely on controls, you won’t have to worry about the conversion to multi-touch. Also, if your programs contain their own mouse logic, you can actually retain that logic when porting the program to the phone.
On the phone, primary touch events are converted to mouse events, so your existing mouse logic should work fine. (A primary touch event is the entire activity of a finger that first touches the screen when no other fingers are in contact with the screen.)
Moving from the mouse to multi-touch will require some thought: Both Silverlight for the Web and Silverlight for Windows Phone support the static Touch.FrameReported event, but this event is a rather low-level interface to multi-touch. I focused on this event in my article “Finger Style: Exploring Multi-Touch Support in Silverlight” in the March 2010 issue (msdn.microsoft.com/magazine/ee336026).
Silverlight for Windows Phone supports a subset of the Manipulation events that originated in the Surface SDK and have since become part of Windows Presentation Foundation (WPF). It’s an example of how multi-touch is becoming more mainstream in steps. The phone supports only the translation and scaling functions, not rotation, and does not implement inertia, although sufficient information is available to implement inertia on your own. These Manipulation events are not yet supported in the Web version of Silverlight.
In summary, if you want to share code between Silverlight for the Web and Silverlight for Windows Phone, you’ll be sticking either with mouse events or with Touch.FrameReported.
Consider the Thumb
However, there’s another option: If you need only the translation support of the Manipulation events, and you don’t want to worry about whether the input is coming from the mouse or touch, there is a control that provides this support in a very pure form. This control is the Thumb.
It’s possible that you’ve never actually encountered the Thumb. The Thumb control is hidden away in the System.Windows.Controls.Primitives namespace and is primarily intended for ScrollBar and Slider templates. But you can also use it for other chores, and I’ve recently come to think of the Thumb as a high-level implementation of the translation feature of the Manipulation events.
Now, the Thumb isn’t a truly “multi”-touch control—it supports only one touch at a time. However, exploring the Thumb in some detail will give you an opportunity to experiment with supporting touch computing along with sharing code between a Silverlight application and a Windows Phone 7 application.
The Thumb defines three events:
- DragStarted is fired when the user first touches the control with a finger or mouse.
- DragDelta indicates movement of the mouse or finger relative to the screen.
- DragCompleted indicates the mouse or finger has lifted.
The DragDelta event is accompanied by event arguments with the properties HorizontalChange and VerticalChange that indicate the mouse or finger movement since the last event. You’ll generally handle this event by adding the incremental changes to the X and Y properties of a TranslateTransform set to a RenderTransform property of some draggable element, or the Canvas.Left and Canvas.Top attached properties.
In its default state, the Thumb is rather plain. As with other controls, the HorizontalAlignment and VerticalAlignment properties are set to Stretch so the Thumb normally fills the area allowed for it. Otherwise, the Silverlight Thumb is just four pixels square. In Silverlight for Windows Phone, the Thumb is 48 pixels square, but visually it’s really just 24 pixels square with a 12-pixel wide transparent border on all four sides.
At the very least, you’ll probably want to set an explicit Height and Width on the Thumb. Figure 1 shows the Silverlight and Windows Phone 7 versions side by side, with the default light-on-dark color theme of the phone. For both I’ve set the Height and Width to 72 and Background to Blue, which in the Silverlight version becomes a gradient that changes when the Thumb is pressed. Neither Thumb pays attention to the Foreground property.
Figure 1 The Silverlight and Windows Phone Thumb Controls
Very often you’ll want not only to resize the Thumb, but also to apply a ControlTemplate that redefines the control’s visuals. This ControlTemplate can be extremely simple.
Sharing Controls
Suppose you want a simple control that lets the user drag bitmaps around the screen. A very easy approach is to put both an Image element and a Thumb in a single-cell Grid, with the Thumb the same size as the Image and overlaying it. If the ControlTemplate for the Thumb is just a transparent Rectangle, the Thumb is invisible but it still fires drag events.
Let’s try to create such a control usable in both regular Silverlight and Windows Phone 7 projects. I’ll assume you have the Windows Phone 7 Developer Tools installed. These tools allow you to create Windows Phone 7 projects from Visual Studio.
Begin by creating a regular Silverlight 4 project called DragImage. The resulting DragImage solution contains the customary DragImage project (which is the Silverlight program itself) and a DragImage.Web project (which hosts the Silverlight program in an HTML or ASP.NET page).
Next, add a new project of type Windows Phone Application to the solution. Call this project DragImage.Phone. (It’s likely you won’t want that name showing up in the program list of the phone or the phone emulator, so you can change the display name in the Title attribute of the App tag in the WMAppManifest.xml file.)
By right-clicking either the DragImage.Web project or the DragImage.Phone project, you’ll get a context menu from which you can select Set as StartUp Project and run either the regular Silverlight program or the Windows Phone 7 program. A toolbar drop-down in Visual Studio lets you deploy the phone program to either an actual phone device or the phone emulator. (Visual Studio won’t build the projects if this drop-down is set for Windows Phone 7 Device and no phone is attached.)
In the DragImage project (the regular Silverlight project), add a new item of type Silverlight User Control. Call it DraggableImage. As usual, Visual Studio creates DraggableImage.xaml and DraggableImage.xaml.cs files for this control.
Figure 2shows DraggableImage.xaml with the visual tree of the control. The standard outer Grid named LayoutRoot will occupy the full dimensions of the control’s container; the inner Grid is aligned at the upper-left corner, but there’s a TranslateTransform assigned to its RenderTransform property to move it within the outer Grid. This inner Grid holds an Image element with a Thumb control on top with its Template property set to a visual tree containing only a transparent Rectangle.
<UserControl x: <Grid x: <Grid HorizontalAlignment="Left" VerticalAlignment="Top"> <Image Name="image" Stretch="None" Source="{Binding ElementName=ctrl, Path=Source}" /> <Thumb DragDelta="OnThumbDragDelta"> <Thumb.Template> <ControlTemplate> <Rectangle Fill="Transparent" /> </ControlTemplate> </Thumb.Template> </Thumb> <Grid.RenderTransform> <TranslateTransform x: </Grid.RenderTransform> </Grid> </Grid> </UserControl>
Notice that the Source property of the Image element is bound to the Source property of the control itself. That property is defined in the DraggableImage.xaml.cs file shown in Figure 3. That file also processes the DragDelta event from the Thumb by changing the X and Y properties of the TranslateTransform.
using; } } }
To share that control with the Windows Phone 7 project, right-click the DragImage.Phone project and select Add | Existing Item to bring up the Add Existing Item dialog box. Navigate to the DragImage project directory. Select DraggableImage.xaml and DraggableImage.xaml.cs, but don’t click the Add button. Instead, click the little arrow to the right of the Add button and select Add as Link. The files show up in the DragImage.Phone project with little arrows on the icons indicating that the files are shared between the two projects.
Now you can make changes to the DraggableImage files and both projects will use the revised versions.
To test it out, you’ll need a bitmap. Store the bitmap in an Images directory within each of the projects. (You don’t need to make copies of the bitmap; you can add the bitmap to the Images directory using a link.)
There should be two MainPage.xaml files floating around. One is from the regular Silverlight project and the other is from the Windows Phone 7 project. In MainPage.xaml for the Silverlight project, add an XML namespace binding called (traditionally) “local”:
Now you can add DraggableImage to the page:
The MainPage class for the Windows Phone 7 project is in a namespace called DragImage.Phone, but the shared DraggableImage class is in the namespace DragImage. You’ll need an XML namespace binding for the DragImage namespace, which you can call “shared”:
Now you can add DraggableImage to the content area of the page:
That’s probably the simplest way you can share a control between two Silverlight projects, one for the Web and one for Windows Phone 7. Because the control uses the Thumb, both programs work with the mouse or touch.
The downloadable code for the DragImage solution also includes a project named DragImage.Wpf, which is a WPF program that also uses this control. In the general case, however, sharing controls between Silverlight and WPF is harder than sharing controls between Silverlight and Windows Phone 7.
Color and Resolution
Aside from mouse and touch input, when attempting to share code between Silverlight and Windows Phone 7, you’ll need to deal with two other issues: color and video resolution.
On the desktop, Silverlight displays black text on a white background. (However, a Silverlight program could use the SystemColors class in order to display the Windows colors selected by the user.) By default, Windows Phone 7 displays white text on a black background except if the user changes the color theme to display black on white. Windows Phone 7 provides handy, predefined resource keys, such as PhoneForegroundBrush and PhoneBackgroundBrush, to help a program use the selected color scheme.
Any code or markup shared between Silverlight and Windows Phone 7 that uses explicit colors will have to figure out some way to determine the platform on which it’s running to get the correct colors.
The video resolution problem is a little trickier. All Silverlight coordinates are in units of pixels, and that rule applies to the phone as well. The average desktop video display probably has a resolution somewhere in the vicinity of 100 dots per inch (DPI). (For example, suppose a 21-inch video display handles 1600 × 1200 pixels, or 2000 pixels diagonally. That’s a resolution of 105 DPI.) By default, Windows assumes that the display resolution is 96 DPI, although the user can change that to make the screen easier to read.
A Windows Phone 7 device has a screen that’s 480 × 800 pixels with a diagonal of 933 pixels. Yet the screen measures only 3.5 inches diagonally, which means the resolution is about 264 DPI, some 2.75 times the resolution of the desktop display.
This means that shared elements of a particular size that look fine on the desktop are going to be too small on the phone. However, the viewing distance of the phone is usually shorter than for desktop displays, so the elements don’t have to be increased by a full 2.75 times to be visible on the phone.
How big should the Thumb be for touch purposes? One criterion I’ve read indicates that touch targets should be 9 millimeters (or 0.25 inches) wide and high. On a desktop display with a resolution of 96 pixels to the inch, that’s 34 pixels—but on the phone it’s 93 pixels.
On the other hand, the standard button on a Windows Phone 7 device is only 72 pixels tall, and that seems adequate. Perhaps the best approach is to experiment until you find something that’s easy to use but isn’t too clunky.
Making Adjustments
Traditionally, programs adjusted themselves for different platforms using preprocessor directives for conditional compilation. A Silverlight program defines the conditional compilation symbol SILVERLIGHT, and a Windows Phone 7 program defines both SILVERLIGHT and PHONE. (You can see these by selecting the Build tab on the project Properties page.) That means you can have code that looks something like this:
Or, you can differentiate at run time by accessing the Environment.OSVersion object. If the Platform property is PlatformID.WinCE and the Version.Major property is 7 or greater, your code is running on a Windows Phone 7 device (or perhaps Windows Phone 8 or 9).
In theory, it’s possible to define conditional sections of XAML files using the AlternateContent and Choice tags defined in the markup-compatibility (mc) namespace, but these tags don’t seem to be supported in Silverlight.
But XAML can contain data bindings, and these bindings can reference different objects depending on the platform. XAML can also have StaticResource references that retrieve different objects for different platforms. It is this approach I used in the TextTransform program.
I created the TextTransform solution the same way I created the DragImage solution. The solution has three projects: TextTransform (Silverlight program), TextTransform.Web (Web project to host the Silverlight program) and TextTransform.Phone (Windows Phone 7).
In the Silverlight project, I then created a TextTransformer control that derives from UserControl. This control is shared between the Silverlight project and the Windows Phone 7 project. The TextTransformer control contains a hardcoded text string (the word “TEXT”) surrounded by a Border with four Thumb controls at the corners. Moving a Thumb causes a non-affine transform to be applied to the Border and TextBlock. (It only works correctly if the quadrilateral formed by the Border has no concave corners.)
The TextTransformer.xaml file doesn’t create a new template for the Thumb, but it does define a Style as shown in Figure 4.
<Style>
Notice the references to ThumbSize and HalfThumbOffset. Although the TextBlock displaying the text gets the correct Foreground property through property inheritance, the Border must be explicitly colored with the same foreground color:
Where are these resources defined? They’re defined in App.xaml. The regular Silverlight project includes a Resources collection in its App.xaml file that contains the following:
The App.xaml file for the Windows Phone 7 program references the predefined resources for the two brushes and defines larger ThumbSize and HalfThumbOffset values:
<Application.Resources> <SolidColorBrush x: <SolidColorBrush x: <system:Double x:96</system:Double> <system:Double x:-48</system:Double> </Application.Resources>
Figure 5 shows the program running in the browser and Figure 6 shows the program running on the Windows Phone 7 emulator. The emulator is displayed at 50 percent of full size to compensate for the higher pixel density on the phone.
Figure 5 The TextTransform Program in the Browser
Figure 6 The TextTransform Program on the Phone Emulator
These techniques suggest that sharing code between the desktop and phone has become a reality. If you want to delve a bit deeper into this subject, the Surface Toolkit for Windows Touch includes a SurfaceThumb control for WPF developers. This is just like the normal Thumb control, but it adds support for true multi-touch and events for when the thumb is flicked. For more information, see the Surface Toolkit for Windows Touch beta page at msdn.microsoft.com/library/ee957351.
Charles Petzold is a longtime contributing editor to MSDN Magazine. His new book, “Programming Windows Phone 7,” is available as a free download at bit.ly/cpebookpdf.
Thanks to the following technical experts for reviewing this article: Doug Kramer and Robert Levy
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/magazine/e082e094-cdf2-4e18-85e2-b50033a0907f | CC-MAIN-2018-13 | refinedweb | 2,899 | 54.63 |
Things used in this project
Story
Introduction
I'm an engineer and artist who enjoys projects that combine science and arts. I've been making dress designs with microcontrollers embedded within network capability.
Instructions
*Note: after this project was published, I renovated the electronic circuit portion. An improved method to attach the LEDs to the cloth and how to make them durable is published here. You can refer to that page for the crafty side of the project and come back to this page for the code and schematics.
Step 1
I used a NeoPixel strip from Adafruit, cut it up into pieces and arranged them into the shapes of constellations. Feel free to use other types of LEDs such as the individual RGB LEDs. Glue or sew them onto the base fabric.
Step 2
Put an interfacing fabric on the top and you can outline the constellations. This step is optional but I found it helpful to have multiple layers of fabrics to strengthen the structure. I actually sewed another thick fabric onto the back of the base fabric. So three layers in total as the base, sandwiching the LEDs.
Step 3
Solder the LEDs. If you use the sew-able individual LEDs, you can also use conductive threads to connect them. Either way, it's a lot of manual labor and requires patience. As I have four constellations (Orion, Big Dipper, Cygnus and Cassiopeia), I separated them into four traces. Each will be connected to a different Arduino 101 pin.
Test!
Test your circuit before going further! Can just do a NeoPixel Strandtest for each trace.
Okay, I put the level as "Easy" since scientifically it's not hard once you understand the code, but it does take a lot of work stabilizing the cables on the fabric.
Make sure that your Arduino IDE is the latest version and has Curie PME library. I'd recommend using the Arduino Web Editor. Download the library here.
Make the Dress
In parallel (figuratively), make the dress. After you test the circuit, sew the base fabrics with the LEDs onto the inside of the dress. LEDs will shine through the graphics.
As you can see, the Arduino 101 is in my hand. There are long wires connecting the LEDs and the board, which are hidden in the sleeve.
The code below will give you information on how the board is programmed. After you flash the code, train the neurons first so they learn which patterns are there. Watch this video at ~0:30:
For more photos and other tech-fashion/paintings-on-fabric designs, check out my website :)
Schematics
I plugged the 9 V battery directly into the barrel jack.
Code
PME_LEDArduino
/* * This example demonstrates using the pattern matching engine (CuriePME) * to classify streams of accelerometer data from CurieIMU.The code is a modification of the Draw in the Air example: * * * First, the sketch will prompt you to draw patterns in the air (just * imagine you are writing on an invisible whiteboard, using your board as the * pen), and the IMU data from these motions is used as training data for the * PME. Once training is finished, you can keep drawing letters and the PME * will try to guess which letter you are drawing. * * This example requires a button to be connected to digital pin 4 * * * NOTE: For best results, draw big letters, at least 1-2 feet tall. * * Copyright (c) 2016 Intel Corporation. All rights reserved. * See license notice at end of file. */ #include "CurieIMU.h" #include "CuriePME; /* This controls how many times a letter must be drawn during training. * Any higher than 4, and you may not have enough neurons for all 26 letters * of the alphabet. Lower than 4 means less work for you to train a letter, * but the PME may have a harder time classifying that letter. */ const unsigned int trainingReps = 4; /* Increase this to 'A-Z' if you like-- it just takes a lot longer to train */ const unsigned char trainingStart = 'A'; const unsigned char trainingEnd = 'D'; /* The input pin used to signal when a letter is being drawn- you'll * need to make sure a button is attached to this pin */ const unsigned int buttonPin = 4; /* Sample rate for accelerometer */ const unsigned int sampleRateHZ = 200; /* No. of bytes that one neuron can hold */ const unsigned int vectorNumBytes = 128; /* Number of processed samples (1 sample == accel x, y, z) * that can fit inside a neuron */ const unsigned int samplesPerVector = (vectorNumBytes / 3); /* This value is used to convert ASCII characters A-Z * into decimal values 1-26, and back again. 
*/ const unsigned int upperStart = 0x40; const unsigned int sensorBufSize = 2048; const int IMULow = -32768; const int IMUHigh = 32767; void setup() { Serial.begin(9600); // while(!Serial); pinMode(buttonPin, INPUT); /* Start the IMU (Intertial Measurement Unit) */ CurieIMU.begin(); /* Start the PME (Pattern Matching Engine) */ CuriePME.begin(); CurieIMU.setAccelerometerRate(sampleRateHZ); CurieIMU.setAccelerometerRange(2); trainLetters(); //Serial.println("Training complete. Now, draw some letters (remember to "); // Serial.println("hold the button) and see if the PME can classify them."); strip.begin(); // intialize neopixel strip strip.show(); // Initialize all pixels to 'off' } void loop () { /// these functions are written out at the bottom of the sketch. Serial.println("Training complete. Now, draw some letters (remember to "); Serial.println("hold the button) and see if the PME can classify them."); byte vector[vectorNumBytes]; unsigned int category; char letter; char pattern; /* Record IMU data while button is being held, and * convert it to a suitable vector */ readVectorFromIMU(vector); /* Use the PME to classify the vector, i.e. 
return a category * from 1-26, representing a letter from A-Z */ category = CuriePME.classify(vector, vectorNumBytes); if (category == CuriePME.noMatch) { Serial.println("Don't recognise that one-- try again."); //theaterChase(); theaterChase(strip.Color(127, 127, 127), 50); // White strip.show(); // delay(10); } else { letter = category + upperStart; pattern = letter; if ( pattern == 'A' ) { //red colorWipe(strip.Color(0, 255, 0), 50); // Green theaterChase(strip.Color(127, 127, 127), 50); // White strip.show(); } else if ( pattern == 'B') { colorWipe(strip.Color(255, 0, 0), 50); // Red theaterChase(strip.Color(127, 127, 127), 50); // White strip.show(); } else if ( pattern == 'C') { colorWipe(strip.Color(0, 0, 255), 50); // Blue theaterChase(strip.Color(127, 127, 127), 50); // White strip.show(); } else if ( pattern == 'D') { colorWipe(strip.Color(255, 0, 255), 50); // Blue theaterChase(strip.Color(127, 127, 127), 50); // White strip.show(); } Serial.println(letter); } } /* Simple "moving average" filter, removes low noise and other small * anomalies, with the effect of smoothing out the data stream. */ byte getAverageSample(byte samples[], unsigned int num, unsigned int pos, unsigned int step) { unsigned int ret; unsigned int size = step * 2; if (pos < (step * 3) || pos > (num * 3) - (step * 3)) { ret = samples[pos]; } else { ret = 0; pos -= (step * 3); for (unsigned int i = 0; i < size; ++i) { ret += samples[pos - (3 * i)]; } ret /= size; } return (byte)ret; } /* We need to compress the stream of raw accelerometer data into 128 bytes, so * it will fit into a neuron, while preserving as much of the original pattern * as possible. Assuming there will typically be 1-2 seconds worth of * accelerometer data at 200Hz, we will need to throw away over 90% of it to * meet that goal! * * This is done in 2 ways: * * 1. Each sample consists of 3 signed 16-bit values (one each for X, Y and Z). 
* Map each 16 bit value to a range of 0-255 and pack it into a byte, * cutting sample size in half. * * 2. Undersample. If we are sampling at 200Hz and the button is held for 1.2 * seconds, then we'll have ~240 samples. Since we know now that each * sample, once compressed, will occupy 3 of our neuron's 128 bytes * (see #1), then we know we can only fit 42 of those 240 samples into a * single neuron (128 / 3 = 42.666). So if we take (for example) every 5th * sample until we have 42, then we should cover most of the sample window * and have some semblance of the original pattern. */ void undersample(byte samples[], int numSamples, byte vector[]) { unsigned int vi = 0; unsigned int si = 0; unsigned int step = numSamples / samplesPerVector; unsigned int remainder = numSamples - (step * samplesPerVector); /* Centre sample window */ samples += (remainder / 2) * 3; for (unsigned int i = 0; i < samplesPerVector; ++i) { for (unsigned int j = 0; j < 3; ++j) { vector[vi + j] = getAverageSample(samples, numSamples, si + j, step); } si += (step * 3); vi += 3; } } void readVectorFromIMU(byte vector[]) { byte accel[sensorBufSize]; int raw[3]; unsigned int samples = 0; unsigned int i = 0; /* Wait until button is pressed */ while (digitalRead(buttonPin) == LOW); /* While button is being held... 
*/ while (digitalRead(buttonPin) == HIGH) { if (CurieIMU.dataReady()) { CurieIMU.readAccelerometer(raw[0], raw[1], raw[2]); /* Map raw values to 0-255 */ accel[i] = (byte) map(raw[0], IMULow, IMUHigh, 0, 255); accel[i + 1] = (byte) map(raw[1], IMULow, IMUHigh, 0, 255); accel[i + 2] = (byte) map(raw[2], IMULow, IMUHigh, 0, 255); i += 3; ++samples; /* If there's not enough room left in the buffers * for the next read, then we're done */ if (i + 3 > sensorBufSize) { break; } } } undersample(accel, samples, vector); } void trainLetter(char letter, unsigned int repeat) { unsigned int i = 0; while (i < repeat) { byte vector[vectorNumBytes]; if (i) Serial.println("And again..."); readVectorFromIMU(vector); CuriePME.learn(vector, vectorNumBytes, letter - upperStart); Serial.println("Got it!"); delay(1000); ++i; } } void trainLetters() { for (char i = trainingStart; i <= trainingEnd; ++i) { Serial.print("Hold down the button and draw the letter '"); Serial.print(String(i) + "' in the air. Release the button as soon "); Serial.println("as you are done."); trainLetter(i, trainingReps); Serial.println("OK, finished with this letter."); delay(2000); } } ///////////////Special light functions from Adafruit Strandtest Example Code // Rainbow! Note- this function blocks new position inputs until it's finished.. Used for rainbow effect); } //////Theater Chase lights from Adafruit strandtest example code. This takes whatever the curent RGB value is, and does a "theatre chase" effect with it. 
//Theatre-style crawling lights.<strip.numPixels(); i++) { strip.setPixelColor(i, c); strip.show(); delay(wait); } } //Theatre-style crawling lights with rainbow effect void theaterChaseRainbow(uint8_t wait) { for (int j=0; j < 256; j++) { // cycle all 256 colors in the wheel for (int q=0; q < 3; q++) { for (uint16_t i=0; i < strip.numPixels(); i=i+3) { strip.setPixelColor(i+q, Wheel( (i+j) % 255)); //turn every third pixel on } strip.show(); delay(wait); for (uint16_t i=0; i < strip.numPixels(); i=i+3) { strip.setPixelColor(i+q, 0); //turn every third pixel */
Credits
Kitty Yeung
Physicist/Artist/Musician/Fashion Designer/Engineer working at Intel
Replications
Did you replicate this project? Share it!I made one
Love this project? Think it could be improved? Tell us what you think! | https://www.hackster.io/kitty-yeung/arduino-101-intel-curie-pattern-matching-dress-9cc783 | CC-MAIN-2017-26 | refinedweb | 1,778 | 64.71 |
ran into a problem with one of my XSD's that heavily uses namespaces with
elements. The problem occurred because Misc::strip_element_name() removes
the namespace. This is called at the beginning of matchFieldsCallback().
So as an example, I have a field dct:accessRights. In the
xsd_display_matchfields table, the xsdmf_element gets stored as
!dct:accessRights when the field is defined. In matchFieldsCallback(), the
dct: is stripped causing a search for !accessRights which doesn't exist.
In my local code, I fixed this by adding code to strip_element_name to not
strip off my namespaces. This doesn't seem to have had a negative effect on
other areas of Fez. But I don't like this hardcoded fix.
What is the purpose of stripping off the namespace before creating the full
element name? If this really is needed for other parts of Fez to operate
correctly, then it seems like a better fix would be to generate the
xsdmf_element name without namespaces when the field is defined. Am I
missing something here?
Lynette
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/fez/mailman/message/23938982/ | CC-MAIN-2017-04 | refinedweb | 213 | 59.9 |
I have, as many of you readers have, seen and/or used other similar libraries. But I have this thing in my head, this crazy notion, that using other people's code is, well, cheating. I can't do it without feeling guilty unless I know that I can implement their ideas on my own. I like walking the hardest path I can for the sake of learning as much as I can. So, when I needed a signals and slots mechanism, I had to write my own library for my own mental health. This is what I came up with.
Good question. I think it is appropriate to start with an explanation of what they exactly are. Signals and slots are a mechanism for which "events" are handled, passed around, processed, and eventually invoked. Slots connect to signals, and when a signal is fired, it sends data to the referenced slots, allowing that data to be handled arbitrarily. It is important to point out that this referencing of slots to signals is done at run time, allowing for a great deal of flexibility.
The short answer: very. In my humble opinion, for a library to be lightweight, it not only needs to provide the smallest subset of useful functionality, but should also feel lightweight. It needs to reflect the "lightweight-ness" in its syntax. To use this library to its full potential requires the use of only two classes and two functions.
Feature-wise, it provides simple and lightweight mechanisms (with one exception, covered below) for attaching, detaching, and invoking slots handling up to 10 parameters.
This library is simple. It does not do the following:
First, let's define some functionality to exercise the signals and slots system: four free functions and a member function.
#include <iostream>
using namespace std;

void print_add(int a, int b)
{
cout << a << " + " << b << " = " << a + b << endl;
}
void print_sub(int a, int b)
{
cout << a << " - " << b << " = " << a - b << endl;
}
void print_mul(int a, int b)
{
cout << a << " x " << b << " = " << a * b << endl;
}
void print_div(int a, int b)
{
cout << a << " / " << b << " = " << a / (double)b << endl;
}
class test
{
public:
void inclass(int a, int b)
{
cout << "MEMBER: The circumference of a " << a
<< " by " << b << " box is " << 2*a + 2*b << endl;
}
};
Well, the syntax is simple. Borrowing ideas from C# delegates, connecting these functions and invoking them looks like this:
test t;
signal<void, int, int> math_signals;
math_signals += slot(print_add);
math_signals += slot(print_sub);
math_signals += slot(print_mul);
math_signals += slot(print_div);
math_signals += slot(&t, &test::inclass);
math_signals(8, 4);
The above code adds five slots to the signal and invokes them with the data 8 and 4. This means each corresponding function will be executed once, in the order in which it was added, with the parameters 8 and 4.
Deleting a signal is just as easy. Expanding from the above code, let's say we wanted to remove the third slot, the one pointing to print_mul.
math_signals -= slot(print_mul);
This snippet will do it.
An alternative to using the +=, -=, and () operators is to use the functions connect, disconnect, and emit, respectively.
The slot and safeslot functions are used to create slots, sparing you most explicit template argument declarations. Functions are capable of inferring their template arguments, which removes a lot of redundant template code, since otherwise the same signature would have to be spelled out for every class and function.
How do you collapse the return values of several slots into one result? This library simply returns the result from the last slot fired. That works fine if only one slot is attached per signal. There is also no mechanism to marshal the return value of one slot into the next. However, there is one trick.
References. Or more specifically, creating signals and slots that take references to data as a parameter. Consider the following code:
void a(int& in)
{
++in;
}
void b(int& in)
{
in += 6;
}
void c(int& in)
{
in *= 2;
}
//...
signal<void, int&> cool_test;
cool_test += slot(a);
cool_test += slot(b);
cool_test += slot(c);
int result = 5;
cool_test(result);
cout << result << endl;
As you probably would expect, the number "24" is printed to screen. Using a referenced parameter allows data to be returned and subsequently modified by the following functions.
Well, I'm sure you've probably noticed the one big, dangerous shortcoming in the code I've presented so far.
If a slot contains a member function pointer, and the instance of the class we want to invoke the function on is deleted or goes out of scope, well, things can get nasty quickly. If you are lucky, it will still work, but you can just as easily end up with a nasty segmentation fault bringing your ever so wonderful application (without warning, mind you) to its knees.
So, into the spotlight comes the class trackable. Yes, for those of you familiar with boost::function, it functions similarly (and yes, a blatant rip-off of the name).
Consider the following example:
using namespace std;
class test_trackable : public semaphore::trackable
{
public:
void inclass(int a, int b)
{
cout << "MEMBER: The circumference of a " << a << " by "
<< b << " box is " << 2*a + 2*b << endl;
}
};
int main()
{
signal<void, int, int> math_signals;
{
test_trackable track;
math_signals += safeslot(&track, &test_trackable::inclass);
math_signals(8, 4);
}
cout << "TRACKED MEMBER FUNCTION POINTER NOW OUT OF SCOPE!" << endl;
math_signals(8, 4);
system("pause");
return 0;
}
Let's now look at safeslot. This little function creates a slot which is capable of determining when the instance for the member function has been deleted, and thus avoids a potential disaster. The catch? The class which contains the member function now has to inherit from semaphore::trackable, which implements a virtual destructor. More on how this mechanism works later.
So, if we were to run this little snippet, test_trackable::inclass would only be called once - the first time. For compatibility's sake, safeslot will also create simple function pointer slots. Any member function that can be wrapped in a safe slot can also be wrapped in a regular slot (minus the safety).
Well, I apologize if up to this point this reads a bit like a commercial. But I have to fulfill my responsibility to explain how to use my library, and I wanted to make clear what it can and can't do. So, from here on, I will discuss the design, internal mechanisms, how things fit together, and the problems I encountered.
The signal class is just a simple wrapper around a std::list of slots. Things are kept typesafe with 22 different template specializations to support up to 10 parameters and the void return type.
The slot class is hidden in the internal namespace. Slots are to be created only through the slot and safeslot functions, for the reasons already stated. The internal::slot class holds a reference-counted pointer to an internal::invokable class, which is the workhorse of the slot. Reference counting makes copying the class cheap, which makes storage in the signal class's std::list efficient.
There are several worker classes buried in namespaces which are not used directly but do pretty much all the work. They are internal::invokable and its derivatives: internal::simple_function, internal::member_function, and internal::smart_member_function. These classes store function and member function pointers to the pieces of code to be wrapped in a slot. internal::simple_function wraps a simple function pointer, and internal::member_function wraps a member function pointer. internal::smart_member_function functions similarly to internal::member_function but adds the ability to determine when the data it points to has expired.
The trackable class, as seen above, prevents slots from executing member function pointers after the object's death. In order to tell internal::smart_member_function that it has been deleted, the trackable class has to externalize that data. So, it creates an instance of a reference-counted watcher class. Any internal::smart_member_function created stores its own reference to the watcher. Once the trackable object is destroyed, it changes the data of the watcher class, and hence internal::smart_member_function can discreetly avoid certain disasters. The last internal::smart_member_function with a reference to the watcher class will delete it.
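The watcher mechanism can be sketched in a few lines (hypothetical, simplified names with the template machinery stripped out — not the library's actual classes):

```cpp
#include <cassert>

// A shared flag that outlives the trackable object itself. The trackable
// holds one reference; every smart slot pointing at it holds another.
struct watcher
{
    bool alive;
    int refs;
    watcher() : alive(true), refs(1) {}
};

struct trackable_sketch
{
    watcher* w;
    trackable_sketch() : w(new watcher()) {}
    ~trackable_sketch()
    {
        w->alive = false;             // mark the target as expired
        if (--w->refs == 0) delete w; // last reference cleans up
    }
};

struct smart_slot_sketch
{
    watcher* w;
    explicit smart_slot_sketch(trackable_sketch& t) : w(t.w) { ++w->refs; }
    ~smart_slot_sketch() { if (--w->refs == 0) delete w; }
    // A real slot would check this before invoking the member function.
    bool target_alive() const { return w->alive; }
};
```

A slot built this way can simply skip the call when target_alive() reports false, which is exactly the behavior the scoped example above demonstrates.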
There was one notable design complication that I feel is worthy enough to deserve its own section: how to compare and equate slots. This functionality is important to be able to delete slots. Previous incarnations of this library failed to deliver this functionality, and left me unable to find a simple method to detach a slot.
Now, in order to see if two slots are the same, we would have to compare their instances of internal::invokable. That said, this base class could not be responsible for the comparison because it is an abstract base class; the important data is held in the classes that derive from it. Furthermore, dynamic_casting is complicated by the fact that internal::member_function and internal::smart_member_function take one more arbitrary template argument than their base class.
To solve this problem, I added two more virtual functions and an enumeration to internal::invokable: gettype(), compare(internal::invokable* rhs), and the following enum:
enum type
{
SimpleFunction,
MemberFunction,
SmartMemberFunction,
UserDefined
};
gettype() returns one of the values from the enum. compare(...) first checks if the types are the same (and not UserDefined) via gettype(), and if so, performs a dynamic_cast and a comparison.
Despite first impressions, this is guaranteed to work, and no incorrect casts can be made. In order for the two slots to be comparable, they have to share the same template arguments. Therefore, the internal::invokable which they hold will also share template arguments. Thus, invoking a compare will result in a dynamic_cast with the correct type and number of arguments. A slot will also allow comparison of incompatible types, obviously returning false in all cases.
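The tag-then-cast comparison described above can be sketched as follows (hypothetical, simplified names and no templates — not the library's actual classes):

```cpp
#include <cassert>

// Simplified sketch of comparing two type-erased callables through a
// virtual gettype()/compare() pair.
struct invokable_sketch
{
    enum type { SimpleFunction, MemberFunction, UserDefined };
    virtual ~invokable_sketch() {}
    virtual type gettype() const = 0;
    virtual bool compare(const invokable_sketch* rhs) const = 0;
};

struct simple_function_sketch : invokable_sketch
{
    void (*fn)();
    explicit simple_function_sketch(void (*f)()) : fn(f) {}
    type gettype() const { return SimpleFunction; }
    bool compare(const invokable_sketch* rhs) const
    {
        if (rhs->gettype() != SimpleFunction) return false; // cheap reject
        const simple_function_sketch* o =
            dynamic_cast<const simple_function_sketch*>(rhs);
        return o && o->fn == fn; // safe: the tag already matched
    }
};

void f() {}
void g() {}
```

Two slots wrapping the same function pointer compare equal, while a slot wrapping a different function (or a different kind of callable) compares unequal.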
There is only one problem with the library that I haven't tackled. As shown, in order to remove a slot, that slot needs to be passed to the disconnect function, like so:
signal<void, int, int> test_signals;
test t;
// ...
test_signals += safeslot(&t, &test::inclass); // Add slot
// ...
test_signals -= safeslot(&t, &test::inclass); // Remove slot
However, if at a later time a slot needs to be removed, and the class instance whose member function the slot calls is not known, it cannot be removed. I am currently undecided on how to tackle this one, but I'm thinking the most appropriate way would be to use a NULL to signify a "wildcard" for the class instance, like this:
signal<void, int, int> test_signals;
test t;
// ...
test_signals += safeslot(&t, &test::inclass); // Add slot
// ...
test_signals -= safeslot(NULL, &test::inclass); // Remove slot
Another feature missing that I would like to eventually see included is a control over the order in which the slots are fired.
While this library is simple and lightweight, it still has its flaws. In the end, it has taught me much, and was a fun challenge to complete. I hope someone will find this code useful, and I am looking forward to receiving feedback on my work.
Pipeline#
Datashader provides a flexible series of processing stages that map from raw data into viewable images. As shown in the Introduction, using datashader can be as simple as calling
datashade(), but understanding each of these stages will help you get the most out of the library.
The stages in a datashader pipeline are similar to those in a 3D graphics shading pipeline:
Here the computational steps are listed across the top of the diagram, while the data structures or objects are listed along the bottom. Breaking up the computations in this way is what makes Datashader able to handle arbitrarily large datasets, because only one stage (Aggregation) requires access to the entire dataset. The remaining stages use a fixed-sized data structure regardless of the input dataset, allowing you to use any visualization or embedding methods you prefer without running into performance limitations.
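The scalability claim can be illustrated with plain NumPy (a sketch of the idea only — not Datashader's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)

# Bin one million points into a fixed 300x300 grid of counts; everything
# downstream only ever touches this small, fixed-size aggregate, no matter
# how many points went into it.
counts, _, _ = np.histogram2d(x, y, bins=300)
print(counts.shape)
```

Doubling the number of input points changes the cost of this one step, but the aggregate handed to the later stages stays the same size.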
In this notebook, we’ll first put together a simple, artificial example to get some data, and then show how to configure and customize each of the data-processing stages involved:
Data#
For an example, we’ll construct a dataset made of five overlapping 2D Gaussian distributions with different σs (spatial scales). By default we’ll have 10,000 datapoints from each category, but you should see sub-second response times even for 1 million datapoints per category if you increase
num.
import pandas as pd
import numpy as np
from collections import OrderedDict as odict

num=10000
np.random.seed(1)

dists = {cat: pd.DataFrame(odict([('x',np.random.normal(x,s,num)),
                                  ('y',np.random.normal(y,s,num)),
                                  ('val',val),
                                  ('cat',cat)]))
         for x,  y,  s,  val, cat in
         [(  2,  2, 0.03, 10, "d1"),
          (  2, -2, 0.10, 20, "d2"),
          ( -2, -2, 0.50, 30, "d3"),
          ( -2,  2, 1.00, 40, "d4"),
          (  0,  0, 3.00, 50, "d5")] }

df = pd.concat(dists,ignore_index=True)
df["cat"]=df["cat"].astype("category")
Datashader can work with many different data objects provided by different data libraries depending on the type of data involved, such as columnar data in Pandas or Dask dataframes, gridded multidimensional array data using xarray, columnar data on GPUs using cuDF, multidimensional arrays on GPUs using CuPy, and ragged arrays using SpatialPandas (see the Performance User Guide for a guide to selecting an appropriate library). Here, we’re using a Pandas dataframe, with 50,000 rows by default:
df.tail()
To illustrate this dataset, we’ll make a quick-and-dirty Datashader plot that dumps these x,y coordinates into an image:
import datashader as ds
import datashader.transfer_functions as tf

%time tf.shade(ds.Canvas().points(df,'x','y'))
CPU times: user 450 ms, sys: 23.8 ms, total: 474 ms Wall time: 474 ms
Without any special tweaking, datashader is able to reveal the overall shape of this distribution faithfully: four summed 2D normal distributions of different variances, arranged at the corners of a square, overlapping another very high-variance 2D normal distribution centered in the square. This immediately obvious structure makes a great starting point for exploring the data, and you can then customize each of the various stages involved as described below.
Of course, this is just a static plot, and you can’t see what the axes are, so we can instead embed this data into an interactive plot if we prefer:
import holoviews as hv
from holoviews.operation.datashader import datashade

hv.extension("bokeh")

datashade(hv.Points(df))
How To Build Your Own Comment System Using Firebase
A comments section is a great way to build a community for your blog. Recently when I started blogging, I thought of adding a comments section. However, it wasn’t easy. Hosted comments systems, such as Disqus and Commento, come with their own set of problems:
- They own your data.
- They are not free.
- You cannot customize them much.
So, I decided to build my own comments system. Firebase seemed like a perfect hosting alternative to running a back-end server.
First of all, you get all of the benefits of having your own database: You control the data, and you can structure it however you want. Secondly, you don’t need to set up a back-end server. You can easily control it from the front end. It’s like having the best of both worlds: a hosted system without the hassle of a back end.
In this post, that’s what we’ll do. We will learn how to set up Firebase with Gatsby, a static site generator. But the principles can be applied to any static site generator.
Let’s dive in!
What Is Firebase?
Firebase is a back end as a service that offers tools for app developers such as database, hosting, cloud functions, authentication, analytics, and storage.
Cloud Firestore (Firebase’s database) is the functionality we will be using for this project. It is a NoSQL database. This means it’s not structured like a SQL database with rows, columns, and tables. You can think of it as a large JSON tree.
Introduction to the Project
Let’s initialize the project by cloning or downloading the repository from GitHub.
I’ve created two branches for every step (one at the beginning and one at the end) to make it easier for you to track the changes as we go.
Let’s run the project using the following command:
gatsby develop
If you open the project in your browser, you will see the bare bones of a basic blog.
The comments section is not working. It is simply loading a sample comment, and, upon the comment’s submission, it logs the details to the console.
Our main task is to get the comments section working.
How the Comments Section Works
Before doing anything, let’s understand how the code for the comments section works.
Four components handle the comments section:

- blog-post.js
- Comments.js
- CommentForm.js
- Comment.js
First, we need to identify the comments for a post. This can be done by making a unique ID for each blog post, or we can use the slug, which is always unique.
The blog-post.js file is the layout component for all blog posts. It is the perfect entry point for getting the slug of a blog post. This is done using a GraphQL query.
export const query = graphql`
  query($slug: String!) {
    markdownRemark(fields: { slug: { eq: $slug } }) {
      html
      frontmatter {
        title
      }
      fields {
        slug
      }
    }
  }
`
Before sending it over to the Comments.js component, we use the substring() method to get rid of the trailing slash (/) that Gatsby adds to the slug.
const slug = post.fields.slug.substring(1, post.fields.slug.length - 1)

return (
  <Layout>
    <div className="container">
      <h1>{post.frontmatter.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.html }} />
      <Comments comments={comments} slug={slug} />
    </div>
  </Layout>
)
}
The Comments.js component maps each comment and passes its data over to Comment.js, along with any replies. For this project, I have decided to go one level deep with the commenting system.
The component also loads CommentForm.js to capture any top-level comments.
const Comments = ({ comments, slug }) => {
  return (
    <div>
      <h2>Join the discussion</h2>
      <CommentForm slug={slug} />
      <CommentList>
        {comments.length > 0 &&
          comments
            .filter(comment => !comment.pId)
            .map(comment => {
              let child
              if (comment.id) {
                child = comments.find(c => comment.id === c.pId)
              }
              return (
                <Comment
                  key={comment.id}
                  child={child}
                  comment={comment}
                  slug={slug}
                />
              )
            })}
      </CommentList>
    </div>
  )
}
Let’s move over to CommentForm.js. This file is simple, rendering a comment form and handling its submission. The submission method simply logs the details to the console.
const handleCommentSubmission = async e => {
  e.preventDefault()
  let comment = {
    name: name,
    content: content,
    pId: parentId || null,
    time: new Date(),
  }
  setName("")
  setContent("")
  console.log(comment)
}
The Comment.js file has a lot going on. Let’s break it down into smaller pieces.
First, there is a SingleComment component, which renders a comment.
I am using the Adorable API to get a cool avatar. The Moment.js library is used to render time in a human-readable format.
const SingleComment = ({ comment }) => (
  <div>
    <div className="flex-container">
      <div className="flex">
        <img src="" alt="Avatar" />
      </div>
      <div className="flex">
        <p className="comment-author">
          {comment.name} <span>says</span>
        </p>
        {comment.time && (
          <time>{moment(comment.time.toDate()).calendar()}</time>
        )}
      </div>
    </div>
    <p>{comment.content}</p>
  </div>
)
Next in the file is the Comment component. This component shows a child comment if any child comment was passed to it. Otherwise, it renders a reply box, which can be toggled on and off by clicking the “Reply” or “Cancel Reply” button.
const Comment = ({ comment, child, slug }) => {
  const [showReplyBox, setShowReplyBox] = useState(false)
  return (
    <CommentBox>
      <SingleComment comment={comment} />
      {child && (
        <CommentBox child className="comment-reply">
          <SingleComment comment={child} />
        </CommentBox>
      )}
      {!child && (
        <div>
          {showReplyBox ? (
            <div>
              <button
                className="btn bare"
                onClick={() => setShowReplyBox(false)}
              >
                Cancel Reply
              </button>
              <CommentForm parentId={comment.id} slug={slug} />
            </div>
          ) : (
            <button className="btn bare" onClick={() => setShowReplyBox(true)}>
              Reply
            </button>
          )}
        </div>
      )}
    </CommentBox>
  )
}
Now that we have an overview, let’s go through the steps of making our comments section.
1. Add Firebase
First, let’s set up Firebase for our project.
Start by signing up. Go to Firebase and sign in with your Google account, then click “Get Started”.
Click on “Add Project” to add a new project. Add a name for your project, and click “Create a project”.
Once we have created a project, we’ll need to set up Cloud Firestore.
In the left-side menu, click “Database”. Once a page opens saying “Cloud Firestore”, click “Create database” to create a new Cloud Firestore database.
When the popup appears, choose “Start in test mode”. Next, pick the Cloud Firestore location closest to you.
Once you see a page like this, it means you’ve successfully created your Cloud Firestore database.
Let’s finish by setting up the logic for the application. Go back to the application and install Firebase:
yarn add firebase
Add a new file, firebase.js, in the root directory. Paste this content in it:
import firebase from "firebase/app"
import "firebase/firestore"

var firebaseConfig = 'yourFirebaseConfig'

firebase.initializeApp(firebaseConfig)

export const firestore = firebase.firestore()

export default firebase
You’ll need to replace yourFirebaseConfig with the one for your project. To find it, click on the gear icon next to “Project Overview” in the Firebase app.
This opens up the settings page. Under your app’s subheading, click the web icon, which looks like this:
This opens a popup. In the “App nickname” field, enter any name, and click “Register app”. This will give your
firebaseConfig object.
<!-- The core Firebase JS SDK is always required and must be listed first -->
<script src=""></script>

<!-- TODO: Add SDKs for Firebase products that you want to use -->

<script>
  // Your web app’s Firebase configuration
  var firebaseConfig = {
    ...
  };
  // Initialize Firebase
  firebase.initializeApp(firebaseConfig);
</script>
Copy just the contents of the firebaseConfig object, and paste it in the firebase.js file.
Is It OK to Expose Your Firebase API Key?
Yes. As stated by a Google engineer, exposing your API key is OK.
The only purpose of the API key is to identify your project with the database at Google. If you have set strong security rules for Cloud Firestore, then you don’t need to worry if someone gets ahold of your API key.
We’ll talk about security rules in the last section.
For now, we are running Firestore in test mode, so you should not reveal the API key to the public.
How to Use Firestore?
You can store data in one of two types:
- collection
A collection contains documents. It is like an array of documents.
- document
A document contains data in a field-value pair.
Remember that a collection may contain only documents and not other collections. But a document may contain other collections.
This means that if we want to store a collection within a collection, then we would store the collection in a document and store that document in a collection, like so:
{collection-1}/{document}/{collection-2}
How to Structure the Data?
Cloud Firestore is hierarchical in nature, so people tend to store data like this:
blog/{blog-post-1}/content/comments/{comment-1}
But storing data in this way often introduces problems.
Say you want to get a comment. You’ll have to look for the comment stored deep inside the blog collection. This will make your code more error-prone. Chris Esplin recommends never using sub-collections.
I would recommend storing data as a flattened object:
blog-posts/{blog-post-1} comments/{comment-1}
This way, you can get and send data easily.
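To make the flattened layout concrete, here is a tiny in-memory sketch (plain JavaScript, not the Firestore API):

```javascript
// Each comment lives in one top-level "comments" collection and simply
// carries the slug of the post it belongs to.
const comments = [
  { id: "c1", slug: "/first-post",  content: "Nice!",  pId: null },
  { id: "c2", slug: "/second-post", content: "Thanks", pId: null },
  { id: "c3", slug: "/first-post",  content: "Agreed", pId: "c1" },
];

// Fetching a post's comments is a flat filter -- no digging through
// nested sub-collections.
const forPost = comments.filter(c => c.slug === "/first-post");
console.log(forPost.map(c => c.id)); // [ 'c1', 'c3' ]
```

This is the same filter-by-slug shape the real query uses later in the article.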
How to Get Data From Firestore?
To get data, Firebase gives you two methods:
get()
This is for getting the content once.
onSnapshot()
This method sends you data and then continues to send updates unless you unsubscribe.
How to Send Data to Firestore?
Just like with getting data, Firebase has two methods for saving data:
set()
This is used to specify the ID of a document.
add()
This is used to create documents with automatic IDs.
I know, this has been a lot to grasp. But don’t worry, we’ll revisit these concepts again when we reach the project.
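The set()/add() distinction can be pictured with a tiny in-memory stand-in for a collection (plain JavaScript, not the Firebase API):

```javascript
// Hypothetical in-memory stand-in for a Firestore collection.
class FakeCollection {
  constructor() { this.docs = new Map(); this.nextId = 0; }
  // add(): the store picks the document ID for you
  add(data) {
    const id = "auto-" + this.nextId++;
    this.docs.set(id, data);
    return id;
  }
  // set(): you pick the document ID yourself
  set(id, data) { this.docs.set(id, data); }
}

const comments = new FakeCollection();
const autoId = comments.add({ content: "hi" });  // ID chosen by the store
comments.set("pinned", { content: "welcome" }); // ID chosen by us
console.log(autoId, comments.docs.size); // auto-0 2
```

For the comments in this tutorial we never care what the ID is, which is why add() is the right fit.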
2. Create Sample Data
The next step is to create some sample data for us to query. Let’s do this by going to Firebase.
Go to Cloud Firestore. Click “Start a collection”. Enter comments for the “Collection ID”, and click “Next”.

For the “Document ID”, click “Auto-ID”. Enter the following data and click “Save”.
While you’re entering data, make sure the “Fields” and “Types” match the screenshot above. Then, click “Save”.
That’s how you add a comment manually in Firestore. The process looks cumbersome, but don’t worry: From now on, our app will take care of adding comments.
At this point, our database looks like this:
3. Get the Comments Data
Our sample data is ready to query. Let’s get started by getting the data for our blog.
Go to blog-post.js, and import Firestore from the Firebase file that we just created.
import {firestore} from "../../firebase.js"
To query, we will use the useEffect hook from React. If you haven’t already, let’s import it as well.
useEffect(() => {
  firestore
    .collection(`comments`)
    .onSnapshot(snapshot => {
      const posts = snapshot.docs
        .filter(doc => doc.data().slug === slug)
        .map(doc => {
          return { id: doc.id, ...doc.data() }
        })
      setComments(posts)
    })
}, [slug])
The method used to get data is onSnapshot. This is because we also want to listen to state changes. So, the comments will get updated without the user having to refresh the browser.
We used the filter and map methods to find the comments whose slug matches the current slug.
One last thing we need to think about is cleanup. Because onSnapshot continues to send updates, it could introduce a memory leak in our application. Fortunately, Firebase provides a neat fix.
useEffect(() => {
  const cleanUp = firestore
    .collection(`comments`)
    .onSnapshot(snapshot => {
      const posts = snapshot.docs
        .filter(doc => doc.data().slug === slug)
        .map(doc => {
          return { id: doc.id, ...doc.data() }
        })
      setComments(posts)
    })
  return () => cleanUp()
}, [slug])
Once you’re done, run gatsby develop to see the changes. We can now see our comments section getting data from Firebase.
Let’s work on storing the comments.
4. Store Comments
To store comments, navigate to the CommentForm.js file. Let’s import Firestore into this file as well.
import { firestore } from "../../firebase.js"
To save a comment to Firebase, we’ll use the add() method, because we want Firestore to create documents with an auto-ID.
Let’s do that in the handleCommentSubmission method.
firestore
  .collection(`comments`)
  .add(comment)
  .catch(err => {
    console.error('error adding comment: ', err)
  })
First, we get the reference to the comments collection, and then add the comment. We’re also using the catch method to catch any errors while adding comments.
At this point, if you open a browser, you can see the comments section working. We can add new comments, as well as post replies. What’s more amazing is that everything works without our having to refresh the page.
You can also check Firestore to see that it is storing the data.
Finally, let’s talk about one crucial thing in Firebase: security rules.
5. Tighten Security Rules
Until now, we’ve been running Cloud Firestore in test mode. This means that anybody with access to the URL can add to and read our database. That is scary.
To tackle that, Firebase provides us with security rules. We can create a database pattern and restrict certain activities in Cloud Firestore.
In addition to the two basic operations (read and write), Firebase offers more granular operations: get, list, create, update, and delete.
A read operation can be broken down as:
get
Get a single document.
list
Get a list of documents or a collection.
A write operation can be broken down as:
create
Create a new document.
update
Update an existing document.
delete
Delete a document.
To secure the application, head back to Cloud Firestore. Under “Rules”, enter this:
service cloud.firestore {
  match /databases/{database}/documents {
    match /comments/{id=**} {
      allow read, create;
    }
  }
}
On the first line, we define the service, which, in our case, is Firestore. The next lines tell Firebase that anything inside the comments collection may be read and created.
If we had used this:
allow read, write;
… that would mean that users could update and delete existing comments, which we don’t want.
Firebase’s security rules are extremely powerful, allowing us to restrict certain data, activities, and even users.
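As one illustration of that granularity (a sketch, not part of this article's setup), rules can also validate the shape of a new comment before allowing the create:

```
service cloud.firestore {
  match /databases/{database}/documents {
    match /comments/{id=**} {
      allow read;
      // Only allow creates that carry the fields our form actually sends.
      allow create: if request.resource.data.keys().hasAll(['name', 'content', 'time'])
                    && request.resource.data.name is string
                    && request.resource.data.name.size() < 100;
    }
  }
}
```

This keeps anonymous commenting open while rejecting malformed or oversized writes at the database boundary.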
On To Building Your Own Comments Section
Congrats! You have just seen the power of Firebase. It is such an excellent tool to build secure and fast applications.
We’ve built a super-simple comments section. But there’s no stopping you from exploring further possibilities:
- Add profile pictures, and store them in Cloud Storage for Firebase;
- Use Firebase to allow users to create an account, and authenticate them using Firebase authentication;
- Use Firebase to create inline Medium-like comments.
A great way to start would be to head over to Firestore’s documentation.
Finally, let’s head over to the comments section below and discuss your experience with building a comments section using Firebase.
| https://www.smashingmagazine.com/2020/08/comment-system-firebase/?utm_campaign=Frontend%2BWeekly&utm_medium=web&utm_source=Frontend_Weekly_218 | CC-MAIN-2020-40 | refinedweb | 2,435 | 68.26 |
#include <DEV_IO.h>
Inheritance diagram for ACE_DEV_IO:
<buf> is the buffer to write from or receive into.
<len> is the number of bytes to transfer.
The <timeout> parameter in the following methods indicates how long to block trying to transfer data.
On partial transfers, i.e., if any data is transferred before timeout/error/EOF, <bytes_transferred> will contain the number of bytes transferred.
Default constructor.
Dump the state of an object.
Reimplemented from ACE_DEV.
Return the local endpoint address.
Return the address of the remotely connected peer (if there is one).
Send n bytes, keep trying until n are sent.
[friend]
Declare the dynamic allocation hooks.
[private]
Address of device we are connected to. | http://www.theaceorb.com/1.4a/doxygen/ace/classACE__DEV__IO.html | CC-MAIN-2017-51 | refinedweb | 117 | 62.75 |
The enum keyword is used to declare an enumeration, a distinct type that consists of a set of named constants called the enumerator list. We strongly recommend that an enum contain a constant with a value of 0. For more information, see Enumeration Types (C# Programming Guide).
Every enumeration type has an underlying type, which can be any integral type except char. The default underlying type of the enumeration elements is int. To declare an enum of another integral type, such as byte, use a colon after the identifier followed by the type:
enum Days : byte {Sat=1, Sun, Mon, Tue, Wed, Thu, Fri};.
An enumerator may not contain white space in its name. A cast is needed to convert an enum value to an integral type:
int x = (int)Days.Sun;
When you apply System.FlagsAttribute to an enumeration whose elements are combined with a bitwise OR operation, you will notice that the attribute affects the behavior of the enum when it is used with some tools. You can notice these changes when you use tools such as the Console class methods, the Expression Evaluator, and so forth. (See example 3.) Assigning additional values to new versions of an enum, or changing the values of enum members in a new version, can cause problems for dependent source code. Enum values are often used in switch statements. If additional elements have been added to the enum type, the test for default values can return true unexpectedly.
If other developers will be using your code, you should provide guidelines about how their code should react if new elements are added to any enum types.
In this example, an enumeration, Days, is declared. Two enumerators are explicitly converted to integer and assigned to integer variables.
public class EnumTest
{
    enum Days { Sun, Mon, Tue, Wed, Thu, Fri, Sat };

    static void Main()
    {
        int x = (int)Days.Sun;
        int y = (int)Days.Fri;
        Console.WriteLine("Sun = {0}", x);
        Console.WriteLine("Fri = {0}", y);
    }
}
/* Output:
   Sun = 0
   Fri = 5
*/
In this example, the base-type option is used to declare an enum whose members are of the type long. Notice that even though the underlying type of the enumeration is long, the enumeration members must still be explicitly converted to type long by using a cast.
public class EnumTest2
{
    enum Range : long { Max = 2147483648L, Min = 255L };

    static void Main()
    {
        long x = (long)Range.Max;
        long y = (long)Range.Min;
        Console.WriteLine("Max = {0}", x);
        Console.WriteLine("Min = {0}", y);
    }
}
/* Output:
   Max = 2147483648
   Min = 255
*/
The following code example illustrates the use and effect of the System..::.FlagsAttribute attribute on an enum declaration.
[Flags]
public enum CarOptions
{
    SunRoof = 0x01,
    Spoiler = 0x02,
    FogLights = 0x04,
    TintedWindows = 0x08
}

public class FlagTest
{
    static void Main()
    {
        CarOptions options = CarOptions.SunRoof | CarOptions.FogLights;
        Console.WriteLine(options);
        Console.WriteLine((int)options);
    }
}
/* Output:
SunRoof, FogLights
5
*/
Notice that if you remove FlagsAttribute, the example will output the following:
5
5
For more information, see the following sections in the C# Language Specification:
1.10 Enums
6.2.2 Explicit Enumeration Conversions
14 Enums | http://msdn.microsoft.com/en-us/library/sbbt4032.aspx | crawl-002 | refinedweb | 418 | 66.64 |
newbie - help - where do u store custom classes when importing namespaces in ASP
Discussion in 'ASP .Net' started by ravi sankar, Aug 25, 2003.
Name | Synopsis | Description | Return Values | Errors | Usage | Attributes | See Also
#include <sys/lock.h>

int plock(int op);
The plock() function allows the calling process to lock or unlock into memory its text segment (text lock), its data segment (data lock), or both its text and data segments (process lock). Locked segments are immune to all routine swapping. The effective user ID of the calling process must be super-user to use this call.
The plock() function performs the function specified by op:
PROCLOCK
    Lock text and data segments into memory (process lock).

TXTLOCK
    Lock text segment into memory (text lock).

DATLOCK
    Lock data segment into memory (data lock).

UNLOCK
    Remove locks.
Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.
The plock() function fails and does not perform the requested operation if:
EAGAIN
    Not enough memory.

EINVAL
    The op argument is equal to PROCLOCK and a process lock, a text lock, or a data lock already exists on the calling process; the op argument is equal to TXTLOCK and a text lock or a process lock already exists on the calling process; the op argument is equal to DATLOCK and a data lock or a process lock already exists on the calling process; or the op argument is equal to UNLOCK and no lock exists on the calling process.

EPERM
    The {PRIV_PROC_LOCK_MEMORY} privilege is not asserted in the effective set of the calling process.
The mlock(3C) and mlockall(3C) functions are the preferred interfaces for process locking.
See attributes(5) for descriptions of the attributes of this interface.
exec(2), exit(2), fork(2), memcntl(2), mlock(3C), mlockall(3C), attributes(5)
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i09999/index.html
Solving differential algebraic equations with help from autograd
Posted September 22, 2019 at 12:59 PM | categories: autograd, dae, ode
This problem is adapted from one in "Problem Solving in Chemical Engineering with Numerical Methods, Michael B. Cutlip, Mordechai Shacham".
In the binary, batch distillation of benzene (1) and toluene (2), the moles of liquid \(L\) remaining as a function of the mole fraction of toluene (\(x_2\)) is expressed by:
\(\frac{dL}{dx_2} = \frac{L}{x_2 (k_2 - 1)}\)
where \(k_2\) is the vapor liquid equilibrium ratio for toluene. This can be computed as:
\(k_i = P_i / P\), where \(P_i = 10^{A_i + \frac{B_i}{T + C_i}}\); pressure is in mmHg and temperature is in degrees Celsius.
One difficulty in solving this problem is that the temperature is not constant; it changes with the composition. We know that the temperature changes to satisfy this constraint \(k_1(T) x_1 + k_2(T) x_2 = 1\).
Sometimes, one can solve for T directly, and substitute it into the first ODE, but this is not a possibility here. One way you might solve this is to use the constraint to find \(T\) inside an ODE function, but that is tricky; nonlinear algebra solvers need a guess and don't always converge, or may converge to non-physical solutions. They also require iterative solutions, so they will be slower than an approach where we just have to integrate the solution. A better way is to derive a second ODE \(dT/dx_2\) from the constraint. The constraint is implicit in \(T\), so We compute it as \(dT/dx_2 = -df/dx_2 / df/dT\) where \(f(x_2, T) = k_1(T) x_1 + k_2(T) x_2 - 1 = 0\). This equation is used to compute the bubble point temperature. Note, it is possible to derive these analytically, but who wants to? We can use autograd to get those derivatives for us instead.
The following information is given:
The total pressure is fixed at 1.2 atm, and the distillation starts at \(x_2=0.4\). There are initially 100 moles in the distillation.
We have to start by finding the initial temperature from the constraint.
import autograd.numpy as np
from autograd import grad
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve
%matplotlib inline
import matplotlib.pyplot as plt

P = 760 * 1.2  # mmHg
A1, B1, C1 = 6.90565, -1211.033, 220.79
A2, B2, C2 = 6.95464, -1344.8, 219.482

def k1(T):
    return 10**(A1 + B1 / (C1 + T)) / P

def k2(T):
    return 10**(A2 + B2 / (C2 + T)) / P

def f(x2, T):
    x1 = 1 - x2
    return k1(T) * x1 + k2(T) * x2 - 1

T0, = fsolve(lambda T: f(0.4, T), 96)
print(f'The initial temperature is {T0:1.2f} degC.')
The initial temperature is 95.59 degC.
Next, we compute the derivative we need. This derivative is derived from the constraint, which should ensure that the temperature changes as required to maintain the constraint.
dfdx2 = grad(f, 0)
dfdT = grad(f, 1)

def dTdx2(x2, T):
    return -dfdx2(x2, T) / dfdT(x2, T)

def ode(x2, X):
    L, T = X
    dLdx2 = L / (x2 * (k2(T) - 1))
    return [dLdx2, dTdx2(x2, T)]
Next we solve and plot the ODE.
x2span = (0.4, 0.8)
X0 = (100, T0)
sol = solve_ivp(ode, x2span, X0, max_step=0.01)

plt.plot(sol.t, sol.y.T)
plt.legend(['L', 'T']);
plt.xlabel('$x_2$')
plt.ylabel('L, T')

x2 = sol.t
L, T = sol.y
print(f'At x2={x2[-1]:1.2f} there are {L[-1]:1.2f} moles of liquid left at {T[-1]:1.2f} degC')
At x2=0.80 there are 14.04 moles of liquid left at 108.57 degC
[Figure: L and T plotted against x2]
You can see that the liquid level drops, and the temperature rises.
Let's double check that the constraint is actually met. We do that qualitatively here by plotting it, and quantitatively by showing all values are close to 0.
constraint = k1(T) * (1 - x2) + k2(T) * x2 - 1

plt.plot(x2, constraint)
plt.ylim([-1, 1])
plt.xlabel('$x_2$')
plt.ylabel('constraint value')

print(np.allclose(constraint, np.zeros_like(constraint)))
constraint
True
array([ 2.22044605e-16, 4.44089210e-16, 2.22044605e-16, 0.00000000e+00, 1.11022302e-15, 0.00000000e+00, 6.66133815e-16, 0.00000000e+00, -2.22044605e-16, 1.33226763e-15, 8.88178420e-16, -4.44089210e-16, 4.44089210e-16, 1.11022302e-15, -2.22044605e-16, 0.00000000e+00, -2.22044605e-16, -1.11022302e-15, 4.44089210e-16, 0.00000000e+00, -4.44089210e-16, 4.44089210e-16, -6.66133815e-16, -4.44089210e-16, 4.44089210e-16, -1.11022302e-16, -8.88178420e-16, -8.88178420e-16, -9.99200722e-16, -3.33066907e-16, -7.77156117e-16, -2.22044605e-16, -9.99200722e-16, -1.11022302e-15, -3.33066907e-16, -1.99840144e-15, -1.33226763e-15, -2.44249065e-15, -1.55431223e-15, -6.66133815e-16, -2.22044605e-16])
[Figure: constraint value plotted against x2]
So indeed, the constraint is met! Once again, autograd comes to the rescue in making a computable derivative from an algebraic constraint so that we can solve a DAE as a set of ODEs using our regular machinery. Nice work autograd!
Copyright (C) 2019 by John Kitchin. See the License for information about copying.
Org-mode version = 9.2.3 | http://kitchingroup.cheme.cmu.edu/blog/category/dae/
C# is a language based on object-oriented programming concepts. In the last tutorial, we learned about classes and objects; in this chapter, we will learn about inheritance in C# (another OOP concept). Inheritance is simply the process of creating one class from another class, which allows us to use the features of one main class (the base class) in another class (the derived class). C# gives us polymorphism through inheritance.
C# doesn't support multiple inheritance of classes.
The class which is inherited is called the base class, and the class which inherits the properties and methods of the base class is known as the derived class.
Syntax
<access_modifier> class DerivedClass_Name : BaseClass_Name
{
    // code
}
Example:
using System;
public class Program
{
    // base class, Animal
    public class Animal
    {
        public void Talk()
        {
            Console.WriteLine("Animal talk");
        }
    }

    // inherits Animal class
    class Dog : Animal
    {
        public new void Talk()  // 'new' makes the intent explicit: this hides Animal.Talk()
        {
            Console.WriteLine("Dog talk");
        }
    }

    public static void Main()
    {
        Animal a1 = new Animal();
        a1.Talk();
        Dog a2 = new Dog();
        a2.Talk();
    }
}
Output
Animal talk
Dog talk
As you can see in the above example, we were able to hide (shadow) the method Talk() of the Animal class in the derived class Dog. Note that this is method hiding, not true overriding: to override, the base method must be declared virtual and the derived method marked override.
Now, suppose I remove the Talk() method from the derived class (Dog):
using System;
public class Program
{
    // base class, Animal
    public class Animal
    {
        public void Talk()
        {
            Console.WriteLine("Animal talk");
        }
    }

    // inherits Animal class
    class Dog : Animal
    {
    }

    public static void Main()
    {
        Animal a1 = new Animal();
        a1.Talk();
        Dog a2 = new Dog();
        a2.Talk();
    }
}
Output will be
Animal talk
Animal talk
Basically, as there is now no definition of Talk() in the derived class, the call simply resolves to the base-class method.
There are 4 types of inheritance in C#:

1. Single inheritance
2. Multi-level inheritance
3. Hierarchical inheritance
4. Multiple inheritance (through interfaces)
We just saw an example of single inheritance above; let's check examples of the other inheritance types.
When a class is derived from a base class, and then a new class is derived from that derived class, it is known as multi-level inheritance.
Example
using System;
namespace Application
{
    public class A
    {
        public void show()
        {
            Console.WriteLine("Class A");
        }
    }

    public class B : A  // class B is derived from class A
    {
        public void display()
        {
            Console.WriteLine("Class B Call");
        }
    }

    class C : B  // class C is derived from class B
    {
        public void show1()
        {
            Console.WriteLine("Class C Call");
        }
    }

    public class multilevel
    {
        public static void Main()
        {
            C obj = new C();
            obj.show();     // member function inherited from A
            obj.display();  // member function inherited from B
            obj.show1();    // C's own member function
        }
    }
}
Output:
Class A
Class B Call
Class C Call
In this type of inheritance, there are multiple classes derived from one base class. It is used when the features of one class are required in multiple classes. Let us have a look at the example:
using System;
namespace Application
{
    public class A
    {
        public void show()
        {
            Console.WriteLine("Show from A");
        }
    }

    public class B : A  // class B is derived from class A
    {
        public void display()
        {
            Console.WriteLine("Display from B");
        }
    }

    public class C : A  // class C is also derived from class A
    {
        public void show1()
        {
            Console.WriteLine("Display from C");
        }
    }

    public class hierarchical
    {
        public static void Main()
        {
            B obj1 = new B();
            C obj2 = new C();
            obj1.show();
            obj1.display();
            obj2.show1();
        }
    }
}
Output
Show from A
Display from B
Display from C
As stated above, multiple inheritance of classes is not supported in C#, but to solve this issue we can use an interface and derive a class from both a base class and the interface. Check the example to understand it more.
using System;
namespace Application
{
    // base class
    public class Shape
    {
        public void setSide(int s)
        {
            side = s;
        }

        protected int side;
    }

    // interface
    public interface Cost
    {
        int getCost(int area);
    }

    // derived class, using the interface and the base class
    class square : Shape, Cost
    {
        // methods in derived class
        public int getArea()
        {
            return (side * side);
        }

        public int getCost(int area)
        {
            return area * 10;
        }
    }

    public class SquareInheritance
    {
        public static void Main(string[] args)
        {
            square sq = new square();
            int area;
            sq.setSide(5);
            area = sq.getArea();
            // Print the area and cost of the object.
            Console.WriteLine("The area is: {0}", sq.getArea());
            Console.WriteLine("The cost is: {0}", sq.getCost(area));
        }
    }
}
Output:

The area is: 25
The cost is: 250
As you can see in the above code, we are using "Shape" as the base class and added the interface "Cost" as a second base type to create the derived class "square". | https://qawithexperts.com/tutorial/c-sharp/23/c-sharp-inheritance
While I agree with keeping data structures simple and clean, I think
conserving them forever is a bad idea in general. Let's look at every
particular case before making a decision.

On Mon, Apr 9, 2012 at 3:46 PM, Andrew Svetlov <andrew.svetlov at gmail.com> wrote:
> So it's really no difference between three separate fields in frame
> and embedded struct with those fields.
>
> On Mon, Apr 9, 2012 at 1:51 PM, Mark Shannon <mark at hotpy.org> wrote:
>> Andrew Svetlov wrote:
>>> Do you want to create `frame` and `f_namespaces` every function call
>>> instead of single `frame` creation?
>>
>> f_namespaces would be part of the frame, replacing f_builtins, f_globals
>> and f_locals. The indirection of an external object hurts performance,
>> so it would have to be a struct within the frame. The aim is clarity;
>> locals, globals and builtins form a trio, so should be implemented as such.
>>
>>> On Mon, Apr 9, 2012 at 11:56 AM, Mark Shannon <mark at hotpy.org> wrote:
>>>> The frame object is a key object in CPython. It holds the state
>>>> of a function invocation. Frame objects are allocated, initialised
>>>> and deallocated at a rapid rate.
>>>> Each extra field in the frame object requires extra work for each
>>>> and every function invocation. Fewer fields in the frame object
>>>> means less overhead for function calls, and cleaner simpler code.
>>>>
>>>> We have recently removed the f_yieldfrom field from the frame object.
>>>> ()
>>>>
>>>> The f_exc_type, f->f_exc_value, f->f_exc_traceback fields which handle
>>>> sys.exc_info() in generators could be moved to the generator object.
>>>> ()
>>>>
>>>> The f_tstate field is redundant and, it would seem, dangerous.
>>>> ()
>>>>
>>>> The f_builtins, f_globals, f_locals fields could be combined into a
>>>> single f_namespaces struct.
>>>> ()
>>>>
>>>> Now PEP 419 proposes adding (yet) another field to the frame object.
>>>> Please don't.
>>>>
>>>> Clean, concise data structures lead to clean, concise code,
>>>> which we all know is a "good thing" :)
>>>>
>>>> Cheers,
>>>> Mark.
>>>>
>>>> _______________________________________________
>>>> Python-Dev mailing list
>>>> Python-Dev at python.org
>>>> Unsubscribe:
>
> --
> Thanks,
> Andrew Svetlov

--
Thanks,
Andrew Svetlov | https://mail.python.org/pipermail/python-dev/2012-April/118627.html
IRC log of tp on 2003-03-05
Timestamps are in UTC.
12:29:11 [RRSAgent]
RRSAgent has joined #tp
12:29:28 [RalphS]
2003-03-05 Technical Plenary
12:29:30 [RalphS]
->
agenda
13:10:49 [geoff_a]
geoff_a has joined #tp
13:16:45 [Norm]
Norm has joined #tp
13:20:13 [ht]
ht has joined #tp
13:20:28 [ht]
This channel for logging TP sessions?
13:22:59 [Norm]
Norm has joined #tp
13:23:29 [marie]
marie has joined #tp
13:27:57 [Ian]
Ian has joined #tp
13:28:45 [Ian]
Ian has changed the topic to: Tech Plenary
13:28:47 [marie]
marie has joined #tp
13:28:57 [marie]
yes this is the channel
13:29:02 [Marsh]
Marsh has joined #tp
13:29:26 [Bert-lap]
Bert-lap has joined #tp
13:29:35 [marie]
mc, ht and ij are this morning's scribes
13:29:40 [Steven]
Steven has joined #tp
13:29:44 [marie]
s/mc/mcf
13:29:52 [mimasa]
mimasa has joined #tp
13:31:31 [ps]
ps has joined #tp
13:32:15 [mdubinko]
mdubinko has joined #tp
13:32:33 [marie]
-----
13:32:36 [zrendon]
zrendon has joined #tp
13:32:55 [marie]
Session 1: Welcome! by Steve Bratt
13:33:09 [zrendon]
zrendon has left #tp
13:33:22 [marie]
slides at:
13:33:45 [zrendon]
zrendon has joined #tp
13:34:04 [ht]
zarella, did you get your web access problem sorted?
13:34:59 [zrendon]
Yes, thanks.
13:35:10 [amy]
amy has joined #tp
13:35:18 [hugo]
hugo has joined #tp
13:35:29 [gerald]
gerald has joined #tp
13:36:00 [Gudge]
Gudge has joined #tp
13:36:03 [marie]
Slide 3
13:36:04 [PGrosso]
PGrosso has joined #tp
13:36:06 [jeffsch]
jeffsch has joined #tp
13:36:29 [luu]
luu has joined #tp
13:36:33 [marie]
sb: I encourage you to sign up at today's BOF tables
13:36:40 [Don]
Don has joined #tp
13:36:44 [janet]
janet has joined #tp
13:37:08 [marie]
sb: town session at the end of the day, ian jacobs will moderate it
13:37:38 [marie]
...please prepare your questions and write them down on pieces of paper...
13:37:41 [carol]
carol has joined #tp
13:38:00 [marie]
I am happy to collect them all day long
13:38:20 [marie]
... but, Why Are We Here?:
13:39:52 [marie]
W3C's Mission:
13:39:56 [dino]
dino has joined #tp
13:40:06 [ddahl]
ddahl has joined #TP
13:40:20 [shayman]
shayman has joined #tp
13:40:33 [caribou]
caribou has joined #tp
13:41:42 [SueL]
SueL has joined #tp
13:41:56 [marie]
Engineering the Web's Foundational Technologies:
13:42:33 [olivier]
olivier has joined #tp
13:43:14 [emmanuel]
emmanuel has joined #tp
13:43:17 [marie]
Work Organization:
13:44:01 [marie]
sb: the org chart, done by ivan herman, is at
13:44:16 [slh]
slh has joined #tp
13:44:26 [marie]
sb: we started 8 new activities from last year
13:44:49 [marie]
Coordination:
13:45:12 [marie]
sb: essential in W3C = the coordination between WGs
13:45:51 [em-lap]
em-lap has joined #tp
13:47:26 [marie]
Tools for Coordination:
13:47:30 [steph-tp]
steph-tp has joined #tp
13:48:13 [marie]
sb displays the org chart showing dependencies among WGs
13:48:15 [MSM]
MSM has joined #tp
13:49:02 [Yves]
Yves has joined #tp
13:49:13 [simonMIT]
simonMIT has joined #tp
13:50:00 [marie]
sb: we didn't show dependencies from the QA groups, for ex., since they influence all WGs
13:50:08 [DanC]
DanC has joined #tp
13:50:18 [DanC]
scatter chart... oooh... ahh...
13:50:55 [asir]
asir has joined #tp
13:51:19 [marie]
Technical Plenary Week Participation
13:51:28 [marie]
:
13:52:00 [sandro]
sandro has joined #tp
13:52:01 [marie]
sb: record in participation + 30 different groups meeting this week
13:52:18 [marie]
Recommendations, Completed and En Route:
13:52:55 [MJDuerst]
MJDuerst has joined #tp
13:53:45 [marie]
sb shows the REC timeline
13:54:01 [chaalsBOS]
chaalsBOS has joined #tp
13:54:03 [marie]
Tools for Reaching REC Faster:
13:54:38 [marie]
sb: most important way to reach REC fast is to reach consensus
13:55:19 [Tantek]
Tantek has joined #tp
13:55:38 [marie]
sb: we will present very helpful tools such as the voting system, set up by Dominique Hazael-Massieux
13:55:41 [frankmcca]
frankmcca has joined #tp
13:55:53 [marie]
Future Work:
13:57:17 [marie]
Tech Plenary Agenda:
13:57:20 [tvraman]
tvraman has joined #tp
13:57:34 [tvraman]
Sniffing ether in a meeting is good for one's wakefulness
13:57:38 [marie]
sb presents today's agenda
13:58:12 [frankmcca]
frankmcca has joined #tp
13:58:28 [marie]
agenda at
13:58:39 [Don]
Don has joined #tp
13:58:52 [marie]
--- Questions?
13:59:04 [marie]
------
13:59:17 [ht]
13:59:21 [marie]
Session 2: What Does Anywhere, Anytime, Anyone, Any Device Access to the Web Really Mean?
13:59:26 [ht]
Dave Raggett in the chair
14:00:04 [ht]
welcome, introduction of panelists
14:00:07 [KevinLiu]
KevinLiu has joined #tp
14:00:26 [ht]
dr promises demo!
14:02:05 [ht]
Second slide
14:02:41 [ht]
(All slides are at same URL)
14:02:52 [jeffm]
jeffm has joined #tp
14:04:45 [ht]
Third slide
14:07:47 [MarkJ]
MarkJ has joined #TP
14:07:59 [ht]
ht has joined #tp
14:08:04 [ht]
scribe rejoins
14:08:23 [ht]
Call again for URL for these slides, anyone?
14:08:40 [KevinLiu]
it's at
14:09:01 [ht]
slide 3
14:09:02 [timbl]
timbl has joined #tp
14:09:20 [yasuyuki]
yasuyuki has joined #tp
14:09:21 [dom]
dom has joined #tp
14:09:26 [slh]
slh has joined #tp
14:09:34 [ht]
14:09:36 [steve]
steve has joined #tp
14:09:51 [Alan-Lap]
Alan-Lap has joined #tp
14:09:53 [ht]
14:10:19 [ht]
14:10:27 [Steven]
Steven has joined #tp
14:10:58 [ht]
14:11:22 [Steven]
Steven has joined #tp
14:11:36 [bwm]
bwm has joined #tp
14:11:45 [ht]
rg hands over to Jim Larson
14:12:07 [ht]
jl plays at calling on a cell phone
14:12:13 [ht]
and getting a voicemail system
14:12:25 [ht]
"Any time, any where, without being placed on hold"
14:12:26 [tvraman]
we should have a hack that results in the slides in a browser flipping when the scribe says next slide on the irc channel
14:12:54 [ht]
14:13:52 [ht]
second slide (all in same PDF doc't)
14:13:57 [ht]
third slide
14:14:18 [ht]
fourth slide
14:15:07 [Karl-lap]
Karl-lap has joined #tp
14:15:16 [ht]
fifth slide
14:15:20 [maxf]
maxf has joined #tp
14:15:28 [DanC]
hmm... which is that semantic mapping spec?
14:15:35 [ht]
sixth slide
14:15:47 [ht]
jl hands over to Deborah Dahl
14:15:51 [JosD]
JosD has joined #tp
14:15:58 [ht]
14:17:57 [ht]
ht has joined #tp
14:18:26 [ht]
14:19:37 [Steven]
Steven has joined #tp
14:19:47 [ht]
14:20:20 [Alan-Lap]
Alan-Lap has joined #tp
14:20:22 [ht]
[speaker has net problems too]
14:20:35 [ht]
Activities: Multimodal interaction framework
14:20:49 [ht]
interaction management
14:21:00 [Ben]
Ben has joined #tp
14:21:12 [ht]
Extensible Multimodal annotation
14:21:34 [ht]
Ink Markup -- representing pen input
14:22:07 [ht]
dd hands over to Janina Sajka
14:22:18 [ht]
slide URL, anyone?
14:23:02 [ht]
WAI and multimodal
14:23:20 [dom]
the URL is on screen:
14:24:01 [slh]
slh has joined #tp
14:24:17 [ht]
14:24:47 [ht]
14:25:48 [ht]
14:27:06 [ht]
14:27:18 [ht]
14:27:55 [ht]
14:28:14 [micah]
micah has joined #tp
14:29:07 [Jonathan]
Jonathan has joined #tp
14:29:40 [ht]
dr introduces Roger Gimson again for Challenges section
14:29:51 [SueL]
SueL has joined #tp
14:30:37 [KevinLiu]
KevinLiu has joined #tp
14:30:40 [ht]
slide pointer????
14:30:59 [RAM]
RAM has joined #tp
14:31:30 [ht]
14:31:38 [ht]
14:32:08 [ht]
14:32:46 [ddahl]
ddahl has joined #tp
14:33:04 [judy]
judy has joined #tp
14:33:59 [ht]
14:34:36 [timbl__]
timbl__ has joined #tp
14:35:08 [ygonno]
ygonno has joined #tp
14:35:18 [marie]
marie has joined #tp
14:35:19 [ht]
sg hands over to Scott McGlashan
14:35:21 [ht]
14:35:36 [ht]
14:36:09 [shayman]
shayman has joined #tp
14:37:38 [mjd]
mjd has joined #tp
14:37:39 [bwm_]
bwm_ has joined #tp
14:37:50 [JaNYC]
JaNYC has joined #tp
14:38:57 [ht]
14:39:30 [ht]
14:39:33 [Patrick_S]
Patrick_S has joined #tp
14:40:11 [ht]
14:40:13 [mdubinko]
mdubinko has joined #tp
14:40:53 [Karl-lap]
Karl-lap has joined #tp
14:41:01 [Nobu]
Nobu has joined #tp
14:41:12 [ht]
sg hands over to Deborah Dahl
14:41:13 [ht]
14:41:21 [bwm]
bwm has joined #tp
14:42:10 [mimasa]
mimasa has joined #tp
14:42:17 [Tantek]
Are these presentations supposed to be member only? (e.g. URLs above have been mostly W3C member only)
14:42:26 [ht]
14:42:34 [Ian]
Tantek: Public
14:43:49 [yasuyuki]
yasuyuki has joined #tp
14:44:07 [bwm]
bwm has joined #tp
14:44:29 [RAM_]
RAM_ has joined #tp
14:44:40 [Tantek]
Ian: Not according to checklink:
14:44:41 [PStickler]
PStickler has joined #tp
14:45:05 [MJDuerst]
MJDuerst has joined #tp
14:45:08 [ht]
ht has joined #tp
14:45:17 [ivan]
ivan has joined #tp
14:45:21 [JosD]
JosD has joined #tp
14:45:25 [ht]
dr hands over to Janina Sajka
14:45:27 [RalphS]
RalphS has joined #tp
14:45:28 [ht]
no slides
14:45:32 [ht]
audio demo coming up
14:45:47 [ht]
js points out that disability evolves
14:46:04 [ht]
as people age, sight and hearing deteriorate
14:46:50 [ht]
if we get this [multimodality] right, we should be able to smoothly adjust the balance of modality for an individual as their needs change
14:46:58 [Liam]
Liam has joined #tp
14:47:21 [ht]
js how to break through the "the disability market is too small to drive development"
14:47:34 [micah]
micah has joined #tp
14:47:56 [ndw]
ndw has joined #tp
14:48:00 [marie]
marie has joined #tp
14:48:02 [Don]
Don has joined #tp
14:48:09 [zrendon]
14:48:15 [ht]
js limited vocab voice interaction is not far from providing for deaf/h-o-h users
14:48:23 [AndyS]
AndyS has joined #tp
14:48:28 [ht]
via translation to symbolic interaction
14:49:15 [Alan-Lap]
Alan-Lap has joined #tp
14:49:15 [ht]
js final point on fluidity: when technology is immature (i.e. speech in/out), early adopters will be the ones who _have_ to have it, that is, the blind user community
14:49:36 [ht]
so this isn't about disability, it's about being clever about using sound output
14:49:40 [GudgeScrb]
GudgeScrb has joined #tp
14:49:58 [ht]
general population will need higher quality, but they'll be interested then too
14:50:18 [ht]
js decides to postpone the demo in favour of questions
14:50:27 [ht]
question time -- mikes in aisles
14:50:30 [marie]
----Q&A
14:50:38 [marie]
Steven Pemberton
14:50:41 [ht]
dr, please ask people to identify themselves
14:50:52 [ht]
sp: isn't voice markup doing too much?
14:51:09 [ht]
better to integrate _into_ other languages, instead of trying to be free standing
14:51:25 [ht]
rgimson says yes, next generation will do that
14:51:42 [maxf]
s/rgimson/scott McGlashan/
14:51:48 [ivan]
ivan has joined #tp
14:51:49 [ht]
but for getting started doing a standalone solution was a good idea
14:51:54 [marie]
Rotan Hanrahan
14:52:02 [marie]
Mobileaware
14:52:12 [Alan-Lap]
Alan-Lap has left #tp
14:52:14 [ht]
In the early days, content creation was easy
14:52:18 [ht]
at the expense of access
14:52:22 [Alan-2]
Alan-2 has joined #tp
14:52:37 [ht]
today we've got richer delivery, but creation is now much harder
14:52:39 [ht]
14:52:51 [ht]
Larson: yes, it's more complex
14:53:03 [ht]
but it's possible, and the complexity enables more exciting content
14:53:11 [Tantek]
I agree that ease of authoring is still very important.
14:53:25 [ht]
authoring tools also are emerging which manange the complexity
14:53:32 [ht]
we'll get good stuff, and also terrible stuff
14:53:55 [marie]
Gerald Edgar, Boeing Corp.
14:54:05 [ht]
sj endorses authoring tools need to take up the slack
14:54:06 [Tantek]
tools are good to have, but don't underestimate impact and necessity of hand-coders.
14:54:36 [ht]
ge: size had changes, bandwidth has changed, but what about a radical shift, e.g. our punch presses could be web clients
14:54:45 [marja]
marja has joined #tp
14:54:47 [ht]
what kind of non-standard UI does this require
14:54:57 [JacekK]
JacekK has joined #tp
14:54:58 [KevinLiu]
KevinLiu has joined #tp
14:55:08 [ht]
RGimson: for such a machine, the key aspect is a common way of expressing interaction
14:55:40 [ht]
DI WG has been looking at XForms to provide a foundation for d-i interaction
14:56:05 [Karl-lap]
Tantek: Authoring will still be possible. It doesn't make sense to compare the past with the future, in the sense that people can continue to author HTML as before, which is why it's wonderful. We have just reached a point where we address a lot more techniques and languages.
14:56:07 [ht]
rg: if filling in a field controls what the machine will do, separation of form from semantics is the key
14:56:38 [ht]
rg: still more to do in the area of coping with the event aspects of this example
14:57:04 [ht]
??: not just the user interaction, but the machine tool is physically creating the output presentation
14:57:15 [chaalsBOS]
chaalsBOS has joined #tp
14:57:16 [ht]
rg: yes, the output presentation is a bit of 3d stuff
14:57:30 [marie]
Martin Duerst, W3C
14:57:32 [ht]
rg: outputting to physical as opposed to electronic media is important to HP
14:57:44 [ht]
md: different media have different challenges
14:58:13 [ht]
md: remind everyone that a lot of different [natural] languages out there, with many people speaking only one of them
14:58:46 [ht]
md: does progress in machine translation, perhaps not enough progress, point to an additional direction for your work
14:59:17 [ht]
md: multiple documents for multiple modalities in multiple languages increases the need to be good on re-use on other dimensions
14:59:46 [ht]
md: example -- voice interaction asks again if it gets the wrong answer, similar pblm for wrong language, maybe?
15:00:08 [ndw]
ndw has joined #tp
15:00:11 [ht]
JLarson: we have to work towards using _one_ document for the different modalities, with different purposing
15:00:28 [ht]
smcg: spoken language identification isn't up to it yet
15:00:49 [ht]
multilingual recognition yes, 50 -- 60 languages available at some level
15:01:06 [ht]
smcg: usual approach is generation from content on demand, to produce required language
15:01:29 [ht]
jlarson: alternative, e.g. airline seat consoles, is non-linguistic interfaces
15:01:35 [ht]
Pat Hayes
15:01:56 [ht]
ph: Advantages of speech, are advantages of natural language in general
15:02:10 [ht]
ph: if you're going to tackle NLU, that's a _very_ hard problem
15:02:33 [ht]
dd: yes, speech and language have similar advantages
15:02:49 [ht]
dd: in our groups, we're not trying to handle the full complexity of NLU
15:03:10 [ht]
dd: focus on specifying the limited number of responses which are relevant, to constrain the recognition pblm
15:03:41 [MJDuerst]
MJDuerst has joined #tp
15:03:43 [ht]
jlarson: we cheat a lot, we do word recognition, we write clever dialogues to guide people where we want to go
15:04:15 [ht]
smcg: we're doing better now because of a shift from rule-based to statistics-based approaches
15:04:38 [ht]
moving up from the low-level SR to NLU, forcing interpretation on _any_ utterance
15:04:41 [ivan]
ivan has joined #tp
15:04:55 [ht]
ph: yes, I'm aware, but the significance of a 'not' can easily be lost
15:05:02 [marie]
Hakon Lie, Opera
15:05:02 [ht]
dr: time to wind up
15:05:10 [ht]
hl: thanks for good work
15:05:28 [ht]
a further relevant spec., "Media Queries", coming from CSS
15:05:49 [ht]
allowing stylesheet rules to be gated by the result of queries to the destination medium
15:05:54 [Tantek]
15:06:07 [marie]
TV Raman, IBM
15:06:13 [ht]
raman rao
15:06:21 [RalphS]
TV Raman
15:06:41 [ht]
rr: back to question about authoring, yes, easy content creation is crucial
15:07:00 [ht]
rr: HTML started simple, got complex, then simpler with CSS
15:07:28 [ht]
rr: going forward with multimodality and speech, we need to look for the same separation
15:07:49 [ht]
rr: content authoring separated from mm interface on top
15:07:59 [ht]
Panel agrees
15:08:14 [ht]
dr: separation is very much on VB WG's agenda
15:08:40 [ht]
Ralph Swick: 3 MB bandwidth from here, if you stick to vanilla stuff
15:08:51 [ht]
please stop doing peer-to-peer file sharing
15:09:01 [ht]
or you risk getting cut off from net altogether
15:11:11 [olivier]
s/henri/henry/
15:13:04 [janet]
janet has joined #tp
15:13:21 [Marsh]
Marsh has joined #tp
15:16:24 [GudgeScrb]
GudgeScrb has joined #tp
15:17:18 [Gudge]
Gudge has joined #tp
15:17:37 [mimasa]
mimasa has joined #tp
15:20:38 [Ben]
Ben has joined #tp
15:21:13 [micah]
micah has joined #tp
15:22:09 [amy]
amy has joined #tp
15:27:15 [Norm]
Norm has joined #tp
15:27:57 [simonSNST]
simonSNST has joined #tp
15:30:36 [DonWright]
DonWright has joined #tp
15:32:29 [DanC-AIM]
DanC-AIM has joined #tp
15:33:07 [DanC-AIM]
Hi from WearableGizmo
15:33:33 [zrendon]
zrendon has left #tp
15:33:42 [ddahl]
ddahl has joined #tp
15:33:48 [Norm]
hi DanC-AIM
15:34:09 [Arthur]
Arthur has joined #tp
15:34:48 [MJDuerst]
MJDuerst has joined #tp
15:35:02 [Zarella]
Zarella has joined #tp
15:35:14 [dbooth]
dbooth has joined #tp
15:35:36 [marie]
---- Session 3: The Evolving Web Architecture
15:35:46 [Ian]
Panel: RF, TBL, PC, NW, DO, DC, CL, SW. Moderator Steve Zilles
15:35:51 [maxf]
maxf has joined #tp
15:36:06 [RalphS]
RalphS has joined #tp
15:36:08 [DanC-AIM]
Hi.
15:36:29 [Ian]
SZ: Plan: Overview, Arch Doc, XML ID, XML Profiles, Namespace Docs
15:36:34 [DanC-AIM]
Wow... Look at all these people in the audience! Must be 500 or so!
15:36:41 [Ian]
SZ: 10-15 mins of questions at the end.
15:37:09 [Ian]
[Only TAG person missing is Tim Bray]
15:37:14 [DanC-AIM]
Wonder if the record will include photos of the audience.
15:37:29 [emmanuel]
emmanuel has joined #tp
15:38:21 [Jacek_K]
Jacek_K has joined #tp
15:38:27 [Ian]
Intro from SW:
15:38:34 [Ian]
15:38:57 [Liam]
Liam has joined #tp
15:39:11 [Lily]
Lily has joined #tp
15:39:21 [Steven]
Steven has joined #tp
15:40:00 [JosD]
A: I just spoke with Luc, who's actually a heavy believer in declarative languages, and it's quite clear that RDF could help a lot to achieve inter-app interoperability here
15:40:05 [Ian]
What we are Chartered to Do?
15:40:11 [Ian]
15:40:44 [Ian]
How We Work?
15:40:48 [Ian]
15:40:49 [yasuyuki]
yasuyuki has joined #tp
15:40:58 [JosD]
A: ""
15:41:10 [RalphS]
RalphS has joined #tp
15:41:32 [Gudge]
Gudge has joined #tp
15:42:16 [mimasa]
mimasa has joined #tp
15:42:18 [Ian]
The things we produce
15:42:22 [Ian]
15:42:34 [dougb]
dougb has joined #tp
15:42:55 [DanC-AIM]
Hmm.. poll audience as to who subscribes to www-tag?
15:43:05 [ht]
ht has joined #tp
15:43:09 [steph-tp]
steph-tp has joined #tp
15:43:09 [Ian]
Possible Misconceptions
15:43:11 [Steven]
ANd who reads it if subscribed :-)
15:43:15 [Ian]
15:44:11 [jeffsch]
jeffsch has joined #tp
15:44:35 [plh-lap]
plh-lap has joined #tp
15:44:36 [Ian]
TAG Communication
15:44:36 [Ian]
15:44:44 [Norm]
heh
15:44:51 [GudgeScrb]
GudgeScrb has joined #tp
15:45:05 [Burnett]
Burnett has joined #tp
15:45:21 [dadahl]
dadahl has joined #tp
15:45:34 [maxf`]
maxf` has joined #tp
15:45:59 [marie]
marie has joined #tp
15:46:05 [Ian]
Questions:
15:46:19 [Ian]
*
15:46:19 [Ian]
How do we increase the participation of W3C members on www-tag?
15:46:19 [Ian]
*
15:46:19 [Ian]
How could the TAG improve its interaction with W3C WGs and/or public?
15:46:19 [Ian]
*
15:46:20 [Ian]
Is there a better way for the TAG to organize its work?
15:46:22 [Ian]
(other than issues, findings and WDs)?
15:46:45 [tvraman]
tvraman has joined #tp
15:47:04 [Ian]
Architecture Document Overview
15:47:06 [Ian]
15:47:15 [mimasa0]
mimasa0 has joined #tp
15:47:33 [slh]
slh has joined #tp
15:47:41 [Ian]
DC: These are Tim Bray slides; some updates since then
15:47:49 [amy]
amy has joined #tp
15:47:50 [Ian]
DC: Editor's drafts since 15 Nov 2002
15:47:55 [chaalsBOS]
chaalsBOS has joined #tp
15:48:03 [Ian]
15 Nov 2002 draft
15:48:04 [Ian]
15:48:08 [JosD_]
JosD_ has joined #tp
15:48:08 [Don]
Don has joined #tp
15:48:18 [timbl__]
15:48:24 [Ian]
Why Webarch?
15:49:06 [Ian]
DC: What are the principles that I'd like webmasters/developers to know?
15:49:27 [dbooth]
dbooth has joined #tp
15:49:29 [howcome]
howcome has joined #tp
15:49:30 [Ian]
The Architectural Tripod
15:49:40 [MJDuerst]
MJDuerst has joined #tp
15:49:43 [mcglashan]
mcglashan has joined #tp
15:49:47 [Ian]
DC: Identification/Representation/Machine interaction
15:50:04 [RalphS]
RalphS has joined #tp
15:50:51 [Christoph]
Christoph has joined #tp
15:51:17 [Ian]
Principles re: Universal Addressing
15:51:19 [dougb62]
dougb62 has joined #tp
15:51:23 [MarkJ]
MarkJ has joined #TP
15:51:33 [Ian]
DC: Motivation for using URIs: Network Effect
15:51:41 [ndw]
ndw has joined #tp
15:52:03 [dbooth]
"The URI, the whole URI, and nothing but the URI!"
15:52:59 [Ian]
Principles re: Resource Representations
15:53:13 [Ian]
DC: Those present at this meeting are our primary audience; specs need to be consistent with these principles
15:53:52 [Ian]
Resource Representations (2)
15:54:45 [bwm]
bwm has joined #tp
15:55:08 [hugo]
hugo has joined #tp
15:55:27 [Ian]
Resource Representations (3)
15:55:44 [Ian]
Principles re: Interaction
15:56:19 [Ian]
DC: Not much writing in this section; looking at Roy Fielding's thesis, e.g.
15:56:47 [Ian]
DC polls crowd
15:57:00 [Ian]
1) Subscribers to www-tag? a fair number
15:57:08 [Ian]
2) How many have skimmed arch doc? a few
15:57:27 [Ian]
[Questions]
15:57:30 [olivier]
should have asked "who left www-tag because of s/n ratio?" maybe :)
15:57:47 [Ian]
Roger Cutler, Chevron
15:58:27 [Ian]
RC, with trepidation: (1) disingenuous that it's a misconception that the TAG is not telling WGs what to do. The TAG IS telling WGs what to do.
15:58:48 [Ian]
RC: (2) If you read the arch doc, it's a more reasoned doc than many people's interpretation.
15:58:58 [Ian]
RC: Like what some people do to the Bible.
15:59:30 [Ian]
DC: I think people share RC's concerns. I was nervous about the TAG initially. But W3C considered that it was better to try than not to try.
15:59:35 [Ian]
(this experiment)
16:00:06 [Ian]
SZ: The TAG was set up to do some things that individual WGs can't do on their own.
16:00:16 [Chris]
Chris has joined #tp
16:00:21 [howcome]
howcome has joined #tp
16:00:24 [Tantek]
Tantek has joined #tp
16:00:25 [Ian]
SZ: The other piece of the TAG charter was to prepare Rec track docs. "AC accountability"
16:01:11 [Ian]
RC: In the WSA WG, someone expressed a strong opinion that a definition in the TAG's arch doc HAD to be used in the WSA document.
16:01:50 [Ian]
DO: This is about the definition of the term "agent."
16:01:51 [Liam]
Liam has joined #tp
16:02:21 [Ian]
DO: The WSA WG was working on a definition of the term "agent"; the definitions were unrelated to one another. In their first cut, the WSA WG took the TAG's dfn, changed it, and didn't reference the source.
16:02:24 [PGrosso]
Personally, I have very little interest in rehashing this "political" stuff. I want to have time to talk about XML profiles and xml:id. Can we shorten this current rehashing.
16:02:37 [Ian]
DO: I objected, stating that the WG should feed that info back to the TAG.
16:03:01 [Ian]
DO: Furthermore, I wanted a relationship such as "A Web Services agent is a Web agent that..."
16:03:44 [Ian]
SZ: Would you consider the process to be working?
16:03:50 [ora]
ora has joined #tp
16:03:57 [Ian]
DO: Yes, I think so.
16:04:25 [timbl__]
timbl__ has joined #tp
16:04:30 [Ian]
Mike Champion (MC): What's the relationship between description and prescription in TAG activity?
16:04:50 [Ian]
MC: I see in the arch doc a lot of should's and must's. Not seeing the web as it is from the should's and must's.
16:05:09 [dougb]
dougb has joined #tp
16:05:15 [Ian]
MC: Where do you draw the line between stupid cruft that people do and Web principles?
16:05:17 [ivan]
ivan has joined #tp
16:05:43 [Ian]
CL: If people are doing something because they don't know better, we should improve outreach. If they do something because they have to, then we need to fix something.
16:05:52 [Ian]
DC: I would like to see more argument behind principle in the document.
16:06:24 [Ian]
CL: Also tension between brevity and rationale.
16:06:48 [KevinLiu]
KevinLiu has joined #tp
16:07:01 [Ian]
Noah Mendelsohn: I like what's in the arch doc, but it's not what I expected.
16:07:31 [Ian]
NW: I see instead a number of subtle insights. But perhaps also because architecture bases lie elsewhere; don't want to repeat.
16:07:33 [Ian]
s/NW
16:07:37 [Ian]
s/NW/NM
16:07:50 [Ian]
NM: A specific example: "How much of the Web do you see to be REST-ful?"
16:08:27 [JosD]
JosD has joined #tp
16:08:28 [Ian]
NM: A good arch doc should give the big picture.
16:09:02 [Ian]
TBL: Several different views of what "architecture" means. The TAG does something different from other WGs (e.g., WSA WG).
16:09:15 [Ian]
TBL: Some people expect block diagram (with successive elaboration on request).
16:09:31 [Ian]
TBL: I don't think we can put 4 corners around the Web.
16:09:57 [Ian]
TBL: When people either extend the Web or bring something into the Web, what are the things they need to look out for?
16:10:07 [Ian]
TBL: E.g., you can write as many new formats as you wish, but please use URIs
16:11:01 [Ian]
NM: Then say what TBL said in the arch doc: Here's why you shouldn't look for a block diagram in this doc.
16:11:10 [Ian]
NM: Or change the title to something like "Principles for using the Web"
16:11:12 [Chris]
'observations on web architecture'
16:11:31 [Ian]
RF: A lot of what NM said is likely to be covered in as-yet-unwritten sections of the doc
16:12:39 [Ian]
DO: The TAG has been largely user-driven in their first year, responding to questions that have been raised. The Arch Doc is a resource where those findings can be pinned.
16:12:47 [Ian]
DO: In year 2 we expect to fill in other parts of the arch doc.
16:13:23 [marie]
Ann Bassetti, AB
16:13:26 [Ian]
Ann Bassetti (AB): I'm relieved that the TAG was created; these questions used to come to the AB!
16:13:36 [Ian]
(i.e., the Advisory Board)
16:14:10 [Ian]
AB: The Advisory Board's discussion is calmer now that Paul Cotton is on the TAG ;)
16:14:23 [Ian]
AB: I strongly encourage those in this room to subscribe to www-tag.
16:14:40 [henri]
henri has joined #tp
16:14:59 [Ian]
[AB does her usual stellar job of reminding folks that "W3C is YOU!"]
16:15:21 [Ian]
Jonathan Robie (JR)
16:15:33 [Ian]
JR: The TAG is doing a good job.
16:15:38 [Chris]
more interesting than xml-dev ;-)
16:15:41 [MSM]
Data Direct Technologies
16:16:14 [Ian]
JR: I hear the TAG saying "There's a TAG finding; if it's broken; please tell us."
16:16:38 [simonSNST]
simonSNST has joined #tp
16:16:53 [Ian]
IJ: Roger Cutler's suggestions on when-to-use-get are likely to result in a new revision.
16:17:24 [Ian]
[Chris Lilley presentation]
16:17:30 [Ian]
XML ID for well formed documents
16:17:33 [Ian]
16:17:53 [Ian]
CL: Here's the technical meat.
16:18:18 [dougb62]
dougb62 has joined #tp
16:18:27 [Ian]
CL: How many people would agree with the statement "ID's arise as the result of validation?"
16:18:35 [Ian]
CL: Can they arise through other means?
16:18:39 [Burnett_]
Burnett_ has joined #tp
16:18:40 [KevinLiu]
KevinLiu has joined #tp
16:18:40 [Ian]
CL points:
16:18:43 [Ian]
* The instance is well formed
16:18:43 [Ian]
* The instance is not valid(atable)
16:18:43 [Ian]
* The partnum attribute on foo is of type ID
16:18:45 [mdean]
mdean has joined #tp
16:19:16 [Ian]
Jonathan Marsh: We get into the infoset....
16:19:31 [DonWright]
DonWright has joined #tp
16:19:31 [Ian]
CL: Largely people assume that validation => fetching DTDs => IDs
16:19:34 [Norm]
Norm has joined #tp
16:19:44 [Ian]
Evidence of brokenness
16:19:47 [Ian]
16:19:50 [Roger]
Roger has joined #tp
16:20:16 [Ian]
CL: In DOM 2, getElementByID
16:21:22 [McGlashan]
McGlashan has joined #tp
16:21:32 [Ian]
CL: CSS2, ID selectors
16:21:51 [amy]
amy has joined #tp
16:22:01 [Ian]
CL: XHTML 1.0, user agent conformance: "# When a user agent processes an XHTML document as generic XML, it shall only recognize attributes of type ID (i.e. the id attribute on most XHTML elements) as fragment identifiers."
16:22:25 [Ian]
CL: SOAP doesn't have a DTD at all (security and performance reasons).
16:22:36 [Ian]
CL: But it has ID:
16:22:55 [Ian]
NM: It's a schema ID, not a DTD ID
16:23:44 [Ian]
TAG Issue
16:23:47 [Norm]
Gudge says SOAP IDs are neither DTD IDs nor XML Schema IDs.
16:24:02 [Ian]
[CL hints that he is writing a finding as we speak on this topic.]
16:24:28 [Ian]
CL: Lots of discussion on www-tag; people made good comments and suggestions.
16:24:34 [micah]
micah has joined #tp
16:24:45 [Ian]
An example - xml:id
16:24:50 [Hixie]
Hixie has joined #tp
16:24:55 [Gudge]
my mail is at
16:25:03 [Ian]
CL: Class of solutions using a well-known name; if you want an ID, use that name.
16:25:24 [Ian]
CL: Let's put what we want in an XML namespace.
16:25:32 [Ian]
CL: E.g., xml:id (like xml:base)
16:25:43 [Ian]
CL: * Needs small change to instance document for each affected ID attribute
16:25:54 [Ian]
CL: Can't call things "partnum"; need to call it xml:id
16:25:59 [Ian]
CL: Class 2 of solutions
16:26:05 [Ian]
16:26:13 [Ian]
xml:idAttr
16:26:35 [Ian]
CL: Example: <foo xml:
16:26:43 [Ian]
CL: Says what attribute in this (sub)tree is of type ID
16:26:55 [steve]
steve has joined #tp
16:26:57 [Ian]
CL: Scope issues arise.
16:27:05 [Ian]
CL: (When mixing namespaces)
16:27:48 [PGrosso]
the first sentence of this slide is wrong.
16:27:50 [Ian]
CL: Another class of solution: require the internal DTD subset
16:27:54 [Ian]
16:28:02 [Ian]
CL: A predeclared attribute in the xml namespace, of type ID
16:28:09 [Ian]
Example:
16:28:11 [Ian]
<!DOCTYPE foo [
16:28:11 [Ian]
<!ATTLIST foo id ID #IMPLIED>
16:28:11 [Ian]
]>
16:28:11 [Ian]
<foo id="x1"/>
16:29:03 [Ian]
CL: To some extent, saying how to solve the problem when you have a DTD or Schema is sidestepping the issue.
16:29:09 [Ian]
CL Questions:
16:29:11 [Ian]
16:29:20 [mimasa]
mimasa has joined #tp
16:29:32 [mdubinko]
xml:id -- just do it
16:29:46 [Ian]
Al Gilman (AG)
16:30:26 [Ian]
AG: We have an early draft of XML Accessibility Guidelines. These are guidelines for people building XML vocabularies.
16:30:53 [Ian]
AG: It sounds like this needs a defaulting rule. If you have an attribute called "id" it should either be of type ID or, if not, in the document the type should be indicated.
16:31:24 [Ian]
AG: Let simple processors do simple things. Provide info for smart processors to do more.
16:31:34 [mdubinko]
why isn't anyone using the 2nd microphone?
16:31:44 [Ian]
CL: I would characterize that as the "sometimes steal 'id'" model.
16:32:01 [Ian]
AG: If you intend to use it for some other type, then you have to specify it inline; make the exception known.
16:32:06 [Ian]
Richard Tobin (RT)
16:32:39 [bwm]
bwm has joined #tp
16:32:42 [Ian]
RT: There's a common thread in several specs since XML - gradual removal of the internal subset and DTDs generally.
16:32:54 [Ian]
RT: There are three parts of DTDs handled in different ways:
16:32:58 [marie]
marie has joined #tp
16:33:02 [Ian]
a) Content model and typing by xml schema
16:33:10 [howcome]
howcome has joined #tp
16:33:10 [Ian]
b) External entities -> XInclude.
16:33:15 [emmanuel]
emmanuel has joined #tp
16:33:22 [Ian]
c) No current proposal for replacing character refs and internal entities.
16:33:33 [Ian]
RT: And yet! The question of IDs still arises.
16:33:52 [Ian]
RT: Why isn't the question of IDs covered by one of the three above technologies?
16:34:00 [Ian]
RT: I think IDs are more fundamental than the rest of typing.
16:34:21 [Ian]
RT: I think the xml:id or "steal id" proposals are the right ones.
16:34:28 [Ian]
RT: Take IDs out of typing.
16:34:38 [Ian]
CL: IDREF also needs to be addressed, I would think.
16:34:42 [Ian]
Steven Pemberton (SP)
16:34:57 [Ian]
SP: The TAG needs a rep from a mobile phone company - cost of downloading another resource.
16:35:16 [Ian]
SP: Good thing about small phones is they hurt less when someone you tell "download' to throws them at you.
16:35:34 [Ian]
SP: We need this solution in a Rec-track document.
16:35:36 [shayman]
shayman has joined #tp
16:35:52 [amy]
amy has joined #tp
16:36:31 [Ian]
CL: TAG expects to summarize points that have been made, but not to do the work.
16:36:40 [Ian]
SP: Please get this done quickly.
16:36:49 [Ian]
Rigo Wenning (RW)
16:37:03 [Ian]
RW: Relationship between ID and privacy.
16:37:31 [Ian]
RW: P3P WG struggled for a while over questions on type ID. Please stick to syntax of ID; if you try to get into semantics, you'll get lost.
16:37:37 [Ian]
[Norm Walsh Presentation]
16:37:47 [Ian]
XML Profiles
16:37:51 [Ian]
16:38:02 [dino]
can norm make his slide text bigger?
16:39:31 [Ian]
Overview:
16:39:49 [Chris]
Chris has joined #tp
16:39:50 [Ian]
The Problems
16:39:57 [Ian]
NW:
16:39:57 [Ian]
*
16:39:57 [Ian]
Subsets (profiles) are bad for interoperability.
16:39:59 [yasuyuki]
yasuyuki has joined #tp
16:40:13 [Ian]
What to Do?
16:40:28 [chrisf]
chrisf has joined #tp
16:40:37 [Ian]
NW:
16:40:37 [Ian]
The TAG considered the issue and decided that a reasonable compromise might be to define a single subset.
16:41:13 [Ian]
NW: And one that is backwards compatible with XML 1.1; the subset language excludes DTD declarations.
16:41:19 [Ian]
Related Issues
16:41:22 [Ian]
16:41:40 [Ian]
Discussion
16:41:42 [PStickler]
PStickler has joined #tp
16:41:56 [Ian]
*
16:41:56 [Ian]
Is the potential for a proliferation of specialized XML subsets a problem that the W3C should attempt to address?
16:41:56 [Ian]
*
16:41:56 [Ian]
Should the W3C pursue a subset of XML 1.1?
16:41:56 [Ian]
*
16:41:58 [Ian]
How would this work relate to XML unification/simplification efforts such as Tim Bray's SW draft.
16:42:14 [gerald]
gerald has joined #tp
16:42:14 [PStickler]
PStickler has joined #tp
16:42:16 [Ian]
Skunkworks spec from TB:
16:42:29 [Ian]
Henry Thompson (HT)
16:42:42 [bwm]
s/can/may/
16:42:47 [Ian]
HT: Yes, this is a problem and we should do something about it.
16:42:58 [Ian]
HT: I don't think the notion of subset is the right way to pursue it.
16:43:11 [dino]
bwm, this IRC channel is public, if that is what you ask
16:43:22 [Ian]
HT: The core value of XML is interoperability.
16:43:39 [Ian]
HT: The XML spec designed in two alternative conformance levels.
16:43:41 [Chris]
conformant processing of all content
16:43:52 [Ian]
HT: I think the correct way to approach this problem is to introduce a third conformance level.
16:43:54 [Christoph]
Christoph has joined #tp
16:43:54 [Chris]
with different results depending on validation or not
16:44:00 [Ian]
HT: You can conform and ignore the DOCTYPE statement.
16:44:04 [Chris]
third conformance level with ignored doctypes
16:44:17 [Ian]
HT: That way all xml processors will still be able to process all xml documents.
16:44:31 [Ian]
HT: E.g., SOAP Processor that conforms to this third class.
16:45:02 [Ian]
RF: The need that I see in SOAP is the ability to tell a fully compliant XML processor to disable those features.
16:45:28 [jeffm]
jeffm has joined #tp
16:45:30 [Ian]
RF: It's a different type of problem; not just stds compliance but providing developers with an option to do less.
16:45:34 [Ian]
Arnaud Le Hors (ALH)
16:45:42 [Ian]
ALH: We discussed this issue yesterday in XML Core.
16:45:44 [olindan]
olindan has joined #tp
16:45:53 [Ian]
ALH: I got an action to give a status report on our thoughts
16:46:11 [Ian]
ALH: We have been trying to not jump to any conclusions; we are going through the exercise of defining requirements first.
16:46:30 [Ian]
ALH: I think the TAG's expression of its conclusion was misleading.
16:46:43 [Ian]
ALH: I want to clarify that we are following the process of requirements, then proposing a solution.
16:46:55 [Ian]
ALH: We also invite other WGs to tell us what their requirements are.
16:47:10 [Ian]
ALH: The main incentive for doing work in this area is to avoid the proliferation of subsets.
16:47:21 [Ian]
ALH: One proposal : XML 1.0 without DOCTYPEs.
16:47:32 [Ian]
ALH: As a test, would this meet the needs of the XMLP WG?
16:48:04 [ora]
ora has left #tp
16:48:21 [Burnett_]
Burnett_ has left #tp
16:48:31 [Ian]
PC: I'm pleased that the WG where this work should be done is addressing this.
16:48:42 [Ian]
DC: The TAG also looked at requirements.
16:48:53 [PGrosso]
[adding to the Ian's last ALH line] "...if we toss doctype decl but not PIs"
16:49:09 [Ian]
Michael Sperberg-McQueen (MSMSMSMSMSMSMSMMSQMMM)
16:49:28 [Ian]
MSM: I'm glad that Core is looking at this. It's useful to learn from experience.
16:49:51 [Chris]
or a conformance level that says ignore doctypes and ignore pis - wait, ignoring pis is the current situation .....
16:50:10 [Ian]
MSM: We can learn from XML 1.0 experience. Failure of community to take up stand-alone solution.
16:50:32 [Ian]
MSM: We would not, e.g., have the problem today that implementations assume that the DOCTYPE declaration is an instruction to the processor to validate.
16:50:41 [Ian]
MSM: It's a declarative statement, not imperative.
16:50:59 [Ian]
MSM: If implementers don't provide an option to turn it off, we will always have the problem RF cites.
16:51:26 [Ian]
MSM: We already have three conformance levels in practice (1) validating (2) non-validating but DTD-aware and (3) non-validating and DTD-unaware.
16:51:57 [Ian]
MSM: Whatever we do, the solution probably needs to incorporate a replacement for one important function - binding an instance to a particular document type definition.
16:52:22 [wendy]
wendy has joined #tp
16:52:22 [Ian]
MSM: Maybe a solution is another magic attribute (a la schema location). Need to solve nesting problem.
16:52:35 [Ian]
Paul Cotton (PC)
16:52:44 [Ian]
Namespace Documents
16:53:08 [Ian]
16:53:27 [Ian]
namespaceDocument-8 : What should a "namespace document" look like?
16:53:31 [Ian]
16:53:38 [Ian]
PC: Several TAG participants working on a finding.
16:53:50 [Ian]
PC: We welcome your input.
16:54:12 [Ian]
History
16:54:42 [Ian]
PC: Disagreement about:
16:54:42 [Ian]
* "Schema languages are ideal for this"
16:55:22 [janet]
Chris might better direct his mutterings to user agent and plugin developers... ;)
16:55:28 [Tantek]
Tantek has joined #tp
16:57:18 [Ian]
PC: Tim Bray 14 theses
16:57:20 [Ian]
16:58:05 [Ian]
PC: 12. Namespace documents should be human-readable.
16:58:29 [Ian]
PC: Those publishing W3C drafts need to provide human-readable namespace docs
16:58:57 [Ian]
PC: This issue is about "what format for the namespace doc"
16:59:22 [Ian]
PC: TB conclusion conflicts with TBL's "Namespace documents should not be "schemas".
16:59:29 [Norm]
Norm has joined #tp
16:59:30 [Ian]
Namespace document alternative formats
16:59:32 [Ian]
17:01:42 [Ian]
PC: Proposed alternatives - RDDL, RDF in HTML
17:02:00 [Ian]
PC: TBL desire for namespace documents to be machine readable without intervening processing.
17:02:30 [Ian]
PC: Question, e.g., of HTML user agent ignoring RDF, and RDF agent ignoring XHTML parts.
17:02:38 [Ian]
PC: TAG held a RDDL challenge to request alternatives.
17:03:06 [Ian]
PC: NW summary of proposals
17:03:14 [Ian]
17:04:04 [Ian]
PC: TAG considered results of challenge and commissioned a draft proposal (PC notes that the commissioned work is not a done deal).
17:04:19 [Ian]
Minimal RDDL
17:04:23 [Ian]
17:04:46 [DaveO]
DaveO has joined #tp
17:04:49 [Ian]
PC: Uses a few XHTML elements (A) and attributes (nature, purpose)
17:04:59 [Ian]
PC: Did not propose to re-use rel/rev attributes of HTML
17:05:11 [Ian]
PC: Can use other useful <a> attributes like "title" and "longdesc"
17:05:27 [Ian]
PC: Most of these proposals are mutually transformable; the proposals are all very similar.
17:05:41 [Ian]
Questions:
17:05:57 [Ian]
* Q1: Can namespace documents be human readable and machine processable?
17:05:57 [Ian]
* Q2: What do you think about the Minimal RDDL proposal?
17:05:57 [Ian]
* Q3: Do you think the TAG should progress the Minimal RDDL proposal on the W3C Recommendation track?
17:06:07 [Steven]
Steven has joined #tp
17:06:21 [SueL]
SueL has left #tp
17:06:44 [ivan]
ivan has joined #tp
17:07:54 [Ian]
David Cleary (?)
17:08:09 [Ian]
DC: Is the TAG considering deprecating the use of URNs for namespace documents?
17:08:22 [Ian]
[Point made about using URNs when you don't want the URI dereferenced.]
17:08:40 [Ian]
DC: We decided that resources SHOULD have representations available.
17:09:14 [Ian]
RF: You can use a URI that has a derference mechanism already, or use a URI for which you are going to deploy the dereference mechanism yourself.
17:09:27 [Ian]
PC polls room for who has read RDDL proposal: A few people
17:09:46 [Ian]
TBL: You should not use URNs since you SHOULD make available a representation. [RF's comment followed TBL's]
17:10:13 [Ian]
TBL: One reason why this issue has been difficult to resolve - some of the people doing DAML, OWL, etc. are different from folks doing XML processing.
17:10:40 [Ian]
TBL: Sem Web processors can resolve queries by picking up machine-readable defns of terms in real time.
17:10:49 [Ian]
TBL: Getting info about terms is useful.
17:10:52 [Arthur]
Arthur has joined #tp
17:11:18 [Ian]
TBL: In RDF, all the metadata can be in the RDF doc itself.
17:11:35 [Ian]
TBL: The need for pointers to different types of resources (e.g., schemas) isn't as great in the sem web application.
17:12:15 [Ian]
TBL: It's not obvious whether the TAG should be trying to make everyone use the same thing; solution might be to fill a couple of particularly large gaps.
17:12:30 [Ian]
SZ: Have you written reqs for what a namespace doc should do?
17:12:39 [geoff_a]
geoff_a has joined #tp
17:12:40 [Ian]
DC: Yes, 6 proposals that are in discussion.
17:12:46 [Zarella]
Zarella has left #tp
17:12:50 [Ian]
SZ: Many thanks to TAG and to audience for asking questions.
17:12:56 [Ian]
[Lunch]
17:13:57 [DaveO]
I said 14 requirements, not 6 proposals, are under discussion.
17:14:48 [gerald]
gerald has joined #tp
17:26:17 [PGrosso]
PGrosso has left #tp
18:13:35 [henri]
henri has joined #tp
18:14:03 [Roger]
Roger has joined #tp
18:14:04 [henri_]
henri_ has joined #tp
18:18:20 [amy]
amy has joined #tp
18:19:53 [howcome]
howcome has joined #tp
18:27:28 [mdubinko]
mdubinko has joined #tp
18:31:23 [Alan-2]
Alan-2 has joined #tp
18:31:58 [olivier]
olivier has joined #tp
18:33:18 [Christoph]
Christoph has joined #tp
18:33:23 [Christoph]
Christoph has left #tp
18:38:11 [geoff_a]
geoff_a has joined #tp
18:38:38 [jeffm]
jeffm has joined #tp
18:39:11 [ygonno]
ygonno has joined #tp
18:39:35 [mdubinko]
mdubinko has joined #tp
18:44:20 [marja]
marja has joined #tp
18:47:42 [JacekK]
JacekK has joined #tp
18:48:47 [david_e3]
david_e3 has joined #tp
18:48:54 [shayman]
shayman has joined #tp
18:49:30 [Norm]
Norm has joined #tp
18:54:10 [Tantek]
Tantek has joined #tp
18:54:45 [reagle]
reagle has joined #tp
18:56:12 [shayman]
18:56:30 [shayman]
... sorry, no need for help
18:58:54 [ivan]
ivan has joined #tp
18:59:48 [Christoph]
Christoph has joined #tp
18:59:50 [chrisf]
chrisf has joined #tp
19:01:22 [SueL]
SueL has joined #tp
19:01:50 [RAM_]
RAM_ has joined #tp
19:01:53 [PGrosso]
PGrosso has joined #tp
19:01:57 [dougb]
dougb has joined #tp
19:02:05 [Marsh]
Marsh has joined #tp
19:02:10 [lofton]
lofton has joined #tp
19:02:20 [dbooth]
dbooth has joined #tp
19:03:19 [dino]
dino has joined #tp
19:03:25 [geoff_a]
geoff_a has joined #tp
19:03:46 [Ben]
Ben has joined #tp
19:03:48 [luu]
luu has joined #tp
19:04:15 [RylaDog]
RylaDog has joined #tp
19:04:25 [marie]
marie has joined #tp
19:04:30 [JosD]
JosD has joined #tp
19:04:32 [marja]
marja has joined #tp
19:04:35 [micah]
micah has joined #tp
19:04:49 [em-lap]
em-lap has joined #tp
19:04:51 [DanC]
DanC has joined #tp
19:05:21 [maxf]
maxf has joined #tp
19:05:23 [amy]
amy has joined #tp
19:05:28 [ivan]
Slides (in one file):
19:06:17 [marie]
Session 4: Integrating our Products
19:06:59 [mimasa]
mimasa has joined #tp
19:07:02 [MSM]
MSM has joined #tp
19:07:59 [ddahl]
ddahl has joined #tp
19:09:22 [timbl__]
timbl__ has joined #tp
19:12:05 [Liam]
Liam has joined #tp
19:13:07 [DanC]
oooh... ahhh...
19:13:29 [dougb]
dougb has joined #tp
19:15:55 [mdubinko]
Steven uses XForms to perform basic editing of XHTML
19:16:03 [mdubinko]
resulting in the reaction DanC mentioned
19:18:25 [ivan]
Novell's demonstration;
19:18:39 [ivan]
a full calculator, only using the actions of xforms plus xpath, no scripting
19:18:43 [Roger]
Pretty small font.
19:19:12 [bh]
bh has joined #tp
19:19:22 [KevinLiu]
KevinLiu has joined #tp
19:20:19 [yasuyuki]
yasuyuki has joined #tp
19:20:33 [marie]
David shows the zipcode resolver example
19:20:34 [Jonathan]
Jonathan has joined #tp
19:21:03 [marie]
then, a phone number filter
19:21:16 [ivan]
ivan has joined #tp
19:21:18 [amy]
amy has joined #tp
19:21:42 [KevinLiu]
rrsagent where am i?
19:21:52 [RRSAgent]
See
19:21:56 [ivan]
last example: travel recording example
19:21:57 [JacekK]
JacekK has joined #tp
19:22:00 [em-lap]
what browser is being used in this presentation?
19:22:21 [ivan]
em: this is not a browser, it is Novell's standalone xform processor
19:22:31 [em-lap]
thanks ivan
19:23:21 [ivan]
shows the dependencies among field appearances
19:23:39 [mdubinko]
details at
19:23:40 [ivan]
done purely using the dependencies based on the xpath values, no scripting
19:24:19 [ivan]
steve: important point that when you make a submit, you do not replace the whole document
19:24:27 [ivan]
------------- xsmiles demo
19:25:08 [mdubinko]
details for this one at
19:25:10 [ivan]
smil document with svg
19:25:21 [ivan]
(zooming in to some svg part)
19:25:40 [ivan]
it is a research browser at HUT, ongoing work
19:25:48 [ivan]
completely open source in java
19:25:55 [DanC]
hmm... TAG issue on mixed namespace docs... I wonder if these xsmiles folks have some advice.
19:26:05 [ivan]
main principles: standard compliance, mixing xml docs,
19:26:14 [Dave]
Dave has joined #tp
19:26:37 [ivan]
support xml+ns, xslt, xhtml basic + css, xsl fo
19:26:47 [ivan]
showing browser with xsl fo
19:26:51 [bwm]
bwm has joined #tp
19:27:01 [ivan]
showing also multipage documents
19:27:07 [maxf]
maxf has joined #tp
19:27:13 [Nobu]
Nobu has joined #tp
19:27:30 [ivan]
shows smil example with xforms ('phone ordering system')
19:27:38 [simonSNST]
simonSNST has joined #tp
19:28:24 [ivan]
as a research project is to implement the latest of w3c; concentrating a lot on xforms
19:28:52 [ivan]
eg a bookmark editor
19:29:21 [ivan]
(using list massaging in xforms)
19:30:08 [jeffm]
jeffm has joined #tp
19:30:36 [ivan]
shows a smil + xform demo to launch audio messages
19:31:07 [ivan]
(submission errors are reported by voice)
19:31:23 [ivan]
using unwritable four letter words....
19:31:55 [ivan]
svg+xforms demo: map of Finland with user interface in xforms
19:33:41 [ivan]
mediaqueries: weather demo : for desktop, it shows as a 3D graph (using X3D), for phone, the outlook is just a small screen with data
19:33:59 [ivan]
the gui is actually a smil document
19:34:16 [ivan]
looking at a digital tv, there is again another smil document
19:34:56 [ivan]
clap, clap, clap...
19:35:44 [ivan]
--------- mathml demo (growing up with mathml)
19:36:15 [AlanK]
AlanK has joined #tp
19:36:16 [marie]
19:36:18 [maxf]
19:36:31 [olindan]
olindan has joined #tp
19:36:47 [ivan]
19:36:49 [micah]
micah has joined #tp
19:36:58 [Liam]
Liam has joined #tp
19:37:41 [chaalsBOS]
chaalsBOS has joined #tp
19:38:11 [JacekK]
foo
19:38:31 [ndw]
ndw has joined #tp
19:38:49 [ivan]
19:40:45 [Zarella]
Zarella has joined #tp
19:42:20 [ivan]
------------- Demonstrations
19:42:47 [ivan]
shows an xhtml doc with mathml inside
19:42:58 [ivan]
two halves: presentation markup and content markup, shown on the example
19:43:19 [ivan]
rendered in mozilla; mozilla has a native presentation mathml
19:43:38 [ivan]
client side xsl transforms content markup into presentation markup
19:43:58 [ivan]
same document in ie, looks the same
19:44:10 [ivan]
uses behaviours, and uses MathPlayer from Design Science
19:44:37 [janet]
Jacek, I'll be getting the URI soon
19:44:39 [ivan]
the same stylesheet transforms to html plus the necessary extension mechanism
19:44:54 [maxf]
JacekK, no URI for demo yet, but a similar presentation is at
19:45:06 [ivan]
shows the xhtml source with a stylesheet at the top
19:45:17 [maxf]
works in Mozilla and IE !
19:45:21 [maxf]
(and Amaya)
19:45:39 [ivan]
in mathml I can use links, shows a link to maple
19:46:13 [maxf]
You can copy and paste the formulas from your browser, and send them directly to your computer algebra system
19:46:21 [ivan]
shows that in presentation markup the formula is copy-pasted to maple; the copy-paste is in mathml
19:46:46 [ivan]
it could be a mathematical service, the point is that this is not an image
19:47:05 [ivan]
19:48:06 [PStickler]
PStickler has joined #tp
19:50:02 [ivan]
19:50:46 [maxf`]
maxf` has joined #tp
19:51:46 [ivan]
19:52:28 [Bernard]
Bernard has joined #tp
19:53:43 [ivan]
clap clap clap....
19:54:05 [lofton]
lofton has joined #tp
19:54:12 [ivan]
--------- question time
19:54:36 [ivan]
daniel austin (?)
19:54:45 [simonSNST]
that is correct.
19:54:51 [asir]
asir has joined #tp
19:54:55 [mdubinko]
mdubinko has joined #tp
19:55:00 [ivan]
what strikes me is the commonalities of the problems, how do we make documents from multiple namespaces
19:55:07 [dubya1]
dubya1 has joined #tp
19:55:20 [ivan]
does w3c intend to solve this problem once and for all, rather than in pieces?
19:55:42 [PStickler]
PStickler has joined #tp
19:55:45 [ivan]
Stephen: it is probably a problem for the tag
19:55:58 [ivan]
danc: you guys know more of this that the tag does
19:56:22 [KevinLiu]
s/that/than
19:56:42 [ivan]
the tag had the issue of mixing namespaces, so there is the work done in the mean time, it is good that part of this work is done
19:56:56 [ivan]
do you see a solution from your perspective
19:57:10 [ivan]
Stephen: I believe it is doable, I do not have a clear picture
19:57:23 [ivan]
philippe hoschka: to respond to austin's question
19:57:50 [ivan]
there was a task force to look at that issue, right now there is a new interest for that, Ph le Hegaret is looking for people working on this
19:58:02 [RalphS]
Component Extension Framework
19:58:09 [maxf`]
Component Extensions (aka Plug-in API)
19:58:09 [ivan]
if you are interested in that talk to him or to me, there is a chance that we will address the issue
19:58:27 [maxf`]
there is a requirements document at
19:58:28 [marie]
steve zilles
19:58:59 [ircleuser]
ircleuser has joined #tp
19:59:01 [ivan]
ivan has joined #tp
19:59:18 [DanC]
tag issue on mixed namespace docs, which got exploded into 3 smaller bits...
19:59:19 [marie]
sz: there is now more consciousness of what the problems really are
19:59:26 [marie]
tv raman
19:59:59 [ivan]
there is also a set of fundamental xml-ish format, like what steven mentioned
20:00:04 [ivan]
the id problem
20:00:19 [ivan]
these are purely syntax, can be solved independently
20:01:09 [ivan]
chris lilley: responding to daniel's point, the problem is when people create part of document, not the whole thing
20:01:11 [DanC]
hmm... interesting point about "those bits can only go in once, so can't be used in XForms". "self-similar syntax" really does have teeth.
20:02:04 [ivan]
ann bassetti: one of the problems: why has all this wonderful work not been implemented at large
20:02:13 [ivan]
we need these things in products
20:02:24 [ivan]
it takes a loooong time to get it into products
20:02:39 [ht]
ht has joined #tp
20:02:43 [ivan]
timbl:
20:02:50 [dbooth]
dbooth has joined #tp
20:03:27 [ivan]
the same thing that applies to mathml in svg is the same when you encrypt something which you xslt and you put in something else
20:03:41 [ivan]
should we do the xinclude processing, then the encryption, does this mess up something
20:03:56 [ivan]
the model must be a top down one, elaborating top down
20:04:20 [sandro]
sandro has joined #tp
20:04:27 [ivan]
give up, too fast for me...
20:04:43 [ivan]
you asked whether w3c would solve it: you ARE w3c,
20:05:08 [RalphS]
TimBL: don't think in terms of defining a document, rather define elements
20:05:28 [ivan]
tobin, edinburgh:
20:05:46 [sandro]
TimBL: You wont be able to change the meaning of your cousin elements.
20:05:47 [ivan]
mime to documents, some people said as it was the same as multinamespace document
20:06:04 [ivan]
different people want to define their own document
20:06:33 [ivan]
the solution: the entity is not tied to the namespace, it is not a one-to-one correspondence
20:07:10 [ivan]
secondly: tim describes a top down processing model
20:07:33 [ivan]
there are lots of different things one can do with the same document, I want to separate the processing of the document from the document itself
20:08:07 [Ian]
DC: "The future is longer than the past."
20:08:10 [ivan]
danc: lots of times wgs do first something which is fast but broken
20:08:21 [ivan]
keep up the good fight, the future is longer than the past
20:08:22 [mitrepaul]
mitrepaul has joined #tp
20:08:42 [ivan]
ivan has left #tp
20:09:01 [marie]
Session 5: W3C Glossary: Schema and Tools for Interoperability and Common Understanding
20:09:08 [dj-scribe]
--------------------------------------
20:10:43 [marie]
marie has joined #tp
20:10:44 [dj-scribe]
Presenter 1: Wendy Chisholm (WC) from W3C
20:11:09 [JosD_]
JosD_ has joined #tp
20:11:31 [dj-scribe]
Panel: Lofton Henderson (QA), Hugo Haas (WS), Norm Walsh (XMLSpec), Olivier THeraux (QA)
20:15:16 [ivan]
ivan has joined #tp
20:15:26 [Steven]
Steven has joined #tp
20:15:33 [Tantek]
URL for slides?
20:15:43 [dj-scribe]
not available as yet
20:16:00 [dj-scribe]
nor these ones
20:16:10 [RRSAgent]
See
20:16:41 [dj-scribe]
Presenter 2: Lofton Henderson (QA)
20:16:46 [PStickler]
PStickler has joined #tp
20:17:13 [RSSAgent]
the question is not to know where you are but where you go
20:17:26 [olivier]
olivier has joined #tp
20:18:03 [dj-scribe]
no slides
20:18:07 [dj-scribe]
available
20:18:12 [DanC]
:-{
20:18:20 [judy]
judy has joined #tp
20:18:24 [dj-scribe]
which is a shame, given that there are complaints about projected text size
20:18:35 [DanC]
pointer to WAI glossary? QA glossary?
20:18:40 [Christoph]
Christoph has joined #tp
20:18:50 [dom]
for QA glossary
20:18:56 [amy]
amy has joined #tp
20:18:59 [olivier]
20:19:12 [olivier]
I think those are his slides
20:19:17 [judy]
can someone near lofton get him to enlarge his slides? completely unreadable from the back -- and mostly inaudible as well
20:19:27 [grussell]
grussell has joined #TP
20:19:28 [RylaDog]
20:19:43 [DanC]
hmm... some of these terms are parameterized... e.g. "Conforming Document" --
20:19:43 [chaalsBOS]
... is WAI Glossary
20:20:19 [steph-tp]
steph-tp has joined #tp
20:20:50 [bwm]
bwm has joined #tp
20:21:36 [Burnett_]
Burnett_ has joined #tp
20:21:38 [DanC]
hmm... I much prefer glossaries that refer to the discussion of the term in context; i.e. more of an index.
20:21:46 [maxf]
maxf has joined #tp
20:22:17 [dj-scribe]
Presenter 3: Hugo Hass (Web Services)
20:22:59 [dj-scribe]
s/Hass/Haas/
20:23:22 [tvraman]
tvraman has joined #tp
20:23:30 [dj-scribe]
ie. aas not ass :)
20:24:37 [chrisf]
chrisf has joined #tp
20:25:20 [halindrom]
halindrom has joined #tp
20:25:21 [JosD_]
DanC, any example?
20:25:27 [henri]
slides are
20:26:45 [henri]
html version
20:26:51 [dj-scribe]
Presenter 4: Norm Walsh (XML Spec)
20:27:21 [tvraman]
tvraman has left #tp
20:27:43 [Stuart]
Stuart has joined #tp
20:28:38 [libby]
libby has joined #tp
20:29:15 [Z]
Z has joined #tp
20:29:48 [Z]
Z has left #tp
20:30:02 [dj-scribe]
Presenter 5: Olivier Theraux (QA Glossary Tools)
20:30:26 [amy]
amy has joined #tp
20:30:46 [ndw]
Norm's slides are at
20:31:41 [henri]
olivier's slides :
20:31:59 [henri]
html version :
20:36:12 [simonSNST]
simonSNST has joined #tp
20:38:53 [McGlashan]
McGlashan has joined #tp
20:39:09 [ndwalsh]
ndwalsh has joined #tp
20:40:00 [Stuart]
Stuart has left #tp
20:40:49 [dj-scribe]
-------- Q&A --------
20:40:56 [DanC]
stay tuned... to what/where?
20:41:22 [dj-scribe]
Charles McN (W3C)
20:41:30 [dom]
the glossary project page
20:41:34 [dom]
linked from the slides
20:41:35 [dj-scribe]
oops representing Sidar at the mic.
20:42:04 [dj-scribe]
CMN: Are you looking at going over existing specs? That's what we are doing for translations of terms.
20:42:11 [DanC]
ah... glossary project home.
20:42:21 [reagle]
reagle has left #tp
20:42:42 [DanC]
seems to be in weblog form... is an RSS feed available?
20:43:01 [dj-scribe]
OT: The ultimate goal is that we provide tools. Not just a single glossary.
20:43:11 [dj-scribe]
Roger Cutler (Chev-Tex)
20:43:25 [dj-scribe]
RC: I've been helping HH on the WS glossary.
20:43:43 [dj-scribe]
RC: I disagree that a glossary arch with levels will scale.
20:44:01 [dj-scribe]
RC: You keep using "agent" as an example - that is a case that will work.
20:44:03 [grussell]
grussell has left #TP
20:44:17 [r12aBOS]
r12aBOS has joined #tp
20:44:19 [dj-scribe]
RC: many other terms won't work.
20:44:36 [dj-scribe]
RC: If you use terms with a different mindset you are screwed.
20:44:43 [DanC]
more glossary fodder:
20:44:49 [dj-scribe]
WC: We are not limiting the number of definitions.
20:44:55 [dj-scribe]
Arnaud le Hors (IBM)
20:45:07 [dj-scribe]
ALH: I'm puzzled how the W3C staff address issues.
20:45:24 [dj-scribe]
ALH: Some of these things are in XML form, clearly marked.
20:45:42 [dj-scribe]
ALH: Don't ignore the XML that is already there.
20:45:56 [dj-scribe]
OT: I focussed on HTML because it is hard to extract.
20:46:04 [dj-scribe]
OT: Some people don't want to use XML Spec.
20:46:16 [dj-scribe]
OT: tool must be flexible
20:46:34 [dj-scribe]
ALH: You should lobby more people to use XML Spec
20:46:59 [dj-scribe]
Martin, a disembodied voice, says some terms in XML spec are not marked up.
20:47:12 [dj-scribe]
Norm Walsh: people can use bad markup anywhere
20:47:20 [dj-scribe]
Tantek Celik (Microsoft)
20:47:59 [dj-scribe]
TC: <term> and <termdef> are new, why aren't we using <dl> <dd> <dt>?
20:48:19 [dj-scribe]
NW: XML Spec has more precise semantics.
20:48:22 [RylaDog]
and <dfn>
20:48:22 [PGrosso]
I think the english word "term" predates HTML.
20:48:32 [mitrepaul]
rrsagent, where am I?
20:48:32 [RRSAgent]
See
20:48:33 [DanC]
(go TC! that was my question to spec-prod years ago. or to SGML-ERB or something. yes, seems gratuitous to me too.)
20:48:59 [dj-scribe]
OT: I read HTML 4.01 - it's less than clear. It isn't pushy about this. Freedom in use of <dl>
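[Editorial aside: the extraction problem Olivier describes can be sketched. The following is my own stdlib-only illustration, not any actual W3C glossary tool, of harvesting term/definition pairs from `<dl>`/`<dt>`/`<dd>` markup; the loose freedom HTML 4.01 allows in `<dl>` usage is exactly what makes this fragile in practice.]

```python
# Sketch (not any W3C tool): harvest term/definition pairs from an
# HTML glossary that uses <dl>/<dt>/<dd>, using only the stdlib.
from html.parser import HTMLParser

class GlossaryHarvester(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_dt = self.in_dd = False
        self.current_term = None
        self.terms = {}          # term -> definition text

    def handle_starttag(self, tag, attrs):
        if tag == "dt":
            self.in_dt = True
            self.current_term = ""
        elif tag == "dd" and self.current_term is not None:
            self.in_dd = True
            self.terms.setdefault(self.current_term, "")

    def handle_endtag(self, tag):
        if tag == "dt":
            self.in_dt = False
        elif tag == "dd":
            self.in_dd = False

    def handle_data(self, data):
        if self.in_dt:
            self.current_term += data.strip()
        elif self.in_dd and self.current_term:
            self.terms[self.current_term] += data.strip()

html_doc = """<dl>
<dt><dfn>resource</dfn></dt><dd>anything that has identity</dd>
<dt><dfn>representation</dfn></dt><dd>data describing a resource state</dd>
</dl>"""

h = GlossaryHarvester()
h.feed(html_doc)
print(h.terms["resource"])   # -> anything that has identity
```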
20:49:15 [dj-scribe]
---------------------------------------------------
20:49:21 [PGrosso]
I thought XML was extensible--why can't someone develop a DTD that uses terms in their own language like English instead of html-eze!
20:49:21 [dj-scribe]
END OF SESSION
20:51:19 [libby]
so, as a little experiment, would anyone like to try this little rdf toy?
20:56:07 [McGlashan]
McGlashan has left #tp
20:56:53 [ht]
ht has joined #tp
21:01:21 [Ben]
Ben has joined #tp
21:01:27 [Christoph]
join #svg
21:06:29 [simon-scr]
boohoo hoo
21:07:01 [Zarella]
*so do we have to do what simon says?
21:07:16 [simon-scr]
that is correct.
21:07:27 [Zarella]
boohoo hoo
21:09:07 [simon-scr]
-------------------------------
21:09:11 [simon-scr]
Session 6: One Web or Four?
21:09:16 [simon-scr]
-------------------------------
21:11:46 [amylap]
amylap has joined #tp
21:14:18 [shayman]
shayman has joined #tp
21:15:49 [simon-scr]
simon-scr has joined #tp
21:16:24 [Gudge]
Gudge has joined #tp
21:17:06 [ivan]
ivan has joined #tp
21:17:09 [geoff_a]
geoff_a has joined #tp
21:17:14 [simon-scr]
stuart williams introduces.
21:17:32 [mdubinko]
mdubinko has joined #tp
21:17:39 [simon-scr]
web is growing, moving in many different directions. how to maintain coherence.
21:17:50 [simon-scr]
outside w3c, activities on the grid.
21:17:58 [simon-scr]
each speaker will speak about 10 minutes.
21:18:12 [PStickler]
PStickler has joined #tp
21:18:18 [bwm]
bwm has joined #tp
21:18:45 [henry_edi]
henry_edi has joined #tp
21:18:48 [simon-scr]
Panelists (cont'd): Brian McBride, Co-Chair, RDF Core WG; on the Semantic Web
21:18:53 [maxf]
maxf has joined #tp
21:19:00 [simon-scr]
Moderator: Stuart Williams, TAG
21:19:18 [simon-scr]
slides:
21:19:24 [r12aBOS]
r12aBOS has joined #tp
21:19:37 [mimasa]
mimasa has joined #tp
21:19:38 [simon-scr]
steven's slides:
21:20:00 [simon-scr]
noah's slides:
21:20:13 [simon-scr]
brian's slides:
21:20:14 [PStickler]
Question for panel: Is a description of a resource a representation of that resource?
21:20:18 [simon-scr]
----------
21:20:27 [simon-scr]
Steven - One web or four
21:20:29 [simon-scr]
----------
21:20:52 [simon-scr]
current slide set:
21:21:11 [simon-scr]
concern today with human web, not the humans here, though.
21:21:46 [simon-scr]
reason HTML is successful is it is easy to use.
21:21:54 [simon-scr]
grandma can make a web site.
21:22:01 [Ben]
Ben has joined #tp
21:22:09 [simon-scr]
1991, a grand ol' age.
21:22:21 [simon-scr]
now power packed pcs, etc.
21:22:44 [simon-scr]
since then, xml has evolved.
21:23:01 [Tantek]
Tantek has joined #tp
21:23:14 [simon-scr]
an html doc 2001 is bigger, but computer bigger, so no sweat.
21:24:07 [simon-scr]
by 2010, all using euro. woohoo.
21:24:11 [mdubinko]
no sweat for the computer, but what about the user?
21:24:48 [simon-scr]
why not leave to authoring tools?
21:25:13 [simon-scr]
vi rules!
21:25:26 [amy]
amy has joined #tp
21:25:51 [DanC]
if you count heads, I'm pretty sure, to several orders of magnitude, all authors use frontpage
21:26:12 [simon-scr]
conclusion: computers are getting more powerful, people aren't.
21:26:32 [steve]
steve has joined #tp
21:26:34 [KevinLiu]
KevinLiu has joined #tp
21:26:36 [simon-scr]
roy fielding, speaking on the 'Boring' web.
21:26:44 [simon-scr]
----------
21:26:49 [simon-scr]
Roy Fielding
21:26:51 [simon-scr]
----------
21:26:53 [mitrepaul]
mitrepaul has joined #tp
21:26:56 [RRSAgent]
See
21:27:23 [dbooth]
dbooth has joined #tp
21:27:33 [shayman]
shayman has joined #tp
21:27:33 [MarkJ]
MarkJ has joined #TP
21:27:36 [simon-scr]
is there a url for these slides?
21:27:39 [mnot]
mnot has joined #tp
21:28:10 [simon-scr]
high-level requirements.
21:28:23 [simon-scr]
low entry-barrier.
21:28:32 [DanC]
"The only time you hear about HTTP is when something goes wrong." -- RoyF
21:28:33 [slh]
slh has joined #tp
21:28:37 [ScottMcG]
ScottMcG has joined #tp
21:28:40 [simon-scr]
multiple organizational boundaries.
21:28:53 [ddahl]
ddahl has joined #tp
21:28:54 [simon-scr]
distributed hypermedia system.
21:29:05 [simon-scr]
MSM, thanks.
21:29:20 [simon-scr]
we need to plan for gradual fragmented change.
21:29:38 [simon-scr]
it doesn't change that much, like html.
21:30:02 [simon-scr]
distributed hypermedia system.
21:30:13 [simon-scr]
good for large data xfers.
21:30:19 [olivier]
olivier has joined #tp
21:30:25 [simon-scr]
sensitive to user-perceived latency
21:30:44 [Roland]
Roland has joined #tp
21:30:46 [simon-scr]
capable of disconnected ... (missed last word).
21:30:53 [Ian]
operation
21:30:59 [simon-scr]
HTTP
21:31:01 [simon-scr]
Ian, thanks.
21:31:17 [simon-scr]
REST Architectural style
21:31:49 [Steven]
Steven has joined #tp
21:31:53 [simon-scr]
an attempt to come up with a rationale for showing/telling folks how their software sucks.
21:32:11 [simon-scr]
what is it you are trying to achieve with your product on the web?
21:32:44 [simon-scr]
REST Architectural style is the basis for how Roy designed HTTP 1.1 extensions, and for defense of them.
21:33:07 [simon-scr]
REST Style Derivation Graph (no ref).
21:33:16 [janet]
simon, will have uris in a sec
21:33:21 [simon-scr]
client-server paradigm.
21:33:28 [simon-scr]
janet, danke.
21:33:29 [Steven]
Steven has joined #tp
21:33:51 [Ian]
Representational State Transfer (REST)
21:33:55 [Ian]
21:34:04 [RRSAgent]
See
21:34:06 [simon-scr]
Ian, thanks bud.
21:34:26 [simon-scr]
REST Process View graph.
21:34:37 [simon-scr]
REST Uniform interface.
21:34:52 [simon-scr]
Pictures are not sufficient.
21:34:57 [timbl__]
timbl__ has joined #tp
21:35:04 [simon-scr]
five primary interface constraints.
21:35:09 [Norm]
Norm has joined #tp
21:35:23 [simon-scr]
hard to keep without refs.
21:35:43 [simon-scr]
noted on slides.
21:35:48 [simon-scr]
panel questions.
21:35:57 [simon-scr]
how many webs should there be?
21:36:25 [ted]
ted has joined #tp
21:36:33 [simon-scr]
----------
21:36:47 [simon-scr]
noah mendelsohn
21:36:48 [simon-scr]
----------
21:37:15 [simon-scr]
21:37:25 [simon-scr]
noah has a sense of there being more than one web.
21:37:30 [simon-scr]
this is how he thinks about it.
21:37:36 [Olin_Dan]
Olin_Dan has joined #tp
21:37:43 [simon-scr]
at core we have a web of names.
21:37:48 [henri]
henri has joined #tp
21:37:49 [simon-scr]
named by a URI.
21:37:55 [frankmcca]
frankmcca has joined #tp
21:38:02 [simon-scr]
in this, there is a web of widely deployed schemes.
21:38:15 [simon-scr]
de facto, it is a web of things you can manipulate.
21:38:22 [simon-scr]
roy has given us a model.
21:38:34 [simon-scr]
RESTful web.
21:38:54 [simon-scr]
http/https are protocols of REST.
21:39:03 [simon-scr]
core protocols of web as deployed.
21:39:37 [simon-scr]
also web of widely deployed media types.
21:39:43 [DanC]
cool picture. Ian, this would be a great "story" for the arch doc intro, no/
21:39:45 [ndw]
ndw has joined #tp
21:39:45 [DanC]
no?
21:40:22 [Ian]
I'll check it out with Noah.
21:41:09 [simon-scr]
technology comparison.
21:41:18 [simon-scr]
on browseable web, things are uris.
21:41:23 [dougb]
dougb has joined #tp
21:41:29 [simon-scr]
folks don't use uris aggressively enough.
21:41:43 [simon-scr]
for example, you don't see uris for every stock quote.
21:42:23 [simon-scr]
web services need to run over more than http.
21:42:42 [simon-scr]
furthermore, there is a history of SOAP having misused HTTP.
21:43:24 [simon-scr]
this slide -
21:43:52 [marja]
marja has joined #tp
21:44:03 [RalphS]
Noah: "the option is now there [in SOAP 1.2] to do things correctly"
21:44:18 [simon-scr]
noah claims, you cannot rely on people to know if they got the right thing.
21:45:00 [simon-scr]
slide -
21:45:12 [Tantek]
amazing. is there an HTML version of this presentation where the text is in markup instead of being trapped in a .gif?
21:45:28 [JosD_]
JosD_ has joined #tp
21:45:31 [simon-scr]
not that i am aware of :^(
21:46:08 [janet]
hi, tantek
21:46:28 [janet]
slides currently are in this format - generated from Freehand
21:46:58 [bwm]
slides -
21:47:03 [simon-scr]
Conclusions covered on
21:47:14 [simon-scr]
----------
21:47:14 [janet]
Slides for Roy's presentation now available:
21:47:24 [simon-scr]
----------
21:47:27 [simon-scr]
Brian McBride, Co-Chair, RDF Core WG; on the Semantic Web
21:47:29 [simon-scr]
----------
21:47:43 [simon-scr]
21:47:52 [geoff_a]
/msg frankmcca so whaddya think of Noah's talk?
21:48:05 [timbl__]
specifcially that slide was mostly w3cplenaryhowmanywebs16.gif
21:48:11 [simon-scr]
how many webs? ONE - but it is multifaceted and its architecture has structure
21:48:24 [simon-scr]
What does sWeb need from web architecture?
21:48:32 [simon-scr]
naming...
21:48:45 [simon-scr]
retrievability
21:48:50 [simon-scr]
precision
21:48:56 [AlanK]
timbl, please see private message
21:48:56 [simon-scr]
structure
21:49:28 [simon-scr]
Structure the architecture -
21:50:23 [simon-scr]
Naming and Retrievability
21:50:29 [simon-scr]
21:50:39 [simon-scr]
Naming - sWeb needs to name things other than web resources
21:50:47 [simon-scr]
with some precision - e.g. a car and a picture of a car are not the same thing
21:51:03 [simon-scr]
Retrievability - sWeb needs to be able to retrieve information associated with a name
21:51:09 [simon-scr]
e.g. RDF Vocabulary definitions, OWL ontologies
21:51:36 [simon-scr]
For example...
21:51:42 [simon-scr]
21:52:46 [DanC]
hmm... no # in the URI of the 'non document' issues list.
21:52:50 [simon-scr]
discussion of specific example.
21:53:32 [DanC]
ah... there is a # in the 2nd option on this slide.
21:53:33 [simon-scr]
hope is to have last call comments generated by software agents.
21:53:49 [Christoph]
Christoph has joined #tp
21:53:52 [timbl__]
In fact the web server doesn't say "that is not a document, look at Overview", it just returns the contents of Overview, n'est-ce pas?
21:53:55 [chaalsBOS]
s/hope/vision of hell/
21:53:59 [DanC]
"One of my versions of hell is software generating last call comments." -- bwm
21:54:11 [dom]
timbl__, it doesn't, but that's a bug in Apache
21:54:15 [simon-scr]
got it. thanks for the correction.
21:54:21 [dom]
it should set the content-location: header to Overview.html
21:54:41 [simon-scr]
Precision -
21:54:47 [dom]
(which is not exactly saying "this is not a document" either, but closer)
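[Editorial aside: the Content-Location fix dom describes can be sketched. This is a hypothetical Python illustration of the behaviour, not Apache's actual implementation: when a directory URI is answered with the contents of its Overview.html, the response should name the concrete representation served.]

```python
# Hypothetical sketch: a server answering a GET for a directory URI with
# the contents of Overview.html should add a Content-Location header
# identifying the concrete representation it actually returned.

def response_headers_for(path):
    """Headers for a GET of `path`, assuming directory URIs are served
    from an Overview.html inside them."""
    headers = {"Content-Type": "text/html"}
    if path.endswith("/"):
        # We are returning *a representation* (Overview.html), not
        # "the directory itself" -- say so explicitly:
        headers["Content-Location"] = path + "Overview.html"
    return headers

print(response_headers_for("/2002/03/")["Content-Location"])
# -> /2002/03/Overview.html
```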
21:54:59 [simon-scr]
sWeb is building formal models - needs firm foundations to build on
21:55:10 [simon-scr]
[[A resource can be anything that has identity.]]
21:55:13 [MJDuerst]
MJDuerst has joined #tp
21:55:24 [simon-scr]
RFC 2396
21:55:34 [simon-scr]
[[More precisely, a resource R is a temporally varying membership function M R (t), which for time t maps to a set of entities, or values, which are equivalent.]]
21:55:40 [simon-scr]
from roy's thesis.
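[Editorial aside: the thesis definition just quoted can be rendered directly in code. A minimal sketch, with names of my own choosing rather than Fielding's: a resource as a temporally varying membership function M_R(t) mapping a time to the set of equivalent values current at that time.]

```python
# Sketch: a resource keeps a constant identity while the set of
# equivalent representations it maps to varies over time.
import datetime as dt

class Resource:
    def __init__(self, membership):
        # membership: callable mapping a time to a set of equivalent values
        self._m = membership

    def representations(self, t):
        return self._m(t)

# "today's weather report" maps each day to that day's report;
# the identity is stable even though the values change.
reports = {
    dt.date(2003, 3, 5): {"rainy, 8C"},
    dt.date(2003, 3, 6): {"sunny, 12C"},
}
weather_today = Resource(lambda t: reports.get(t, set()))

print(weather_today.representations(dt.date(2003, 3, 5)))  # -> {'rainy, 8C'}
```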
21:56:07 [encre]
encre has joined #tp
21:56:28 [encre]
encre has left #tp
21:56:59 [simon-scr]
----------
21:57:04 [simon-scr]
stuart williams
21:57:07 [simon-scr]
----------
21:57:13 [simon-scr]
OPEN FOR QUESTIONS
21:57:25 [simon-scr]
jrobie...
21:57:41 [simon-scr]
strike...
21:57:48 [simon-scr]
roy comments.
21:57:54 [simon-scr]
jrobie....
21:58:20 [luu]
luu has joined #tp
21:58:24 [simon-scr]
there are some things that are obviously core, or you don't have a web. you can argue how many you can squeeze in.
21:58:43 [simon-scr]
each of you has shown what a web is, while leaving something out that was essential to someone else.
21:59:01 [Gottfried]
Gottfried has joined #tp
21:59:09 [simon-scr]
it is possible for many of us to spend a good part of our careers, working on something that someone else has solved using a different method.
21:59:27 [simon-scr]
(all jrobie comments above, as noted)
21:59:55 [rigo]
rigo has joined #tp
22:00:07 [simon-scr]
i think instead of arguing who gets to be in the middle, we have to assume different folks use different tools.
22:00:18 [mgylling]
mgylling has joined #tp
22:00:28 [simon-scr]
patrick (?) -
22:00:43 [simon-scr]
is a description of a resource a valid description of the resource?
22:00:55 [simon-scr]
if it is, (can someone fill in?)
22:01:01 [ivan]
ivan has joined #tp
22:01:23 [dbooth]
then the resource is a description of a resource
22:02:00 [RalphS]
RoyF: " if you have a description of a resource that is the resource, then the resource is a description of a resource"
22:02:32 [simon-scr]
RF: there are huge discussions on www-tag.
22:02:43 [janet]
sw folks, please help
22:02:53 [rigo]
:)
22:02:58 [simon-scr]
RF: folks assume there is a framework that is consistent.
22:03:03 [RalphS]
RoyF: if you have a picture of a car then of course people would consider that different from the car
22:03:24 [timbl__]
AV, please cut Roy's mike ;-)
22:03:25 [simon-scr]
jack (?)
22:03:25 [AlanK]
Technical description of discussion: Resources are cars.
22:03:39 [ndw]
timbl__: lol
22:04:01 [mdubinko]
as are descriptions of discussions :-)
22:04:12 [simon-scr]
are web services part of the www, or is the www part of the ws world, or an application thereof?
22:04:18 [mdubinko]
mdubinko has joined #tp
22:04:33 [simon-scr]
how about implementing http over soap and calling it shttp?
22:04:40 [simon-scr]
(above was jack)
22:05:01 [janet]
Jacek Kopecski (sp), Systinet
22:05:04 [RylaDog]
xhttp
22:05:05 [simon-scr]
NM: we engineered soap to be more RESTful...
22:05:11 [simon-scr]
janet, thanks.
22:05:15 [janet]
sure
22:06:02 [simon-scr]
NM: the fact that our envelopes use uris...
22:06:15 [simon-scr]
NM: gives us better use (?) of them.
22:07:29 [simon-scr]
RF: most of his criticisms over the years were of SOAP 1.1
22:08:03 [simon-scr]
pat hayes...
22:08:09 [janet]
Pat Hayes, U of west FLa
22:08:15 [janet]
(Web Ont WG)
22:08:19 [simon-scr]
PH: there are no names on the web at all. just links.
22:08:46 [simon-scr]
there is a huge hole in this story.
22:08:56 [simon-scr]
PH: there are no protocols for naming things.
22:08:59 [janet]
sorry, jacek, for mangling your last name
22:09:03 [marie]
marie has joined #tp
22:09:08 [dadahl]
dadahl has joined #tp
22:09:12 [simon-scr]
PH: we have to invent ad hoc ways of naming things.
22:09:18 [simon-scr]
PH: mass delusion.
22:09:24 [simon-scr]
(illusion?)
22:09:40 [ndw]
delusion he said
22:09:48 [simon-scr]
BSM: i think you overstate your case.
22:09:52 [RylaDog]
I heard illusion
22:10:01 [mimasa]
mimasa has joined #tp
22:10:10 [simon-scr]
the names are not an illusion, but the binding between the object and the name (something).
22:11:25 [timbl__]
Pat alluded to an illusion but was deluded.
22:11:29 [amy]
amy has joined #tp
22:11:36 [Olin_Dan]
Olin_Dan has left #tp
22:11:38 [simon-scr]
i missed the name of this gentleman.
22:11:42 [wendy]
wendy has joined #tp
22:11:47 [libby]
jeremy carroll
22:11:51 [janet]
HP
22:11:57 [ndw]
Jeremy Carrol from HP
22:12:00 [simon-scr]
libby, thanks.
22:12:17 [simon-scr]
NM: in principle it is nice to have a truly uniform naming system.
22:12:26 [ht]
ht has joined #tp
22:12:55 [JosD_]
JosD_ has joined #tp
22:12:55 [simon-scr]
NM: then you get to the fact of the engineering matter.
22:13:05 [Nobu]
Nobu has joined #tp
22:15:16 [simon-scr]
RF: things are coming together vs. falling apart.
22:15:25 [simon-scr]
David Orchard, BEA.
22:15:36 [Steven]
For the record: I have nothing against URIs, nor against typing them
22:16:15 [simon-scr]
DO: what different constraints than REST are being applied...
22:16:19 [Norm]
SOAs?
22:16:29 [ht]
Service-Oriented Architectures
22:16:39 [Norm]
ty
22:16:48 [simon-scr]
DO: should W3C, in particular the TAG, be in the business of documenting one set of constraints?
22:17:10 [simon-scr]
DO: or describing overlap?
22:17:23 [simon-scr]
NM: i would appreciate from the TAG some clarity.
22:17:52 [simon-scr]
NM: i do not believe everything has to be RESTful. i would like to see the TAG weigh in on this.
22:18:28 [simon-scr]
NM: looking at scenarios of streaming video, etc., REST is good.
22:18:54 [simon-scr]
phew.
22:20:02 [simon-scr]
RF: the goal of REST is not to tell folks the extent of the web.
22:20:16 [DanC]
p2p is tricky... it's clearly too important to ignore, but I haven't found time to play with it enough to answer the questions Noah just riffed about.
22:20:21 [simon-scr]
(what was the tail end of that comment?)
22:20:58 [simon-scr]
RF: if there are aspects of REST that don't fulfill web services, that's ok.
22:21:11 [Norm]
Noah said he'd like to have URIs for the P2P resources, but they involve an engineering architecture that's not obviously like http. Among other things.
22:21:13 [DanC]
NM: how about p2p and isochronous stuff? (voice/video)? is that RESTful? should it use HTTP?
22:22:04 [simon-scr]
RF: but you have to go back and decide (something).
22:22:22 [AK-Scribe]
Last session: Ian Jacobs chair
22:22:37 [AK-Scribe]
IJ: Let's do breathing exercises.
22:22:48 [AK-Scribe]
Is the Integer 1 a Resource?
22:23:11 [AK-Scribe]
Panelists: TimBL, SteveB
22:23:17 [RylaDog]
not true
22:23:23 [mdubinko]
22:23:56 [AK-Scribe]
IJ proposes various topics
22:24:23 [Norm]
No, mdubinko, that's 404 not 1 :-)
22:24:26 [AK-Scribe]
Henry Thompson: About 15 people had a BOF on Linking.
22:24:47 [Steven]
20 more like
22:24:58 [AK-Scribe]
... The Linking WG expired at end of 2002. There are structural and technical issues on moving forward about talking about links.
22:24:58 [dbooth]
Tech Plenary feedback form:
22:25:25 [AK-Scribe]
The BOF agreed about some procedural issues. This will be sent to some list soon.
22:25:47 [DanC]
meanwhile, ht@w3.org will (likely) reach henry (re linking).
22:25:53 [PStickler]
val:(
)1 (c.f.
)
22:26:04 [AK-Scribe]
The AB is dealing with Normative Errata. I think they've (finally) done a good job.
22:26:20 [AK-Scribe]
The new version of XML schema is trying out this new process.
22:26:35 [AK-Scribe]
Frank McCabe
22:26:49 [AK-Scribe]
BOF on Semantic Web Services. About 20 people.
22:26:59 [amy]
amy has joined #tp
22:27:00 [PStickler]
"1"^^<
> denotes the integer 1, and RDF says it's a resource ;-)
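[Editorial aside: PStickler's point can be sketched in code. A minimal illustration of my own, not taken from any spec, of how an RDF typed literal (a lexical form paired with a datatype URI) maps to a native value, so that "1" typed as an XML Schema integer denotes the integer 1.]

```python
# Sketch: mapping an RDF typed literal (lexical form + datatype URI)
# to a native value. The datatype table here is illustrative only.
XSD = "http://www.w3.org/2001/XMLSchema#"

DATATYPE_MAP = {
    XSD + "integer": int,
    XSD + "boolean": lambda s: s == "true",
    XSD + "string": str,
}

def typed_literal_value(lexical_form, datatype_uri):
    # Unknown datatypes fall back to keeping the lexical form as-is.
    convert = DATATYPE_MAP.get(datatype_uri, str)
    return convert(lexical_form)

one = typed_literal_value("1", XSD + "integer")
print(one, type(one).__name__)  # -> 1 int
```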
22:27:07 [DanC]
hmm... PStickler, I'd expect to find compare/contrast with data: in draft-pstickler-val. I don't see any.
22:27:16 [AK-Scribe]
Topics: What is meant by SW Services, and what to do about them.
22:27:27 [AK-Scribe]
Thinking of starting an IG
22:27:54 [AK-Scribe]
There will be a mailing list [scribe didn't get the name]
22:28:16 [AK-Scribe]
The semantic web also requires services, such as ontology services.
22:28:26 [PStickler]
val: is similar to data: but more specialized
22:28:42 [DanC]
[list name www-ws, perhaps?
]
22:28:47 [AK-Scribe]
There is a class of services which have publicly understood semantics
22:28:59 [AK-Scribe]
Encourage people to participate
22:29:23 [AK-Scribe]
Martin Duerst: report on BOF on ??? (couldn't understand)
22:29:28 [DanC]
who was the guy who spoke about semantic web services?
22:29:39 [DanC]
??? = glyph variants, I think.
22:29:42 [MSM]
Gaiji
22:29:45 [AK-Scribe]
Frank McCabe
22:29:57 [MSM]
Gaiji not the same as glyph variants
22:30:13 [RylaDog]
Kanji?
22:30:17 [marie]
w3c-char-glyph
22:30:20 [AK-Scribe]
Martin: w3c-char-glyph
22:30:24 [DanC]
22:30:32 [timbl__]
LX ?
22:30:50 [AK-Scribe]
David Marston, IBM, XSLT and Xpath conformance testing
22:30:54 [MSM]
Kanji which are new / unconventional / new variants of existing glyphs (so glyph variants aren't irrelevant)
22:30:56 [AK-Scribe]
in coord with OASIS
22:31:07 [AK-Scribe]
we're ready to show the test organization to the world
22:31:21 [AK-Scribe]
WIll be a good example of the QAWG test guideines
22:31:31 [AK-Scribe]
s/guideines/guidelines/
22:31:46 [MJDuerst]
gaiji subsumes glyph variants and non-encoded characters
22:32:00 [AK-Scribe]
Paul Cotton: As a TAG member, I notice only a few people on www-tag list.
22:32:27 [AK-Scribe]
When mixed namespace docs were being discussed, it was pointed out as an important problem.
22:32:32 [MJDuerst]
but you could say that gaiji is a particular solution for these issues
22:32:41 [AK-Scribe]
How are we going to get those people engaged (if they aren't on the list)
22:33:11 [AK-Scribe]
I'd like to hear suggestions from the audience how the TAG is supposed to learn their views on the issues.
22:33:38 [DanC]
list per issue... ping has a tool that implemented that cheaply... nosylists or some such...
22:33:41 [AK-Scribe]
Should the TAG have some other mechanism to garner input?
22:33:57 [PGrosso]
The TAG should put a special filter on www-tag that doesn't accept any more than 2 postings per day by the same person.
22:34:05 [AK-Scribe]
TimBL: The guy who asked this question caught me at coffee break.
22:34:19 [MJDuerst]
PGrosso++!
22:34:32 [AK-Scribe]
There are some people who expected it to be solved in another context (didn't get).
22:34:33 [MJDuerst]
(except for the TAG members, of course)
22:34:42 [RalphS]
PGrosso++
22:34:48 [AK-Scribe]
The Tech plenary is a good place to bring these things up
22:35:04 [RylaDog]
XML Processing something...
22:35:06 [AK-Scribe]
The TAG doesn't solve all the problems, we try to pass them off.
22:35:12 [maxf]
maxf has joined #tp
22:35:21 [PStickler]
The key difference between val: and data: is that data: would force all datatypes to be defined as content types whereas val: URIs are just RDF typed literals in URI form
22:35:30 [AK-Scribe]
Janet Daly: Paul Grosso on IRC suggests limiting the number of posts per day on www-tag
22:35:38 [DanC]
PStickler, pls say that in your draft.
22:36:10 [AK-Scribe]
Arnaud Lehors: I don't think that caring about the importance of a problem implies you want to work on solving it
22:36:11 [PStickler]
The draft has expired. I guess I should re-publish it, with inclusion of the comparison with data:
22:36:12 [JacekK]
outside of AC meetings, the tag list is OK, but in the AC meeting a bit of advertisement for the TAG issues would be helpful. We heard of three issues, we might have heard in less detail of more issues, and maybe early in the morning
22:36:35 [gerald]
gerald has joined #tp
22:37:02 [simonSNST]
can someone mention to folks to turn theirs phones to vibrate or something?
22:37:04 [AK-Scribe]
Al Gilman: Once an issue is accepted, the topic should be moved to a list which is archived, but not distributed
22:37:23 [Norm]
Not a list, but a wiki or other web forum thing.
22:37:40 [DanC]
(I tried launching a TAG wiki early on. it died due a combination of technical and social factors)
22:37:41 [AK-Scribe]
Marty Bingham: I'm concerned that working within WAI, we've been successful with outreach. (?)
22:38:03 [DanC]
Harvy Bingham
22:38:13 [AK-Scribe]
EricP: Report on RDF-Query and RuleML BOF... We had the longest running BOF.
22:38:38 [AK-Scribe]
We talked a lot about abstractions, etc, etc,. We started coming up with a model.
22:39:12 [AK-Scribe]
The problem is that there are RDF recommendations, but now how do you use it? Lots of query languages and protocols.
22:39:17 [AK-Scribe]
Where is the commonality.
22:39:37 [AK-Scribe]
Susan Lesch: Markup language tokens BOF (?)
22:40:17 [AK-Scribe]
We covered Classes, links, etc, etc. We came up with 3 or 4 projects: Help for new/all editors. Document production tools.
22:40:35 [AK-Scribe]
Steven Pemberton: RDF... Who wants to write or read that stuff? (applause)
22:41:16 [DanC]
22:41:22 [AK-Scribe]
TimBL: There are a lot of use cases for RDF queries
22:41:29 [simonSNST]
poor scribe.
22:41:39 [JacekK]
On TAG: maybe the TAG could use bugzilla for issue tracking? (Bugzilla was suggested in the WS-Desc WG)
22:41:42 [AK-Scribe]
But they're not in XML, which is why they're easy.
22:41:50 [DanC]
<- query part of semweb arch meeting
22:41:59 [karl-QA]
"""The first time I tried the RDFLib Python libraries, the lightbulb finally flashed on."""
22:41:59 [libby]
alberto and andy's document
22:42:06 [karl-QA]
22:42:14 [AK-Scribe]
Al Gilman: XAG makes the radical claim that the most useful doc is one readable both by people and machines.
22:42:51 [AK-Scribe]
We have in "accessibility" some information which some users will process and others won't.
22:43:09 [AK-Scribe]
There's a strong appeal that you have and document a model.
22:43:36 [AK-Scribe]
There's not good documentation of what works and doesn't work.
22:44:15 [AK-Scribe]
Speculate we want to do lots of prototyping in RDF along the way
22:44:43 [frankmcca]
frankmcca has joined #tp
22:45:03 [AK-Scribe]
[scribe loses thread of this exposition]
22:46:28 [AK-Scribe]
Brian McBride, HP Labs: There was an occasion some months ago that Ian Horowitz said "no one can write this stuff (RDF-XML)"
22:46:37 [AK-Scribe]
I did it in the back of the room.
22:46:43 [libby]
libby has left #tp
22:46:48 [KevinL]
KevinL has joined #tp
22:46:49 [libby]
libby has joined #tp
22:46:51 [AK-Scribe]
RDF is very elegant. Don't confuse it with the XML syntax
22:47:03 [sjh]
sjh has joined #tp
22:47:26 [AK-Scribe]
Roger Cutler: Al Gilman said "the core of the web is interaction between an individual and a machine"
22:47:35 [AK-Scribe]
I think that's the web of yesterday.
22:47:46 [Steven]
Steven has joined #tp
22:48:09 [Norm]
Norm has joined #tp
22:48:19 [AK-Scribe]
The new metaphor is business to business interaction. The W3C is not in the leadership position. I think it's OASIS
22:48:52 [Steven]
Steven has joined #tp
22:48:59 [AK-Scribe]
Pat Hayes: The DAML is encoded in RDF-XML. There are 6 million lines of code.
22:49:19 [hugo]
hugo has joined #tp
22:49:24 [AK-Scribe]
Who wants to write this stuff? Who wants to write XML? We do it because it's useful.
22:49:45 [PGrosso]
he said "who wants to write HTML"
22:49:53 [Steven]
Steven has joined #tp
22:50:00 [AK-Scribe]
Hakon Lie: Lots of people are using Emacs. It can be written beautifully.
22:50:21 [AK-Scribe]
I think a goal for W3C would be to reuse elements and attributes without using namespaces.
22:50:33 [Steven]
Steven has joined #tp
22:50:37 [AK-Scribe]
IJ: Tantek suggested there be a W3C namespace
22:51:12 [AK-Scribe]
TimBL: I think it's a perfectly reasonable idea.
22:51:27 [DanC]
in "It can be written beautifully." Hakon was talking about HTML.
22:51:50 [AK-Scribe]
[scribe welcomes all corrections]
22:51:59 [AK-Scribe]
Timbl: Just propose a WG
22:52:17 [AK-Scribe]
Steve Bratt: Let's applaud Ian.
22:53:08 [AK-Scribe]
Thanks to Amy Van der Heil, Josh Friel, Saeko Takeuchi, Marisol Diaz,...
22:53:10 [RylaDog]
I really would like to respond to the W3C vs OASIS comment...........
22:53:16 [AK-Scribe]
Thanks to systems team
22:53:23 [AK-Scribe]
thanks to scribes
22:53:38 [AK-Scribe]
Amy: Thanks to Ralph!
22:54:00 [AK-Scribe]
Steve: Thanks to Program Committee
22:54:17 [dbooth]
dbooth has joined #tp
22:54:52 [AK-Scribe]
Steve: Fill out the Survey
22:54:57 [dbooth]
Survey again is at
22:55:10 [Steven]
Reception now?
22:55:17 [JacekK]
yay!
22:55:27 [AK-Scribe]
Reception at 7 PM
22:55:31 [karl-QA]
Danc: It's avery good idea. If you want help as me for very BETA dumb tester
22:55:31 [PGrosso]
PGrosso has left #tp
22:55:35 [karl-QA]
I volunteer
22:55:55 [Zarella]
Zarella has left #tp
22:56:03 [ygonno]
ygonno has left #tp
22:56:25 [caribou]
caribou has left #tp
22:59:39 [yasuyuki]
yasuyuki has left #tp
23:00:20 [JacekK]
JacekK has left #tp
23:03:02 [jim]
jim has joined #tp
23:03:25 [ddahl]
ddahl has joined #tp
23:03:40 [jim]
this is a test
23:03:41 [ddahl]
hi
23:03:45 [RylaDog]
RylaDog has left #tp
23:04:21 [mdubinko]
mdubinko has joined #tp
23:04:37 [ddahl]
ddahl has left #tp
23:05:43 [jim]
jim has left #tp
23:13:09 [marie]
ADJOURNED
23:28:33 [Tantek]
Tantek has joined #tp
23:34:22 [Tantek]
anybody else notice that the diagram on this slide
is missing CSS?
23:38:21 [Tantek]
since all XML based semantic content markup languages can(should?) use CSS for presentation, and using CSS to separate the presentation from the content markup helps accessibility (user style sheets), internationalization (:lang etc.), device independence (media queries)
23:40:07 [RalphS]
rrsagent, please excuse us
23:40:08 [RRSAgent]
I see no action items | http://www.w3.org/2003/03/05-tp-irc | CC-MAIN-2018-26 | refinedweb | 16,983 | 72.5 |
Simple example about animating a button
I post this only for people learning like me. Even if it can be done a lot better we go through a learning curve.
import ui import time ''' this code is not to be considered serious, more the idea than anything else. For new guys like me learning... ''' @ui.in_background def btn_font_animation(sender): start = int(sender.font[1])+1 finish = int(start * 1.4) for i in range(start,finish,2): sender.font = (sender.font[0],i) time.sleep(.02) for i in range(finish,start,-1): sender.font = (sender.font[0],i) time.sleep(.01) sender.font = (sender.font[0], start) if __name__ == '__main__': v = ui.View(name = 'Button Font Animation Test') v.frame = (0,0,540,576) v.background_color = 'red' btn = ui.Button(title = 'Press Me') btn.width, btn.height = 300,100 btn.x = (v.width / 2 ) - (btn.width / 2) btn.y = (v.height / 2 ) - (btn.height /2) - 45 btn.border_width = .5 btn.tint_color = 'white' btn.font = ('<system-bold>', 36) btn.action = btn_font_animation v.add_subview(btn) v.present('sheet')
I could also imagine the talented graphics guys in this community opening a repo of a class of animation effects for buttons/ui elements.
in general, I would caution against using sleep for animations...
ui.animatewas made for this purpose (you provide a function that changes a ui parameter, and a time, and the animation does the interpolation, all while keeping your ui responsive).
however it appears that font size is not
animateable, so sleep is one way to do this. all other positioning attributes, color, transforms, etc are animate able, I suspect this was an oversight...
I think if someone took the lead of designing the interface to the class, I would imagine there would be a lot of activity in a project like this.
Maybe I am wrong, but I dont think so. My casual and inexperienced observations lead me to believe that a lot more tools/reusable code would be written for Pythonista if some really good software architects set out some frameworks for different categories that are Pythonista centric. What to call them? Extensions?, XCMD's, DLL's, plugins....it really doesn't matter. But it is a very familiar concept with operating systems and applications. Anyway, I am not a software architect, but I can write to a specification if one exists.
@JonB, yes I have been using ui.animate for sliding animations etc. I tried many things for the font animation using ui.animate, but could not get it to work (reasonably). I mean without a big fudge. I know for example I could have an object offscreen and using the values of that object to control the font size. It just does not seem right to do it that way. I also know the other advantage of using ui.amimate is its on a different thread, meaning that an animation running with ui.animate and some other task on ui_background will run simultaneously. 2 tasks running on ui_background will be run serially from my experience.
Oh, I am sure the only reason font can not be used is because it's a tuple
you may be able to get two such animations to run by using ui.delay that calls itself, rather than
in_background.
Hmm, I am pretty sure my comments about being able to use ui.animate to change a font size with a control offscreen appears to be garbage. My lack of understanding with what ui,animate is doing. I think I get it now. For some reason I thought the function passed to ui.animate was being called repeatedly. After some experimentation, it's clear it's not the case. I was sure I had seen it exhibit that behaviour, must of got mixed up with ui.background or something. Oh, live and learn. Like learning a foreign language, very embarrassing to talk to locals murdering their language, but it's the only way to learn.
One thing I think I discovered along the way experimenting, is that appears if a button (I only tried with a button) has an alpha of 0.0 it does not respond to clicks. However btn.on_screen reports True. Not sure if this intended or a bug. Or I am still running down the confusion road.
@JonB, yes I see a potential problem with the ui_background approach for ui elements. Keep clicking the btn multiple times, things can go a little haywire. I guess some state flags could help avoid this. I with give the ui.delay a go to do the same thing, but I would have thought it would not be fast enough to get a nice snappy font size zoom, and zoom back out. | https://forum.omz-software.com/topic/1837/simple-example-about-animating-a-button | CC-MAIN-2022-27 | refinedweb | 790 | 69.28 |
IRC log of wam on 2009-03-26
Timestamps are in UTC.
13:00:15 [RRSAgent]
RRSAgent has joined #wam
13:00:15 [RRSAgent]
logging to
13:00:22 [ArtB]
ScribeNick: ArtB
13:00:25 [ArtB]
Scribe: Art
13:00:28 [ArtB]
Chair: Art
13:00:33 [ArtB]
Date: 26 March 2009
13:00:41 [ArtB]
Meeting: Widgets Voice Conference
13:00:42 [tlr]
zakim, call thomas-781
13:00:42 [Zakim]
ok, tlr; the call is being made
13:00:44 [Zakim]
+Thomas
13:00:47 [ArtB]
Agenda:
13:01:04 [ArtB]
Regrets: Jere
13:01:12 [ArtB]
Present: Art, Thomas, Frederick
13:01:44 [ArtB]
Regrets+ Bryan
13:01:54 [Zakim]
+ +1.919.536.aaaa
13:02:02 [Zakim]
+Mark
13:02:03 [Zakim]
+??P9
13:02:10 [darobin]
Zakim, P9 is me
13:02:10 [Zakim]
sorry, darobin, I do not recognize a party named 'P9'
13:02:11 [ArtB]
Present+ Mark
13:02:13 [arve]
arve has joined #wam
13:02:15 [ArtB]
Present+ Andy
13:02:16 [darobin]
Zakim, ??P9 is me
13:02:16 [Zakim]
+darobin; got it
13:02:27 [ArtB]
Present+ Robin
13:02:29 [mpriestl]
mpriestl has joined #wam
13:02:56 [Zakim]
+ +47.23.69.aabb
13:03:00 [arve]
zakim, aabb is me
13:03:00 [Zakim]
+arve; got it
13:03:04 [ArtB]
RRSAgent, make log public
13:03:12 [ArtB]
Present+ Arve
13:03:51 [Zakim]
+[IPcaller]
13:03:54 [ArtB]
RRSAgent, make minutes
13:03:54 [RRSAgent]
I have made the request to generate
ArtB
13:04:07 [ArtB]
Present+ Marcos
13:04:12 [arve]
zakim, whoi is making noise?
13:04:12 [Zakim]
I don't understand your question, arve.
13:04:14 [darobin]
Zakim, who's making noise?
13:04:16 [arve]
zakim, who is making noise?
13:04:32 [Zakim]
darobin, listening for 10 seconds I heard sound from the following: fjh (5%), Art_Barstow (58%), darobin (65%), arve (15%), [IPcaller] (35%)
13:04:38 [ArtB]
Topic: Review and tweak agenda
13:04:39 [darobin]
Zakim, mute me me
13:04:39 [Zakim]
I don't understand 'mute me me', darobin
13:04:43 [w3c_]
w3c_ has joined #wam
13:04:43 [darobin]
Zakim, mute me
13:04:46 [Zakim]
arve, listening for 10 seconds I heard sound from the following: Art_Barstow (63%), darobin (66%)
13:04:46 [ArtB]
AB: I posted the agenda on March 25
Note DigSig is not on today's agenda.
13:04:48 [Zakim]
darobin should now be muted
13:04:56 [ArtB]
... Are there any change requests?
13:04:57 [fjh]
q+
13:05:02 [fjh]
zakim, unmute me
13:05:02 [Zakim]
fjh should no longer be muted
13:05:16 [ArtB]
FH: want to add DigSig namespaces
13:05:45 [ArtB]
AB: OK but will limit the time
13:05:54 [ArtB]
AB: any other requests?
13:05:55 [ArtB]
[None]
13:06:08 [ArtB]
Topic: Announcements
13:06:14 [ArtB]
AB: any short announcements? I don't have any.
13:06:19 [ArtB]
[ None ]
13:06:23 [ArtB]
Topic: DigSig
13:06:36 [ArtB]
AB: go ahead Frederick
13:06:41 [ArtB]
FH: I made a few changes
13:06:56 [ArtB]
... checker complained
13:07:00 [ArtB]
MC: fixed it
13:07:11 [ArtB]
FH: namespace question
13:07:18 [ArtB]
... is it OK to not use date
13:07:30 [ArtB]
TR: I need to check the namespace policy
13:07:33 [tlr]
13:07:33 [darobin]
Zakim, unmute me
13:07:33 [Zakim]
darobin should no longer be muted
13:07:54 [ArtB]
RB: namespace policy should permit this
13:08:08 [ArtB]
TR: I don't see any problems; we can go ahead
13:08:14 [ArtB]
FH: then I think we're all set
13:08:18 [ArtB]
MC: agreed
13:08:25 [fjh]
zakim, mute me
13:08:25 [Zakim]
fjh should now be muted
13:08:52 [ArtB]
AB: the DigSig WD should be published early next week
13:08:54 [ArtB]
Topic: P&C spec: L10N model
13:09:08 [ArtB]
AB: one of the open issues is if the P&C's localization model should be one master config file only versus a master config file plus locale-specific config files to override definitions in the master config file. Marcos created lists of advantages and disadvantages of both models. Some people have expressed their preference. The tally appears to be: Only one: Marcos; One Plus: Josh, Benoit; Can Live With Either: Jere. The thread is here: <
13:09:14 [Zakim]
+ +45.29.aacc
13:10:22 [ArtB]
AB: I would like to get consensus i.e. a resolution on this today and a gentle reminder that "I Can Live With It" will help us get the next LCWD published. Let's start with Marcos - do you see a single model that addresses everyone's concerns?
13:11:35 [ArtB]
MC: the new model doesn't address the concern where multiple localizers are involved in the process pipeline
13:11:54 [ArtB]
... the new model is easier to implement
13:12:26 [ArtB]
... agree the config file could grow to an un-manageable size
13:12:40 [ArtB]
... the I18N WG said the new model is OK
13:12:54 [ArtB]
... I think we could merge the models
13:13:09 [ArtB]
BS: I don't understand the merge model Marcos
13:13:38 [ArtB]
MC: have the main config file but if the app has lots of localized data that data can be put in separate files
13:13:46 [MoZ]
MoZ has joined #wam
13:14:58 [abraun]
abraun has joined #wam
13:15:26 [ArtB]
AB: any other comments?
13:15:40 [w3c_]
when using both models there would need a sort of precedence of some sort so that 2 information do not overlap
13:16:05 [ArtB]
RB: so is the idea to have a single file for v1.0 and then in v1.* move to support the old model
13:16:15 [ArtB]
MC: yes, that is true
13:16:56 [darobin]
RB: I think it makes sense to start with something simple and only add the more advanced features if we need them later
13:17:02 [ArtB]
MC: the model is to use a single config doc for 1.0
13:17:20 [ArtB]
... inside that file the xml:lang attr is used to localize specific elements and attrs
13:17:36 [fjh]
s/fixed it/will fix it/
13:18:17 [ArtB]
... in subsequent version of P+C we add support for locale-specific conf files
13:18:32 [ArtB]
AB: is this right Marcos?
13:18:35 [ArtB]
MC: yes
13:19:18 [ArtB]
AB: any comments about this evolution path
13:19:29 [ArtB]
... Note that timeless is not on the call
13:19:36 [w3c_]
w3c_ has left #wam
13:19:54 [ArtB]
... He objected to the new model but did not include any rationale for his objection
13:20:15 [ArtB]
... Benoit, what are your thoughts on this evolution proposal?
13:20:22 [ArtB]
BS: I think I can live with it
13:20:38 [ArtB]
... I do think localizers having their separate files is better
13:21:02 [ArtB]
... but having just one config file wil be easier for the developer
13:21:39 [ArtB]
AB: I think we have consensus to go forward with Marcos' proposal
13:22:11 [w3c]
w3c has joined #wam
13:22:24 [ArtB]
AB: draft resolution: for v1.0 we will use the new l10n model proposed by Marcos and consider multiple locale-specific config files for the next version
13:22:32 [ArtB]
AB: any objections?
13:22:34 [ArtB]
[ None ]
13:23:30 [ArtB]
RESOLUTION: for v1.0 we will use the new L10N model proposed by Marcos and consider multiple locale-specific config files for the next version
13:23:38 [ArtB]
Topic: P&C spec: status of <access> element:
13:23:47 [ArtB]
AB: last week the <status> element was noted as an open issue that must be addressed before we can publish a new LCWD.
If I recall correctly, no one volunteered to submit any related inputs. The note in the ED says "ISSUE: This element is currently under review. A new proposal will be available in the next few days that will provide the ability to list which URIs can be accessed.".
13:24:09 [darobin]
s/<status>/<access>/
13:24:27 [mpriestl]
q+
13:24:37 [ArtB]
AB: Marcos, what is the status and what specific inputs are needed?
13:24:55 [ArtB]
MC: I am researching how to address this
13:25:01 [ArtB]
... looking at what Opera does
13:25:14 [Marcos]
I need to align it with
13:25:15 [ArtB]
... but we probably will want to do something a bit different
13:25:33 [ArtB]
... the above is by Dan Connolly
13:25:43 [tlr]
q+
13:26:16 [ArtB]
TR: what alignment with DC's draft is needed?
13:26:25 [ArtB]
MC: need to align with terminology
13:26:59 [ArtB]
... need to break up the scheme parts to diferent attrs
13:27:06 [ArtB]
... e.g. port can be a list
13:27:45 [ArtB]
TR: this is similar to some work in POWDER WG
13:28:13 [ArtB]
... wonder if this needs to depend on the URLs in DC's work
13:28:20 [ArtB]
... but we can take it to e-mail
13:28:52 [ArtB]
... doing this should take a week or two and will require some changes
13:29:17 [ArtB]
RB: can we please get a pointer to POWDER work?
13:29:30 [ArtB]
TR: will get one; not sure if there needs to be a dependency
13:29:38 [ArtB]
... we should take this to e-mail
13:30:45 [ArtB]
MP: we previously discussed a hybrid approach
13:31:19 [ArtB]
... and then define some precedence rules if there are conflicts in host elements
13:31:36 [ArtB]
... for v1 can we just go with URI
13:31:57 [ArtB]
... and if a hybrid approach really is needed we do that in a subsequent version of the spec
13:32:11 [ArtB]
... What do you think about that approach?
13:32:22 [ArtB]
MC: could be a prob in some use cases
13:32:32 [ArtB]
... some web apps have many subdomains
13:32:47 [ArtB]
... then those couldn't be accessed
13:32:58 [ArtB]
RB: but could use *.foo
13:33:10 [ArtB]
MC: yes, that's an option
13:33:12 [darobin]
RB: e.g.
http://*.googlemaps.com
13:33:33 [Zakim]
-[IPcaller]
13:34:00 [Zakim]
+??P2
13:34:14 [ArtB]
AB: any last comments before this discussion moves to the mail list
13:35:05 [ArtB]
MC: if we use wildcards, it opens a different set of questions
13:35:16 [ArtB]
... e.g. what part of the scheme are "*" permitted
13:35:39 [ArtB]
RB: typically, don't need too many ports
13:36:07 [ArtB]
... want to start with something simple for v1
13:36:18 [ArtB]
... and possibly ask for more feedback
13:37:14 [ArtB]
AB: please take the discussion to the mail list
13:37:49 [ArtB]
... MC, can you make a short proposal on the mail list?
13:37:54 [ArtB]
MC: yes I will
13:38:09 [ArtB]
... re wildcarding, CORS tried this and it didn't really work
13:38:15 [ArtB]
Topic: P&C spec: <update> element given Apple's patent disclosure
13:38:53 [ArtB]
AB: Apple's disclosure raises the question "what, if any, changes must be made to the P&C spec?" where one major concern is if P&C has a dependency on Updates. There appear to be two relevant pieces of text: Section 7.14 (<update> element)
and Step 7.
13:39:29 [Zakim]
- +1.919.536.aaaa
13:39:34 [ArtB]
AB: My take is that Section 7.14 is OK as written given what we know today (PAG hasn't even had its first meeting). The element's processing in Step 7 could be qualified with something like "this step is only performed if the UA implements [Widgets Updates] but I can live with the existing text.
13:40:16 [ArtB]
AB: One other option is to put a Warning in 7.14 e.g. "Warning: this feature may be removed because ...".
13:40:54 [ArtB]
AB: what are people's thoughts on this?
13:41:28 [ArtB]
BS: without any info from the PAG, I think we should keep it and add some type of warning
13:42:13 [ArtB]
TR: is the question, how far can the spec go given the PAG?
13:42:37 [ArtB]
... I think the group cannot go beyond LC but will verify with Rigo
13:42:44 [Zakim]
+ +1.919.536.aadd
13:43:44 [ArtB]
AB: the syntax is in the PC spec but the proc model is in the Updates spec
13:43:50 [ArtB]
MC: yes that is correct
13:44:20 [ArtB]
MC: we could remove <update> element from P+C and define it in the Updates spec
13:44:34 [ArtB]
AB: any comments on Marcos' proposal?
13:44:40 [ArtB]
AB: I like that proposal
13:44:49 [ArtB]
BS: I would be opposed to it
13:45:07 [ArtB]
TR: I will discuss this Rigo and cc member-webapps
13:45:08 [Benoit]
but I do not want to hold the P&C spec with this
13:45:48 [ArtB]
TR: I can understand the concern about a normative ref for a spec that may be stalled
13:46:13 [ArtB]
AB: we will wait for some feedback from TR and Rigo before we implement MC's proposal
13:46:27 [ArtB]
Topic: P&C spec: step 7 - need to add <preference> element and the <screenshot> element;
13:46:39 [ArtB]
AB: last week <preference> and <screenshot> were noted as needing work. I believe Robin agreed to help with this. What is the status and plan?
13:47:20 [ArtB]
RB: I haven't made a lot of progress on this
13:47:42 [ArtB]
MC: I will try to finish this by tomorrow
13:47:55 [ArtB]
... I have been blocked by the consensus on the L10N model
13:48:10 [ArtB]
... but now that we have that consensus, I can make the appor changes
13:48:23 [ArtB]
Topic: P&C spec: XML Base
13:48:33 [ArtB]
AB: Thomas and Marcos have exchanged some emails about this
What is the status and what specifically needs to be done to address the issue?
13:49:16 [ArtB]
MC: this relates to the L10N model too
13:49:32 [MoZ]
MoZ has joined #wam
13:50:07 [ArtB]
... the xml:lang value needs to match the name of a localized folder
13:50:29 [ArtB]
... TR is wondering if XML base is the right solution for this
13:51:07 [ArtB]
... there are some other related issues too; I've been talking to Robin and others in Opera about this
13:51:25 [ArtB]
... Not having a URI scheme for widgets cause problems too
13:51:36 [ArtB]
... ZIP relative paths are not URIs
13:52:24 [ArtB]
TR: we want a model to make refs from within the html
13:52:40 [ArtB]
... but mapping URI refs to something else
13:52:53 [ArtB]
... using XML base is not going to help
13:53:09 [ArtB]
... as it confuses the left and right sides of the mapping
13:53:42 [ArtB]
... The spec lang MC wrote redfines XML base
13:54:06 [ArtB]
MC: I still want to try to solve this with XML Base
13:54:42 [ArtB]
... our solution will have to work with HTML base
13:55:06 [ArtB]
TR: if there is a URI scheme defined that points at things within the widget
13:55:16 [ArtB]
... then we can use that URI scheme throughout
13:55:18 [ArtB]
MC: yes
13:56:33 [ArtB]
TR: does the base paramter sit on the URI side of the mapping or the other side
13:56:51 [ArtB]
... similar to some questions we had about References in DigSig
13:57:12 [ArtB]
... struggling with a missing design decision
13:58:23 [ArtB]
TR: there are two things: uri ref and the other is paths to the zip
13:59:13 [ArtB]
... think most things should be in URI side but some things should be on the zip side
13:59:25 [asledd]
asledd has joined #wam
13:59:27 [ArtB]
... Need to get some consistency in the various specs
13:59:39 [ArtB]
RB: agree we must solve this problem
14:00:00 [tlr]
RB: metadata files will feel more comfortable in URI space
14:00:28 [tlr]
TR: This is another instance of the URI discussion. We have some things that live in URI space. We have some things that live in Zip path space. We need to do a translation between the two and say where that happens.
14:00:37 [darobin]
RB: we have to solve this anyway for the content of the widgets (HTML, SVG), so since we need to solve it, and since it would be more comfortable to use URIs in config.xml we ought to solve it once and use it everywhere
14:00:39 [tlr]
TR: Right now, we're reinventing that translation over and over again. That way lies madness
14:01:58 [ArtB]
.AB: other than "take this to the mail list", who is going to do what to help us get closure here?
14:02:21 [darobin]
s/.AB/AB/
14:02:29 [ArtB]
AB: any last comments?
14:02:45 [ArtB]
Topic: A&E spec
14:02:59 [ArtB]
AB: the latest ED of the A&E spec includes many Red Block Issues. I'd like to go thru as many of them at a high level and for each of them get a sense of what specific inputs are needed and the plan to get those inputs. Latest ED is:
14:04:06 [ArtB]
Arve: Marcos, the latest ED says 25 March but I don't think it is the latest version
14:04:15 [ArtB]
AB: yes, I was wondering the same thing
14:04:25 [fjh]
fjh has joined #wam
14:05:31 [ArtB]
Arve: should we go thru all of the Red Blocks?
14:05:41 [ArtB]
AB: I want to understand what needs to be done
14:05:54 [ArtB]
Arve: re Window issue
14:06:50 [ArtB]
... who can talk to HTML WG
14:07:06 [ArtB]
RB: I think Window will be split out as soon as an Editor is identified
14:07:15 [ArtB]
MC: but no one has agreed to be the Editor
14:07:40 [ArtB]
AB: so what does this mean in terms of the progression of this spec?
14:07:55 [ArtB]
MC: I don't think we need a depedency on the Window spec
14:08:21 [ArtB]
... We can just add some text about the "top level ... "
14:08:29 [ArtB]
Arve: yes, we can make it informative ref
14:08:39 [darobin]
14:08:57 [ArtB]
TR: agree, it can be Informative ref
14:09:21 [ArtB]
AB: do we consensus the dependency is an Informative ref?
14:10:14 [ArtB]
Arve: yes
14:10:40 [Benoit]
Benoit has joined #wam
14:10:56 [ArtB]
... I can re-write this Red Block
14:11:14 [ArtB]
... I only want a DOM 3 Core ref and Widget ref but nothing else
14:11:30 [ArtB]
... and XHR as is done already
14:11:32 [Benoit]
Benoit has joined #wam
14:11:51 [ArtB]
AB: any objections to Arve's proposal?
14:12:09 [ArtB]
RB: that's OK; could even make the dependencies in a sep doc
14:12:12 [Benoit]
Benoit has joined #wam
14:12:19 [ArtB]
[ No objections ]
14:12:34 [ArtB]
AB: next, Section 5 - Resolving DOM Nodes
14:12:44 [ArtB]
Arve: we don't need to say anything about the URI scheme here
14:13:17 [ArtB]
... I propose removing this section
14:13:33 [Benoit]
Benoit has joined #wam
14:13:54 [ArtB]
... and be a bit more specific about how URIs are used where appropriate in the spec
14:14:23 [ArtB]
AB: so you propose remove seciton 5?
14:14:25 [ArtB]
Arve: yes
14:14:33 [ArtB]
AB: any objections to that proposal?
14:14:37 [ArtB]
[ None ]
14:14:59 [ArtB]
AB: next is 7.3 - identifier attr
14:15:35 [ArtB]
... "Issue: how does an author access the widget's id as declared in the config document? Also, what happens if this is not unique? How is uniqueness assured?
14:16:03 [ArtB]
Arve: not sure what we should do here
14:16:34 [ArtB]
... my proposal is to use an equivalent element in the config file and to use that
14:16:55 [ArtB]
AB: any questions or concerns about that proposal?
14:17:10 [ArtB]
... Marcos, what element would be used?
14:17:14 [ArtB]
MC: not sure
14:17:31 [tlr]
q+
14:17:51 [ArtB]
AB: so the action for you Arve is to check the config file and come back with a proposal?
14:17:55 [ArtB]
Arve: yes
14:18:38 [ArtB]
ACTION: Arve create a proposal for the A+E's section 7.3 Red Block issue re the identifier attribute
14:18:38 [trackbot]
Created ACTION-325 - Create a proposal for the A+E's section 7.3 Red Block issue re the identifier attribute [on Arve Bersvendsen - due 2009-04-02].
14:19:04 [ArtB]
TR: is this just needed at runtime?
14:19:22 [ArtB]
... is this put in the base URI
14:19:32 [ArtB]
... want to understand what is needed for
14:19:43 [ArtB]
Arve: we do not need to define how it is used
14:20:04 [ArtB]
... at runtime, a unique id is generated
14:20:15 [ArtB]
... and randomizes the base uri
14:20:24 [ArtB]
TR: this seems like an imple detail
14:20:37 [ArtB]
... want to understand how it is used by widget instance
14:20:39 [darobin]
s/imple/simple/
14:20:44 [Benoit]
Benoit has joined #wam
14:20:48 [ArtB]
MC: yes, what would a developer use it for?
14:21:13 [Benoit_]
Benoit_ has joined #wam
14:21:14 [ArtB]
TR: what is this attr used for?
14:21:33 [tlr]
it might be that the attribute you really want is origin
14:21:35 [ArtB]
... I don't think I'm getting an answer that substantiates its need
14:21:42 [ArtB]
MC: yes, I agree with TLR
14:21:42 [tlr]
but that's defined elsewhere ;)
14:21:51 [ArtB]
Arve: perhaps you're right
14:22:24 [ArtB]
BS: what about cross-widget comm?
14:22:43 [ArtB]
MC: not sure we want to include it for that use Benoit
14:22:54 [ArtB]
TR: I propose we remove identifier attribute
14:23:09 [ArtB]
Arve: if wanted to use post message, could use this
14:23:55 [tlr]
sure
14:24:07 [ArtB]
AB: let's stop discussion and take this to the mail list
14:24:13 [tlr]
AB: raise question in response to Arve's draft on the mailing list
14:24:15 [tlr]
TR: sure
14:25:53 [ArtB]
Arve: I will submit proposals for all of the Red Block issues starting with the one in Section 7.8
14:25:59 [ArtB]
AB: that would be excellent Arve!
14:26:11 [ArtB]
Topic: Window Modes spec
14:26:22 [ArtB]
AB: what is the status and next steps?
14:26:37 [arve]
anyone who wants to derive an origin url, could do so using document.domain
14:26:47 [ArtB]
MC: we don't have any new status to report
14:27:04 [ArtB]
... we need an editor
14:27:13 [ArtB]
AB: do we have a skeleton doc?
14:27:32 [ArtB]
... I mean anything checked into CVS?
14:27:35 [ArtB]
MC: No
14:27:44 [ArtB]
AB: any volunteers to drive this?
14:27:49 [tlr]
arve, nooo
14:27:55 [ArtB]
RB: I will take it!
14:28:13 [ArtB]
... it may be about 10 days though before I can start working on it
14:28:34 [ArtB]
AB: excellent Robin!
14:28:38 [Zakim]
- +1.919.536.aadd
14:29:04 [fjh]
fixes in widget signature complete, apart from latest comments received from Bondi and date of document
14:29:08 [ArtB]
AB: any other hot topics
14:29:13 [ArtB]
AB: Meeting Adjourned
14:29:21 [Zakim]
-fjh
14:29:24 [Zakim]
-Art_Barstow
14:29:25 [Zakim]
-Mark
14:29:25 [Zakim]
- +45.29.aacc
14:29:25 [Zakim]
-Thomas
14:29:27 [Zakim]
-darobin
14:29:27 [Zakim]
-??P2
14:29:30 [ArtB]
RRSAgent, make minutes
14:29:30 [RRSAgent]
I have made the request to generate
ArtB
14:29:39 [Zakim]
-arve
14:29:40 [Zakim]
IA_WebApps(Widgets)9:00AM has ended
14:29:41 [Zakim]
Attendees were fjh, Art_Barstow, Thomas, +1.919.536.aaaa, Mark, darobin, +47.23.69.aabb, arve, [IPcaller], +45.29.aacc, +1.919.536.aadd
14:29:48 [arve]
tlr: sorry, a bit imprecise
14:30:06 [tlr]
+trout
14:30:15 [Marcos]
hhehe
14:30:19 [Marcos]
leave poor Zakim alone
14:30:21 [arve]
but I think we can ignore identifier until needed
14:30:21 [tlr]
zakim, darobin has trout
14:30:21 [Zakim]
sorry, tlr, I do not recognize a party named 'darobin'
14:30:45 [tlr]
arve, agree on ignoring identifier until needed
14:30:53 [tlr]
it won't do us any good now
14:31:27 [Marcos]
fjh, do you want met to handle Rainer's comments ?
14:32:16 [ArtB]
RRSAgent, bye
14:32:16 [RRSAgent]
I see 1 open action item saved in
:
14:32:16 [RRSAgent]
ACTION: Arve create a proposal for the A+E's section 7.3 Red Block issue re the identifier attribute [1]
14:32:16 [RRSAgent]
recorded in | http://www.w3.org/2009/03/26-wam-irc | CC-MAIN-2016-22 | refinedweb | 4,407 | 76.45 |
There are 2 parts to getting the -fassociative-math command-line flag translated to LLVM FMF:
- In the driver/frontend, we accept the flag and its 'no' inverse and deal with the interactions with other flags like -ffast-math. This was mostly already done - we just needed to pass the flag on as a codegen option. The test file is complicated because there are many potential combinations of flags here.
- In codegen, we map the option to FMF in the IR builder. This is simple code and corresponding test.
For the motivating example from PR27372:
float foo(float a, float x) { return ((a + x) - x); } $ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math -emit-llvm | egrep 'fadd|fsub' %add = fadd nnan ninf nsz arcp contract float %0, %1 %sub = fsub nnan ninf nsz arcp contract float %add, %2
So 'reassoc' is off as expected (and so is the new 'afn' but that's a different patch). This case now works as expected end-to-end although the underlying logic is still wrong:
$ ./clang -O2 27372.c -S -o - -ffast-math -fno-associative-math | grep xmm addss %xmm1, %xmm0 subss %xmm1, %xmm0
We're not done because the case where 'reassoc' is set is ignored by optimizer passes. Example:
$ ./clang -O2 27372.c -S -o - -fassociative-math -emit-llvm | grep fadd %add = fadd reassoc float %0, %1 $ ./clang -O2 27372.c -S -o - -fassociative-math | grep xmm addss %xmm1, %xmm0 subss %xmm1, %xmm0 | https://reviews.llvm.org/D39812 | CC-MAIN-2019-35 | refinedweb | 245 | 65.01 |
David Miller <davem@davemloft.net> writes:> I agree with this analysis.>> The Linux man page for times() explicitly lists (clock_t) -1 as a> return value meaning error.>> So even if we did make some effort to return errors "properly" (via> force_successful_syscall_return() et al.) userspace would still be> screwed because (clock_t) -1 would be interpreted as an error.>> Actually I think this basically proves we cannot return (clock_t) -1> ever because all existing userland (I'm not talking about inside> glibc, I'm talking about inside of applications) will see this as an> error.>> User applications have no other way to check for error.>> This API is definitely very poorly designed, no matter which way we> "fix" this some case will remain broken.A possible remedy is to return the ticks since process start time, whichdelays the wrap around much further. POSIX only demands consistencywithin the same process | http://lkml.org/lkml/2007/11/8/51 | CC-MAIN-2015-11 | refinedweb | 147 | 57.06 |
Start a Second Android Activity From The First
If you are new to Android development it is often easier to see simple examples rather than decode the developers online documentation. Building on the simple example seen in Your First Android Java Program - Hello World! another screen containing a simple message is loaded from a button. This demonstrates the principles of starting a new User Interface (UI) screen, helping to understanding on how Android handles UI creation.
To get a screen up and running in an app the following is required:
- The definition of the screen must be composed in a layout.
- An
Activityclass must be defined in a Java class file to handle the screen.
- Android must be notified that the Activity exists, via the app's manifest file.
- The app must tell Android to start the new screen.
Studio handles most of the plumbing when adding a new activity to an app, performing tasks 1 to 3 automatically when an Activity is created using Studio's options. This leaves only a small amount of code to be written to load a second screen from the first app screen.
Start with a Basic Hello World App
Fire up Android Studio and open the Hello World project (see Your First Android Java Program - Hello World! on how to create the Hello World app).
Add Another Activity
In the Project explorer tree in the Studio select the app folder. (See the Android Project Structure article to become familiar with project files). Using the File menu or the context menu (commonly right-click) add a new Activity to the app via New and Activity. A Basic Activity can be used:
Set the properties for the new Activity, e.g.:
- Activity Name - Screen2
- Layout Name - secondscreen
- Title - Screen 2
Select the Parent to be the first Activity, com.example.helloworld.MainActivity (the parent is the screen the app returns to when the back button is pressed). All other settings remain as default (not a Launcher Activity, not a Fragment, Package set to com.example.helloworld, Target Source Set is main).
Click Finish and the Activity and associated layout is created. (If an error message is displayed on the new screen try the Invalidate Caches / Restart option on the File menu as discussed in the article Your First Android Java Program.)
Add a Button to the First Screen
Add a
Button to the first screen in the layout folder. (Tip: To find an item, e.g. layout, click the to level in the Project explorer and start typing the item's name to start a search.) The file activity_main.xml contains the screen elements for the first screen. With the file open drag and drop a Button widget onto the screen:
Set the Button's text to Next, use the right hand Component Tree tab to see the Properties.
Add Text to the Second Screen
Open the content_screen2.xml file in the layout folder and drag and drop a
TextView on to the screen:
As for the Button set the TextView text, e.g. to Hello! Again.
AndroidManifest.xml
When the Screen2 Activity was added to the app the correct definitions were added to the app's manifest file, AndroidManifest.xml in app/src/main. This file must be present in an app project. It provides the Android Operating System (OS) all the information it needs to manage the application and the components it contains. If AndroidManifest.xml is opened it will be seen that both the initial screen, MainActivity and the new screen, Screen2 are defined in activity sections. The information in these sections tells Android about the screens in an app.
Add Code to Start Screen 2
The button on the first screen will tell Android of our "intention" to start the Activity that loads the new Screen 2. To do this the button runs an
onClick method:
public void onClick(View v) { Intent intent = new Intent(MainActivity.this, Screen2.class); startActivity(intent); }
In onClick the name of the required activity, Screen2.class, is passed in an
Intent object to the
startActivity method. The startActivity method is available on a
Context object; Context has a host of useful methods which provide access to the environment in which the app is executing. Context, and hence startActivity is always available within an Activity due to Android API subclassing.
The onClick method is connected to the button by an
OnClickListener callback from a
View. The following code is added to the MainActivity class, before the last closing brace (curly bracket, }) in the file MainActivity.java. Press Alt-Enter when prompted for the correct
import statements to be added automatically:
class handleButton implements OnClickListener { public void onClick(View v) { Intent intent = new Intent(MainActivity.this, Screen2.class); startActivity(intent); } }
The Intent object is also given a reference to the app Context, and since MainActivity is subclassed we can use this (here MainActivity.this because of the inner class for the onClick handler). The startActivity method gives Android the opportunity to perform any required housekeeping and then fire up the Activity named in the Intent (Screen2).
The findViewById method, available to activities, is used to get a reference to the button. The setOnClickListener method can then be called to link the button to the onClick code. This is done before the closing brace at the end of the
onCreate method in MainActivity:
findViewById(R.id.button).setOnClickListener(new handleButton());
This is the full MainActivity code:
package com.example.helloworld; import android.content.Intent; handleButton()); } class handleButton implements View.OnClickListener { public void onClick(View v) { Intent intent = new Intent(MainActivity.this, Screen2.class); startActivity(intent); } } }
When the app runs the first screen will show:
And pressing the Next button shows:
A button is not required to go back to the first screen. Android automatically provides a back button as part of the platform.
This article has shown the basics of starting another screen in an app. The code is available in a zip file, secondscreen.zip, with an instructions.txt on how to import the project into Studio. Later articles will show how data can be passed between screens.
See Also
- See some more Android Example Projects.
Archived Comments
Arthur Lu in January 2018 said: For some reason, whenever I run my code, it is unable to open the second activity, and instead re-opens the main activity.
Arthur Lu in January 2018 said: OK, so it turns out I accidentally deleted some important code on the second activity.
Gili Alafia in January 2018 said: But that doesn't answer the question of how to open the app from screen2 to mainactivity.
Dan from Tek Eye in January 2018 said: As it says in the article, use back (e.g. via the arrow on the action bar) to return to the first activity. If you want an explicit button just use the same type of code in the second screen that was used for the main activity:
. . . findViewById(R.id.button2).setOnClickListener(new handleButton()); } class handleButton implements View.OnClickListener { public void onClick(View v) { Intent intent = new Intent(Screen2.this, MainActivity.class); startActivity(intent); } }
Author:Daniel S. Fowler Published: Updated: | https://tekeye.uk/android/examples/starting-a-second-activity | CC-MAIN-2018-26 | refinedweb | 1,191 | 55.84 |
A Practical Implementation of Event Sourcing with MySQL
True Story Follows
I started writing this entry in November of 2016, and with the results of the most recent state ballot measures, marijuana will soon be completely legal in California. With that in mind, one thing is for sure: event sourcing will become more popular as a means to model the world in an application. I was introduced to this approach of persisting data while perusing YouTube videos, but only recently did I make an attempt to bring it to the streets of production. I wanted to take a moment to document the approach and considerations. At this point in time, the system has not been fully testing in production over a long duration, so that will have to wait for another day.
The Problem
Scalability is a typical concern expressed with application development, and a typical architecture involves object relational mapping where entities used to model the world are mapped 1:1 in a database table. Therefore, to process these entities, a network request must be made to the database to ascertain the current state of the world. Increased workers will increase load to the database. Responsible engineers will add caching layers that add acceptable complexity. But, with both the database and a caching server (i.e. Redis), the required network calls inherently create the possibility of failure, which must be accounted for in any calling code. Queries to each of these sets of servers will be optimized for remote network calls, which will differ from code that’s contained entirely within a single process. Function calls to an API inside of a process generally don’t require awkward query patterns optimized for network access (i.e. I can call a function thousands of times with no concern for adverse performance consequences), and those calls will be entirely deterministic with no possibility of failure. The point, then, is that as a distributd application grows, the amount of object distribution in a typical setup will increase which proportionally increases the opportunities for failure.
Event Sourcing
Unbenknownst to me, object relational mapping is simply one way to model the world in a persistence layer, and it’s possible to represent state in fundamentally different manners. In particular, event sourcing is the idea that state can be obtained by applying an ordered series of all state changes that have ever occurred. No destructive updates happen, and instead all writes to a database are append-only. Replay those events in order to arrive at the current state of the world.
Practical Implementation
I won’t focus too much on what event sourcing is or why it’s beneficial because there are an abundant papers and videos on the subject (mostly by Greg Young and Martin Fowler). Example implementations, however, were not as abundant.
High Level Overview
A core concept to event sourcing is that we’re effectively taking the idea of a commit log which might typically exist in a database anyway and we’re bubbling that up to the application level. The fundamental idea in my design is that we want to control as much as possible on the application side to include object schemas, timestamp generation, and concurrency control. In this way, I’m explicitly not tied to any particular database, and it would be trivial to change persistence layer should I desire.
The most basic premise is that we’re writing to an append only database table. In my case, I achieved this by using a timestamp rather than an auto increment ID. In my opinion, this is simpler just to not have to worry about cross data center concerns and eliminate any dependency on a particular database.
Typical examples of event sourcing, at least to my knowledge, assume eventual consistency so that database changes can be published. In my use case, however, consistency was a must and therefore a conventional read query is made whenever an up to date state of the world is needed.
Here’s where things start to get a little bit bananas. The core advantage of an event sourced setup is that I don’t need to query a large volume of data to ascertain the state of the world. I only need to fetch the differential from when I last read from the database. In the general case, this is simple and straightforward. However, an edge case exists in which timestamps generated at write time are persisted to the database out of order. If I continuously only fetch a differential based on my last read timestamp, it’s possible that my application could miss a single event in the system, which over time could create some extremely obscure bugs.
I chose to deal with the aforementioned problem in a similar manner to how VOIP phones work with what’s called a jitter buffer. The idea is that there’s an arbitrary time range that’s used to buffer incoming events so that we can re-order them if packets arrive out of order. For a VOIP phone, this is something like 300 milliseconds, and the delay is not discernable to a human. What this means in the design here for event sourcing is that we allot some buffer range (in my case it’s an arbitrary and conservatively high value of 5 seconds) where we always set the last read timestamp as the current time minus the buffer range. Then, subsequent queries will query from that timestamp and forward. This will result in duplicate rows being fetched, but it will also guarantee that we have a complete list of all events that have happened assuming it takes less than 5 seconds for a single write to happen after a timestamp is generated on the application side. This can become confusing for sure on the application side and is probably the biggest drawback of the above approach because the mental barrier to understand the system for onboarding teammates will be higher. The implication of replaying events over and over is that we need to be able to reset the state of the world to a previous state of the world in a point in time. Therefore, code in your state machine will end up looking something like this:
@classmethod def ingest_events(cls, complete_ordered_events, maybe_incomplete_jitter_events): Context.inconsistent_state_of_world = Context.inconsistent_state_of_world.apply_events( complete_ordered_events ) Context.consistent_state_of_world = Context.inconsistent_state_of_world.apply_events( maybe_incomplete_jitter_events )
I’m still not done making things weird. The above scenario, where I need to be able to mutate a datastructure and then return it to its original state brings me to a situation where I can finally have a practical reason to bring purely functional data structures to a production environment. At least, the concepts behind them, namely copy-on-write operations are used for every update to an object. This effectively means that old references to an object are still unchanged and still valid, and new changes are purely non-destructive.
The Database Table
Again, Greg Young outlines the basic strategy for how to create a database table for event sourcing, but I made some changes to his approach based on some assumptions I was able to make that he was not. Namely, data needs to be denormalized to optimize reads, but while Greg accounts for this in the database table design, the use case I’m putting together assumes that data is denormalized directly in memory on a worker machine.
Therefore, the table is made up of a timestamp in microseconds (BIGINT), an aggregate UUID (16 bytes), and an aggregate version (an integer) to create a composite primary key. This is intentionally the only index in this database table. Now we have additional columns for the event type ID (an integer) that will allow the application to discern what kind of event happened and how to interpret the additional data. The next column is a blob of bytes that’s just a serialized version of the event that happened. The data can be de-serialized based on the event type ID. Finally, there’s an additional column for event metadata (another binary blob of some serialized object). Here we can include things that might end up help troubleshoot a problem down the road. In my case, I’m including the user ID of the person that took the action, the git hash of the version of the code running when the write happened, the process ID of the application that made the write, and the host machine.
While the metadata piece may sound a little aggressive, in my first instance of having to debug an issue that came up, it was the most luxurious debugging experience I’ve ever had. For example, the version of the code was known, so I knew exactly where to start looking if it was a recent change that introduced the problem, and I was able to quickly infer that two separate but related rows were being written from separate hosts entirely, which in the context of the problem I was debugging was enough to isolate exactly what the problem was.
> describe event_store; +-------------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------------+--------------+------+-----+---------+----------------+ | timestamp_us | bigint(20) | NO | PRI | NULL | auto_increment | | aggregate_uuid | binary(16) | NO | PRI | NULL | | | aggregate_version | mediumint(9) | NO | PRI | NULL | | | event_type_id | smallint(6) | NO | | NULL | | | event_data | longtext | NO | | NULL | | | event_meta | text | NO | | NULL | | +-------------------+--------------+------+-----+---------+----------------+ 6 rows in set (0.00 sec)
Write Pattern
Writes are fast because the table writes can only be maximally optimized in one way in an append only fashion. Any additional indices on a database table will also slow down writes, which in this case we don’t have. While writes are completely separate from reads in event sourcing, in my code I accounted for this inconvenience by adding a “read your own writes” feature. Concurrency control is already implemented during writes. I won’t elaborate on this because Greg Young already has an in-depth and well written explanation that I won’t do justice, but it is worth nothing that while Greg advocates using stored procedures for optimistic concurrency control, I chose to opt for a round trip to the database so that I wasn’t dependent on MySQL. But I digress. The point is, if an application instance writes a change, and it’s been determined that no conflicting writes happened, then it’s safe to apply my own writes to the local state of the world even though we have a possible stale picture of the remainder of the world.
Read Pattern
The beauty of event sourcing is that writes are maximally optimized with an append only pattern, and reads are also maximally optimized because we read with the same high locality in a linear fashion, and we only need to read the differential of the data. The events are applied to an in-memory representation of the world on each worker machine, and now actual queries to the in-memory representation are also maximally optimized because the data should be denormalized into an appropriate data structure for the intended query patterns (additionally, the objects are entirely in memory, so it’s also fast).
As noted briefly above, the true luxury comes from the fact that any queries to the in-memory state of the world have no possible failure cases, and results are entirely deterministic. If you want to impress the ladies or the gentleman based on whatever you’re into by showing off a complete absence of runtime exceptions, this is how you do it!
Finally, the reads to the database are against rows that are inherently immutable. We can also take advantage of this by making all of our queries transaction isolation level READ UNCOMMITTED. What this means for MySQL is that the rows returned to you could possibly be dirty and some of the columns in the data might very well be stale. But for our use case, we already know that no destructive updates happen, so querying in this manner is perfectly acceptable. The actual performance enhancement here is that MySQL will read rows without locking anything. So you can forget about those 60 nanoseconds to acquire a lock! But also, you can read without blocking writes, and throughput should be maximized if multiple workers are reading and writing.
Example Usage
Now, our API can end up looking something like this:
import datetime from cqrs.api import SnapshotOfWorld from cqrs.world import example_module_for_objects # context manager will handle locking world and applying updates from the # database # sla_latency will ensure that we only update the world once every n seconds. # This is not required, but if your application doesn't necessarily need # consistency (mine needs it in some cases; other cases not), then you can get # some big performance wins with SnapshotOfWorld(sla_latency=datetime.timedelta(seconds=15)): # inside context manager, all objects exist in memory and are synchronized # with the database arbitrary_id = 100 example_module_for_objects.get_by_id(arbitrary_id)
Tying It All Together
Again, there’s more details to the implementation that I won’t go into great detail about because I’d just be restating what’s written on the fancy pants white papers about event sourcing. Namely, you still need to implement a background cron job that periodically takes snapshots of the world at a particular timestamp and then saves those serialized versions in Amazon S3. Without that, startup for an individual worker will require reading the entire database table of all events for all time. Based on the design outlined here though, one thing you’ll need to do is to serialize the world from some point in the not so distant past to avoid the possibility of another write coming in as you’re serializing the world that has an identical timestamp as the last read timestamp. Now, once you load a snapshot from a particular timestamp, you only need to query for values whose timestamps are greater than the snapshot timestamp.
In my implementation, the developers should have a reasonably straightforward experience in implementing new features around event sourcing by simply subclassing an abstract class and implementing the abstract methods. These classes are “state machines” and they’re responsible for ingesting events that have already been ordered and deduped (for optimistic concurrency control) by the event reader class. The state machine specifies what event types to listen for, how to serialize and deserialize a snapshot of its section of the state of the world, and how to apply incoming events to “the world”.
“The world” is the in-memory representation of all objects that should be queryable by the rest of your application code. Since each application instance only needs to be concerned with its own state relative to the stream of events, it’s also trivial to add things like caching decorators where the cache is busted in between every state change, which in practice should be infrequent enough to make the caching worth it. Caching in this case simply means an additional query optimization on what should already be an extremely fast in-memory query.
Probably the last note to make is a bug I traced down based on this implementation. Events are read in batches of 10,000 and are fed to each individual state machine. If one state machine depends on another state machine, you’ll run into errors if the events aren’t fed to the dependent state machine first. Therefore, in your initial setup you’ll need to create a dependency tree of state machines and establish the order in which all state machines should be fed events.
Conclusion
Event sourcing is pretty hot right now. As long as your data set is finite, event sourcing in my opinion is a quite reasonable approach. In terms of development time, it is so far my experience that the up front cost in creating the system is fairly high, but the cost of adding new events and new features is much lower. | http://scottlobdell.me/2017/01/practical-implementation-event-sourcing-mysql/ | CC-MAIN-2019-26 | refinedweb | 2,644 | 54.66 |
Hi Marcus ,
I actually tried the same for extending a standard service for lead replication.
Basically from sproxy for the service I went to the proxy editor and added the data type I created in MDR namespace ( while the service itself is in esr) .
When I excuted the service to test from sproxy the extended field appears along with the new extended fields . In wsdl section some how the new fields does not appear right away , but when I change the user settings for wsdl display and come back to sproxy I was able to locate the extended field in wsdl file .We have used this wsdl ( in HCI ) and are able to pass the data .
The only question remains is that when I move the proxy enhancement to qa system I might have to maintain the name space manually again in the system before moving the transport . however how it will behave is still to be checked .
Do let me know if you have already achieved the same in your case.
Regards
Aditya
Hello Aditya,
thank you for answering to my question.
We just tried to send the data and as it arrived correctly. So I did not investigate further. But in the meantime the enhancements are visible in the WSDL.
As far as I remember we maintained the namespace manually in the test system.
Best regards,
Markus
Hi All ,
Just wanted to provide an update that this works . I.e MDR extension ( data type extension in ABAP) can be used in an ESR based webserivice. This is specifically useful in cases like ours where we USE HCI and have no access to ESR objects for edit .
Scenario : We extended the lead replication web service for Sap hybris marketing ( on premise ) which would be sending the Leads to C4C via HCI . We had a requirement to Add some additional fields in the lead Proxy object .
Steps :1) The the custom name space for Backend MDR in each client where you want this extension ( Say )
2) Naviagte to the Proxy structure where you want to make changes via the proxy editor .
3) Create the data type extension object and get the WSDL or the extended proxy object in Sproxy .
( you can refer to the other blogs for details )
4) Search for Exit / Badi / Implicit enhancement where the values for these custom fileds could be passed back to the generated extended structure .
5) Provide this WSDL to HCI and make corresponding settings .
Hope this helps ...
Add comment | https://answers.sap.com/questions/136279/meta-data-repository-mdr-to-enhance-standard-webse.html | CC-MAIN-2018-43 | refinedweb | 416 | 70.13 |
Flush data to output file
#include <funtools.h>
void FunFlush(Fun fun, char *plist)
The FunFlush routine will flush data to a \s-1FITS\s0 output file. In particular, it can be called after all rows have been written (using the FunTableRowPut() routine) in order to add the null padding that is required to complete a \s-1FITS\s0 block. It also should be called after completely writing an image using FunImagePut() or after writing the final row of an image using FunTableRowPut().
The plist (i.e., parameter list) argument is a string containing one or more comma-delimited keyword=value parameters. If the plist string contains the parameter \*(L"copy=remainder\*(R" and the file was opened with a reference file, which, in turn, was opened for extension copying (i.e. the input FunOpen() mode also was \*(L"c\*(R" or \*(L"C\*(R"), then FunFlush also will copy the remainder of the \s-1FITS\s0 extensions from the input reference file to the output file. This normally would be done only at the end of processing.
Note that FunFlush() is called with \*(L"copy=remainder\*(R" in the mode string by FunClose(). This means that if you close the output file before the reference input file, it is not necessary to call FunFlush() explicitly, unless you are writing more than one extension. See the evmerge example code. However, it is safe to call FunFlush() more than once without fear of re-writing either the padding or the copied extensions.
In addition, if FunFlush() is called on an output file with the plist set to \*(L"copy=reference\*(R" and if the file was opened with a reference file, the reference extension is written to the output file. This mechanism provides a simple way to copy input extensions to an output file without processing the former. For example, in the code fragment below, an input extension is set to be the reference file for a newly opened output extension. If that reference extension is not a binary table, it is written to the output file:
/* process each input extension in turn */ for(ext=0; ;ext++){ /* get new extension name */ sprintf(tbuf, "%s[%d]", argv[1], ext); /* open input extension -- if we cannot open it, we are done */ if( !(ifun=FunOpen(tbuf, "r", NULL)) ) break; /* make the new extension the reference handle for the output file */ FunInfoPut(ofun, FUN_IFUN, &ifun, 0); /* if its not a binary table, just write it out */ if( !(s=FunParamGets(ifun, "XTENSION", 0, NULL, &got)) || strcmp(s, "BINTABLE")){ if( s ) free(s); FunFlush(ofun, "copy=reference"); FunClose(ifun); continue; } else{ /* process binary table */ .... } }
See funtools(7) for a list of Funtools help pages | https://www.carta.tech/man-pages/man3/FunFlush.3.html | CC-MAIN-2018-51 | refinedweb | 447 | 59.33 |
This is the assignment for weeks 8-9, on Reader and State monads.
Jacobson's reader monad only allows for establishing a single binding relationship at a time. It requires considerable cleverness to deploy her combinators in a way that establishes multiple binding relationships, as in
John_x thinks Mary_y said he_x likes her_y.
See her 1999 paper for details. Essentially, she ends up layering several Reader monads over each other.
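Her actual (variable-free) combinators aren't reproduced here, but the flavor of the layering can be sketched in OCaml, purely for illustration — the names and types below are this sketch's own, not Jacobson's:

```ocaml
type entity = string

(* A "reader" meaning has one pronoun's value still pending;
   layering a second Reader leaves two values pending at once. *)
type 'a reader  = entity -> 'a
type 'a reader2 = entity -> entity -> 'a

(* pronouns denote identity readers *)
let he  : entity reader = fun x -> x
let her : entity reader = fun y -> y

(* "he likes her" with both pronouns still unbound: a doubly-layered reader *)
let likes subj obj = subj ^ " likes " ^ obj
let he_likes_her : string reader2 = fun x y -> likes (he x) (her y)
```

Feeding the two layers their values one at a time is what establishes the two binding relationships separately.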
Here is code for the arithmetic tree Chris presented in week 8. It computes
\n. (+ 1 (* (/ 6 n) 4)). Your task is to modify it to compute
\n m. (+ 1 (* (/ 6 n) m)). You will need to modify five lines. The first one is the type of a boxed int. Instead of
`type num = int -> int`, you'll need
type num = int -> int -> int
The second and third are the definitions of `mid` and `map2`. The fourth is the one that encodes the variable `n`, the line that begins `(Leaf (Num (fun n -> ...`. The fifth line you need to modify is the one that replaces "4" with "m". When you have these lines modified, you should be able to execute the following expression:
# match eval t2 with Leaf (Num f) -> f 2 4;;
- : int = 13
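The week-8 tree code itself isn't reproduced here, but the shape of the five modifications can be sketched in isolation (a hypothetical reconstruction, not the actual file):

```ocaml
(* the boxed type now reads two ints, n and m *)
type num = int -> int -> int

let mid (x : int) : num = fun _n _m -> x
let map2 (op : int -> int -> int) (u : num) (v : num) : num =
  fun n m -> op (u n m) (v n m)

(* the two variable leaves *)
let var_n : num = fun n _m -> n
let var_m : num = fun _n m -> m

(* \n m. 1 + (6 / n) * m *)
let expr : num =
  map2 (+) (mid 1) (map2 ( * ) (map2 (/) (mid 6) var_n) var_m)
```

Here `expr 2 4` evaluates to `13`, matching the expected toplevel result.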
Based on the evaluator code from assignment 7, and what you've learned about the Reader monad, enhance the arithmetic tree code to handle an arbitrary set of free variables. Don't use Juli8 libraries for this; just do it by hand. Return to the original code (that is, before the modifications required by the previous problem).
Start like this:
type env = string -> int
type num = env -> int
let my_env = fun var -> match var with "x" -> 2 | "y" -> 4 | _ -> 0;;
When you have it working, try
# match eval t2 with Leaf (Num f) -> f my_env;;
- : int = 13
For this problem you don't need to demonstrate how to implement binding expressions like `let x = 3 in ...`. You just need to compute the value of possibly open expressions, relative to the supplied `env`.
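Here is one possible shape for the generalized machinery, sketched standalone (the helper names are this sketch's own; your tree code will differ):

```ocaml
type env = string -> int
type num = env -> int

let mid (x : int) : num = fun _e -> x
let map2 op (u : num) (v : num) : num = fun e -> op (u e) (v e)

(* the new leaf form: look a variable up in the environment *)
let var (s : string) : num = fun e -> e s

let my_env = fun v -> match v with "x" -> 2 | "y" -> 4 | _ -> 0

(* 1 + (6 / x) * y *)
let expr : num =
  map2 (+) (mid 1) (map2 ( * ) (map2 (/) (mid 6) (var "x")) (var "y"))
```

With `my_env` supplying `x = 2` and `y = 4`, `expr my_env` comes out to `13`, just as in the single-variable version.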
Okay, now what changes do you need to make to add in expressions like `let x = 3 in ...`?
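One way to see what's needed: a binding form has to evaluate its body in a shifted environment. A hedged standalone sketch (the names `insert` and `let_` are this sketch's own, not a required interface):

```ocaml
type env = string -> int
type num = env -> int

(* extend an environment with one new binding *)
let insert (k : string) (v : int) (e : env) : env =
  fun s -> if s = k then v else e s

(* evaluate u in the current env, then evaluate body with k bound to it *)
let let_ (k : string) (u : num) (body : num) : num =
  fun e -> body (insert k (u e) e)

let empty : env = fun _ -> 0
let var s : num = fun e -> e s
```

So `let_ "x" (fun _ -> 3) (var "x")` run in the empty environment yields `3`.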
Add in the Option/Maybe monad. Start here:
type num = env -> int option
Show that your code handles division by zero gracefully.
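The key change is that `map2` must thread `None` through, and division gets its own guarded combiner. A sketch under assumed names (not the assignment's required interface):

```ocaml
type env = string -> int
type num = env -> int option

let mid x : num = fun _ -> Some x
let var s : num = fun e -> Some (e s)

(* None on either side propagates outward *)
let map2 op (u : num) (v : num) : num =
  fun e -> match u e, v e with
    | Some a, Some b -> Some (op a b)
    | _ -> None

(* division refuses a zero denominator instead of raising *)
let div (u : num) (v : num) : num =
  fun e -> match u e, v e with
    | Some _, Some 0 -> None
    | Some a, Some b -> Some (a / b)
    | _ -> None

let env0 = fun v -> match v with "x" -> 0 | _ -> 1
```

Evaluating `div (mid 6) (var "x")` in `env0` returns `None` rather than raising `Division_by_zero`, and any larger expression containing it is `None` too.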
Consider the following code, which uses the Juli8 libraries for OCaml.
module S = Monad.State(struct type store = int end);;
let xx = S.(mid 1 >>= fun x ->
            put 20 >>
            modify succ >>
            get >>= fun y ->
            mid [x;y])
in S.run xx 0
Recall that `xx >> yy` is short for `xx >>= fun _ -> yy`. The equivalent Haskell code is:
xx >> yyis short for
xx >>= fun _ -> yy. The equivalent Haskell code is:
import Control.Monad.State let { xx :: State Int [Int]; xx = return 1 >>= \x -> put 20 >> modify succ >> get >>= \y -> return [x,y] } in runState xx 0
Or:
import Control.Monad.State
let { xx :: State Int [Int];
      xx = do { x <- return 1; put 20; modify succ; y <- get; return [x,y] } }
in runState xx 0
Don't try running the code yet. Instead, get yourself into a position to predict what it will do, by reading the past few discussions about the State monad. After you've made a prediction, then run the code and see if you got it right.
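If you don't have the Juli8 `Monad.State` module handy, a minimal hand-rolled equivalent (a sketch of the interface, not Juli8's actual implementation) is enough to test your prediction:

```ocaml
(* a State computation over an int store: store in, payload and store out *)
type 'a state = int -> 'a * int

let mid (x : 'a) : 'a state = fun s -> (x, s)
let (>>=) (u : 'a state) (f : 'a -> 'b state) : 'b state =
  fun s -> let (a, s') = u s in f a s'
let (>>) u v = u >>= fun _ -> v
let get : int state = fun s -> (s, s)
let put s' : unit state = fun _ -> ((), s')
let modify f : unit state = get >>= fun s -> put (f s)
let run (u : 'a state) (s : int) = u s

let xx =
  mid 1 >>= fun x ->
  put 20 >>
  modify succ >>
  get >>= fun y ->
  mid [x; y]
```

Running `run xx 0` returns the payload together with the final store, so you can check both parts of your prediction at once.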
Here's another one:
(* start with module S = ... as before *)
let yy = S.(let xx = modify succ >> get in
            xx >>= fun x1 ->
            xx >>= fun x2 ->
            xx >>= fun x3 ->
            mid [x1;x2;x3])
in S.run yy 0
The equivalent Haskell code is:
import Control.Monad.State
let { xx :: State Int Int;
      xx = modify succ >> get;
      yy = xx >>= \x1 -> xx >>= \x2 -> xx >>= \x3 -> return [x1,x2,x3] }
in runState yy 0
What is your prediction? What did OCaml or Haskell actually evaluate this to?
Suppose you're trying to use the State monad to keep a running tally of how often certain arithmetic operations have been used in computing a complex expression. You've come upon the design plan of using the same State monad module
S from the previous problems, and defining a function like this:
let counting_plus xx yy = S.(tick >> map2 (+) xx yy)
How should you define the operation
tick to make this work? The intended behavior is that after running:
let zz = counting_plus (S.mid 1) (counting_plus (S.mid 2) (S.mid 3)) in S.run zz 0
you should get a payload of 6 (that is, 1+(2+3)) and a final store of 2 (because + was used twice).
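If the State monad's plumbing here feels opaque, the following Python analogy may help (it is not part of the exercise and is not written against the Juli8 S module): a stateful computation is modeled as a function from a store to a (payload, store) pair, and the counter is threaded through explicitly.

```python
# A stateful computation is modeled as a function: store -> (payload, new_store).
def mid(x):
    return lambda s: (x, s)          # inject a payload, leave the store alone

def bind(xx, f):                     # the >>= of the State monad
    def run(s):
        a, s1 = xx(s)
        return f(a)(s1)
    return run

def tick(s):                         # bump the store; unit-like payload
    return (None, s + 1)

def counting_plus(xx, yy):
    # tick >> map2 (+) xx yy, spelled out with explicit binds
    return bind(tick, lambda _:
                bind(xx, lambda a:
                     bind(yy, lambda b: mid(a + b))))

zz = counting_plus(mid(1), counting_plus(mid(2), mid(3)))
print(zz(0))  # (6, 2): payload 1+(2+3), and + was used twice
```

Running zz on an initial store of 0 exhibits exactly the payload-6, store-2 behavior described above.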
Instead of the design in the previous problem, suppose you had instead chosen to do things this way:
let counting_plus xx yy = S.(map2 (+) xx yy >>= tock)
How should you define the operation
tock to make this work, with the same behavior as before?
Here is how to create a monadic stack of a Reader monad transformer wrapped around an underlying Option monad:
module O = Monad.Option   (* not really necessary *)
module R = Monad.Reader(struct type env = (* whatever *) end)
module RO = R.T(O)        (* wrap R's Transformer around O *)
You can inspect the types that result by saying
#show RO.result (in OCaml version >= 4.02), or by running:
let env0 = (* some appropriate env, depending on how you defined R *) in let xx = RO.(mid 1) in RO.run xx env0
and inspecting the type of the result. In Haskell:
import Control.Monad.Reader -- substitute your own choices for the type Env and value env0 let { xx :: ReaderT Env Maybe Int; xx = return 1 } in runReaderT xx env0
Okay, here are some questions about various monad transformers. Use OCaml or Haskell to help you answer them. Which combined monad has the type of an optional list (that is, either
None or Some [...]): an Option transformer wrapped around an underlying List monad, or a List transformer wrapped around an underlying Option monad? Which combined monad has the type of a function from stores to a pair ('a list, store): a List transformer wrapped around an underlying State monad, or a State transformer wrapped around an underlying List monad?
The last two problems are non-monadic.
This is a question about native mutation mechanisms in languages that have them, like OCaml or Scheme. What an expression like this:
let cell = ref 0 in let incr c = (let old = !cell in let () = cell := old + 1 in ()) in (incr cell, !cell, incr cell, incr cell)
will evaluate to will be ((), n, (), ()) for some number n between 0 and 3. But which number it is will be sensitive to the details of OCaml's evaluation strategy for evaluating tuple expressions. How can you avoid that dependence? That is, how can you rewrite such code to guarantee that the values in the 4-tuple are evaluated left-to-right? Show us a strategy that works no matter what the expressions in the tuple are, not just these particular ones. (But you can assume that the expressions all terminate.)
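One standard rewrite can be sketched in Python (where tuple components already evaluate left-to-right, so this only illustrates the shape of the fix, not OCaml specifics): bind each component to its own name in sequence, then assemble the tuple from the already-computed values.

```python
cell = [0]                      # a one-slot "ref cell"

def incr(c):                    # increment; returns None, like OCaml's unit
    c[0] += 1

# Don't build the tuple from effectful expressions directly; instead,
# name each component in sequence so the order is fixed by the bindings:
v1 = incr(cell)
v2 = cell[0]                    # the store is observed after exactly one incr
v3 = incr(cell)
v4 = incr(cell)
result = (v1, v2, v3, v4)
print(result)                   # (None, 1, None, None)
```

The same trick in an eager language amounts to a chain of sequential let-bindings whose order the language does guarantee.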
In the evaluator code for Week 7 homework, we left the
LetRec portions unimplemented. How might we implement these for the second,
env-using interpreter? One strategy would be to interpret expressions like:
letrec f = \x. BODY in TERM
as though they really read:
let f = FIX (\f x. BODY) in TERM
for some fixed-point combinator
FIX. And that would work, supposing you use some fixed point combinator like the "primed" ones we showed you earlier that work with eager/call-by-value evaluation strategies. But for this problem, we want you to approach the task a different way.
Begin by deleting all the
module VA = ... code that implements the substitute-and-repeat interpreter. Next, change the type of
env to be an
(identifier * bound) list. Add a line after the definition of that type that says
and bound = Plain of result | Recursive of identifier * identifier * term * env. The idea here is that some variables will be bound to ordinary
results, and others will be bound to special structures we've made to keep track of the recursive definitions. These special structures are akin to the
Closure of identifier * term * env we already added to the
term (or really more properly
result) datatype. For
Closures, the single identifier is the bound variable, the term is the body of the lambda abstract, and the env is the environment that is in place when some variable is bound to this lambda abstract. Those same parameters make up the last three arguments of our Recursive structure. The first argument in the Recursive structure is to hold the variable that our letrec construction binds to the lambda abstract. That is, in:
letrec f = \x. BODY in TERM
both of the variables
f and x need to be interpreted specially when we evaluate BODY, and this is how we keep track of which variable is f.
Just making those changes will require you to change some other parts of the interpreter to make it still work. Before trying to do anything further with
letrec, try finding what parts of the code need to be changed to accommodate these modifications to our types. See if you can get the interpreter working again as well as it was before.
OK, once you've done that, then add an extra line:
| LetRec of identifier * term * term
to the definition of the
term datatype. (For letrec IDENT1 = TERM1 in TERM2. You can assume that TERM1 is always a Lambda term.) Now what will you need to add to the eval function to get it to interpret these terms properly? This will take some thought, and a good understanding of how the other clauses in the eval function are working.
Here's a conceptual question: why did we point you in the direction of complicating the type that environments associate variables with, rather than just adding a new clause to the
result type, as we did with Closures?
This is the mail archive of the cygwin-apps@cygwin.com mailing list for the Cygwin project.
Igor Pechtchanski wrote:
> On Sat, 12 Jul 2003, Max Bowsher wrote:
>>> 2003-07-11  Igor Pechtchanski  <pechtcha@cs.nyu.edu>
>>>
>>>     * String++.h (TOSTRING): New macro.
>>> [snip]
>>
>> Do we need __TOSTRING__ and TOSTRING? Since they are defined in the same
>> file, it isn't really making the namespace cleaner.
>
> Yes, we do need two macros. The helper macro (__TOSTRING__) can be named
> something else, but it's needed to force parameter expansion. Otherwise,
> TOSTRING(__LINE__) would have produced "__LINE__", not the stringified
> value of __LINE__. This is straight from the K&R book...

OK, I didn't know that. Would you add a comment on this subtlety?

> However, I just looked, and this kind of macro seems to be defined already
> in /usr/include/symcat.h (XSTRING). I'm not sure whether it's better to
> use the pre-existing macro, or to to define our own (with a more intuitive
> name, IMO). The macro is simple enough. Opinions?

IMO, define our own - more obvious name, plus symcat.h would really be a
Cygwin header - My self-built gcc-mingw 3.3 doesn't search /usr/include.
I don't know whether 3.2-3 does or not.

Max.
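The two-level stringizing trick under discussion is easy to demonstrate. A minimal, self-contained sketch (reusing the same macro names as the patch):

```c
/* The helper forces the argument to be macro-expanded before # stringizes it. */
#define __TOSTRING__(x) #x
#define TOSTRING(x)     __TOSTRING__(x)

/* Without the helper, stringizing freezes the literal token... */
static const char *frozen   = __TOSTRING__(__LINE__);  /* "__LINE__" */
/* ...with it, __LINE__ expands first, giving the actual line number. */
static const char *expanded = TOSTRING(__LINE__);      /* e.g. "8" */
```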
Working with real-time applications requires extra attention to analyze the data that is constantly transferred between the client and the server. To assist in this task, Fiddler Everywhere has the best resources through WebSocket inspectors. Check out this article on how to use them.
With the frequent use of smart devices connected to the internet, the need for real-time communication is something increasingly present in people’s lives. Real-time applications have flourished, especially in recent years. Perhaps the best known of them are social networking apps, but there are several others that have the same relevance and need—video conferencing apps, online games, chat, ecommerce transactions and many others.
With the high demand for this type of application, the need to maintain these applications has also increased to support user demand. In this article, we’ll build a real-time app and look at the importance of analyzing the data that makes up those apps, the benefits of capturing WebSocket traffic, and how this can be an easy task using Fiddler Everywhere.
WebSockets are web applications that perform a process called “handshake” between the application and the server. In simple terms, the application establishes a connection with the server and transfers data between them, keeping the connection open, which makes this process very fast—thus allowing communication in real-time.
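At the HTTP level, that handshake is an ordinary GET carrying an Upgrade header, answered by a 101 response. The sketch below uses the illustrative key/accept values from RFC 6455; the host and path are just examples:

```
GET /chat HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the same TCP connection stays open and carries framed WebSocket messages in both directions.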
This technology is not exclusive to .NET. There are WebSockets present in other platforms, such as Node. In the context of ASP.NET Core, it is possible to implement WebSockets from scratch, but there is a very useful native library called SignalR, which contains everything needed to implement a real-time application. If you want to know more about SignalR, I suggest reading the official Microsoft documentation, which you can access through this link: Introduction to SignalR.
Next, we will create an application and implement a WebSocket. Then we will analyze how this process is done through Fiddler Everywhere.
The project that we will create in this article will be a web project developed with .NET 6, in which we will use the resources of SignalR. It will be a simple chat message application to demonstrate using Fiddler Everywhere and how to analyze data between client and server with the help of Fiddler features.
You can access the complete source code of the final project at this link: Source Code.
The SignalR server library is included in the ASP.NET Core shared framework. The JavaScript client library isn’t automatically included, so that is why, in this tutorial, we’ll be using Library Manager (LibMan) to get the client library from unpkg.
So, in Visual Studio:
A
Hub class allows the client and server to communicate directly. SignalR uses a Hub instead of controllers like in ASP.NET MVC. So, we need to create a class that will inherit from the
Hub main class.
In the SignalRChat project folder, create a “Hubs” folder. In the Hubs folder, create the “ChatHub” class with the following code:
using Microsoft.AspNetCore.SignalR;

namespace ChatSignalR.Hubs;

public class ChatHub : Hub
{
    public async Task SendMessage(string user, string message) =>
        await Clients.All.SendAsync("ReceiveMessage", user, message);
}
The ChatHub class inherits from SignalR’s native class (SignalRHub). The
Hub class is responsible for managing connections, groups and messaging systems.
The SendMessage method can be called by a connected client and send a message to all clients. This method is asynchronous to provide maximum scalability. The JavaScript client code that calls this method will be discussed later.
Now that we’ve created the
Hub class, we need to add the SignalR and ChatHub class settings we just created, so replace the code in the Program.cs file with the code below:
using ChatSignalR.Hubs;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddRazorPages();
builder.Services.AddSignalR();

var app = builder.Build();

app.UseStaticFiles();
app.UseRouting();

app.MapRazorPages();
app.MapHub<ChatHub>("/chatHub");

app.Run();
The next step is to create the client that will communicate with the server (hub). First we will add the HTML markup with the text boxes and the button to send the message, then we will add the JavaScript code that will contain the functions and actions performed by the HTML page.
Replace the contents of the
Pages/Index.cshtml file with the following code:
@page
<div class="container">
    <div class="row p-1">
        <div class="col-1">User</div>
        <div class="col-5"><input type="text" id="userInput" /></div>
    </div>
    <div class="row p-1">
        <div class="col-1">Message</div>
        <div class="col-5"><input type="text" id="messageInput" /></div>
    </div>
    <div class="row p-1">
        <div class="col-6 text-end">
            <input type="button" id="sendButton" value="Send Message" />
        </div>
    </div>
    <div class="row p-1">
        <div class="col-6">
            <hr />
        </div>
    </div>
    <div class="row p-1">
        <div class="col-6">
            <ul id="messagesList"></ul>
        </div>
    </div>
</div>
<script src="~/js/signalr/dist/browser/signalr.js"></script>
<script src="~/js/chat.js"></script>
Now we just need to create the JavaScript code that will execute the functions of sending and receiving messages.
The code below creates and starts a connection, and adds in the Send button a handler that sends messages to the hub. Finally, it adds a handler to the connection object that receives messages from the hub and adds them to the list.
So, in the
wwwroot/js folder, create a file with the extension .js (chat.js) with the following code:
"use strict";

var connection = new signalR.HubConnectionBuilder().withUrl("/chatHub").build();

// Disable the send button until the connection is established.
document.getElementById("sendButton").disabled = true;

connection.on("ReceiveMessage", function (user, message) {
    var li = document.createElement("li");
    document.getElementById("messagesList").appendChild(li);
    li.textContent = `${user} says ${message}`;
});

connection.start().then(function () {
    document.getElementById("sendButton").disabled = false;
}).catch(function (err) {
    return console.error(err.toString());
});

document.getElementById("sendButton").addEventListener("click", function (event) {
    var user = document.getElementById("userInput").value;
    var message = document.getElementById("messageInput").value;
    connection.invoke("SendMessage", user, message).catch(function (err) {
        return console.error(err.toString());
    });
    event.preventDefault();
});
Run the application via Visual Studio or terminal.
The name and message are displayed on both pages instantly.
Real-time communication requires extra attention to the events that happen while the application is running, from its inception to the present moment. As the connection is established only once, the speed with which information is crossed between the client and the server makes it necessary to use tools that capture the details of these operations.
Imagine an online game where some players are reporting bugs when they enter a certain location within the game. How would you go about finding out where the problem is if, every millisecond, thousands of pieces of information are transferred between the player’s client and the game’s server? You would need to filter and analyze the information relevant to that area where users reported the problem. Some specialized tool are necessary for this purpose.
Fiddler Everywhere is perfect for this type of task, as it has several features that help a lot in the work of analyzing the handshake between applications.
The tools available on Fiddler Everywhere are listed below for a complete analysis of the execution of our chat application. You can check the official documentation of all inspector types functions at this link: WebSocket Inspectors.
By definition, Fiddler Everywhere captures all the traffic present on your machine. To analyze the application that we created earlier, we need to filter it in the list. The GIF below shows how to do this:
Below are the tabs available in Fiddler Everywhere and how to use them in the application we just built.
The Handshake tab for the WebSocket API provides several types of inspection tools, which allow you to examine different parts of the WebSocket requests and responses—each of which is listed below and the abstracted details of our application’s execution.
1.1 Headers Inspector
The Header Inspector tab allows you to view the HTTP headers of the request and response.
In this tab we can check the HTTP method used—in this case, GET.
The path of the URL being requested—(/chatHub?id=…).
The HTTP version—HTTP/1.1.
The Request line can consist of one or more lines containing name-value pairs of metadata about the request and the client, such as User-Agent and Accept-Language.
Each HTTP response starts with plain text headers that describe the result of the request. The first line of the response (the Status line) contains the following values:
The HTTP version—HTTP/1.1.
The response status code—101.
The Status line can consist of one or more lines containing name-value pairs of metadata about the response and the server, such as response file length and content type.
1.2 Parameter Inspector
The Parameter Inspector, only available in the Request section, displays the content of any parameters of input terminals. In the chat app example, we do not have values being sent and received via the URL. Instead, other values are displayed in key/value format, such as "transport" = "webSockets" and "requestUrl", among others.
1.3 Cookies Inspector
The Cookies Inspector displays the contents of any outbound Cookie and Cookie2 request headers and any inbound Set-Cookie, Set-Cookie2 and P3P response headers.
1.4 Raw Inspector
Raw Inspector allows you to view the complete request and response, including headers and bodies, in text format. Most of the inspector represents a large area of text that displays the body of text interpreted using the detected character set with the headers, the byte order marker or an embedded META tag declaration.
If the request or the response is displayed with the content encoded or compressed, the Raw Inspector comes with a special decoding button (located in the Inspector toolbar). Clicking on it will decode the content.
Also available are the Preview Inspector and Body Inspector, but they don’t fit the context of the example in the article.
The Messages tab renders a list of WebSocket messages sent from the client or received from the server. The list is constantly populated with new future messages until two-way communication is disconnected.
Each incoming WebSocket message can be inspected separately through the Metadata Inspector and the Message Inspector.
2.1 Metadata Inspector
The Metadata Inspector contains timestamps and masking information about the selected WebSocket message.
2.2 Message Inspector
The Message Inspector contains the non-masked message content in plain text or JSON (depending on the message format).
The use of WebSocket Inspectors is indispensable for working with real-time applications. Fiddler Everywhere has everything needed for a complete analysis of data traffic in this type of application.
In this article, we created an example of a real-time application and saw how to use the features available in Fiddler through the WebSocket inspectors, which make the analysis task much easier. | https://www.telerik.com/blogs/how-to-use-fiddler-everywhere-real-time-apps | CC-MAIN-2022-21 | refinedweb | 1,644 | 54.63 |
Let's have a quick wander down the code and see what's what…
#include "gl_core_3_2.h"
#include <GLFW/glfw3.h>
The first thing to mention is I've used a code generator to make me a procedure loader to load up all the functions that appeared after OpenGL 1.1 – why this isn't done automatically when a core profile context is created, well, maybe I shouldn't expect "common" sense!
I used the generator here which both seems robust and should work OK on other platforms too (Although I could guess developing windows OpenGL applications is not a joyous experience!)
Next we have a very simple set of shaders to make up a simple shader program
const char* vertex_shader =
/*01*/ "#version 150 core \n"
/*02*/ " \n"
/*03*/ "in vec3 vp; \n"
/*04*/ "uniform float u_time; \n"
/*05*/ "out float blank; \n"
/*06*/ " \n"
/*07*/ "void main () { \n"
/*08*/ "  vec4 p = vec4(vp, 1.0); \n"
/*09*/ "  p.x = p.x + (sin(u_time+p.y)/4.0); \n"
/*10*/ "  p.y = p.y + (cos(u_time+p.x)/4.0); \n"
/*11*/ "  gl_Position = p; \n"
/*12*/ "  blank = sin((u_time+p.x+p.y)*4.0); \n"
/*13*/ "}";
const char* fragment_shader =
/*01*/ "#version 150 core \n"
/*02*/ " \n"
/*03*/ "out vec4 frag_colour; \n"
/*04*/ "uniform float u_time; \n"
/*05*/ "in float blank; \n"
/*06*/ " \n"
/*07*/ "void main () { \n"
/*08*/ "  frag_colour = vec4 (0.5, abs(sin(u_time)), abs(cos(u_time)), 1.0); \n"
/*09*/ "  if (blank<0) frag_colour = vec4 (0.5, abs(cos(u_time)), abs(sin(u_time)), 1.0); \n"
/*10*/ "}";
There’s not a great deal to say here, except do note that I label line numbers, although you might have to do a little maintenance – doing this is very helpful when shader compiling fails…
if (!glfwInit ()) {
    fprintf (stderr, "10 ERROR: could not start GLFW3\n");
    return 1;
}

// tell glfw we want the core profile
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

GLFWwindow* window = glfwCreateWindow (640, 480, "Hello Triangle", NULL, NULL);
if (!window) {
    fprintf (stderr, "20 ERROR: could not open window with GLFW3\n");
    glfwTerminate();
    return 1;
}
I like to let GLFW do the heavy lifting and as you can see making a compatible context is no where near as painful as EGL can be!
glfwMakeContextCurrent (window);

if (ogl_LoadFunctions() == ogl_LOAD_FAILED) {
    fprintf (stderr, "30 ERROR: failed to load gl functions\n");
    glfwTerminate();
    return 1;
}
Once we actually have a context created we can go ahead and load up all those missing function pointers.
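For a sense of what ogl_LoadFunctions does for each post-1.1 entry point, here is a toy sketch of the pattern. The real loader asks the driver via glfwGetProcAddress/wglGetProcAddress; this self-contained version substitutes an ordinary lookup table, and all names here are invented for illustration:

```c
#include <stddef.h>
#include <string.h>

typedef int (*gl_func)(int);

/* Stand-in for a real driver entry point. */
static int fake_glGenBuffers(int n) { return n; }

/* Toy replacement for glfwGetProcAddress: name -> function pointer (or NULL). */
static gl_func get_proc_address(const char *name) {
    if (strcmp(name, "glGenBuffers") == 0) return fake_glGenBuffers;
    return NULL;
}

/* A generated loader does this once per post-1.1 entry point. */
static gl_func my_glGenBuffers = NULL;

static int load_gl_functions(void) {
    my_glGenBuffers = get_proc_address("glGenBuffers");
    return my_glGenBuffers != NULL;  /* ogl_LoadFunctions reports failure similarly */
}
```

The generated loader is just this pattern repeated for hundreds of entry points, which is why it must run after a context exists.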
The rest of the code, such as it is, is very simple "ordinary" GL code. Coming from the GLES 2.0 (mobile) arena, the shader is where a fair bit has been changed, and note that you need a vertex array.
Entity Framework was created solely for working with relational data on the full version of .NET. In EF 7, neither of those statements is true.
The platform goal for Entity Framework 7 includes
- .NET Full Framework
- ASP.NET 5
- Windows 10 UAP
- Mac
- Linux
On the provider side, EF 7
- Relational Providers: SQL Server, SQLite, Postgres
- Azure Table Storage
- Redis
- In Memory Provider (for testing)
The top-level experience is the same as in EF 6: you will still be working with DbContext, DbSet, etc. Internally, the core has been rewritten. This means the metadata, change tracking, query pipeline, etc. are all different, but for most applications the developer won't notice.
The core is being changed for a number of reasons. One reason is that the current architecture is very difficult to work with. Even basic things like plugging in a logging framework are far more difficult than necessary. By rewriting the core, confusing APIs and behaviors can finally be removed.
EF is well known for being memory hungry and slow. A major emphasis of the rewrite is to address these concerns. This is important across the board, from small mobile devices with limited battery life to massive cloud servers where you pay for CPU utilization.
Logging
Logging in Entity Framework is based on the ILogger interface from the Microsoft.Owin.Logging namespace. Microsoft’s hope is that this will become the standard interface that all .NET logging frameworks support.
SQL Generation Improvements
Insert and update operations are slightly better in EF 7. Say, for example, you want to apply a discount to four products in a table. When using EF 6, this would require 1+N round-trips to the database: one to load the data and one for each row. In EF 7, save operations are batched so that you only need two round-trips to the database.
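Conceptually, the two round-trips look something like this (illustrative SQL only, not EF's exact output; table and parameter names are made up):

```sql
-- Round-trip 1: load the rows to be discounted
SELECT Id, Price FROM Products WHERE CategoryId = @cat;

-- Round-trip 2: all four updates sent to the server as one batch
UPDATE Products SET Price = @p0 WHERE Id = @id0;
UPDATE Products SET Price = @p1 WHERE Id = @id1;
UPDATE Products SET Price = @p2 WHERE Id = @id2;
UPDATE Products SET Price = @p3 WHERE Id = @id3;
```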
This is still slower than the one round trip that you would get when using native SQL, but it also works for non-relational databases.
Mixing SQL and LINQ
EF 7 supports mixing FromSQL with LINQ expressions. This allows you to access things that EF cannot normally work with such as Table Value Functions or index-hinted tables.
context.FromSql<Customer>("SELECT * FROM Customer WITH (INDEX(IX_Index))").OrderBy(c => c.Name)
This will generate the correct SQL to perform the order by, where, etc. in the database.
EF and Mobile Devices
As mentioned above, one of the goals of EF 7 is to not be limited to just desktop applications. One of the use cases for this is offline mobile devices. The idea is that you should be able to use the same code on your mobile device to locally cache and manipulate data that you would use on your server.
For more information on Entity Framework 7, watch the Channel 9 video Entity Framework 7: Data for Web, Phone, Store, and Desktop.
Community comments
A Logger interface
by li min feng,
hehe, will Microsoft kill log4net, NLog?
<!-- * Set TOPICTITLE = My first Java HIPE task --> | !* ---+ My first Java HIPE task %STARTINCLUDE% This tutorial will teach you the following: 1. How to write a simple Java task using Eclipse. 1. How to add the task to HIPE as a plug-in. %STOPINCLUDE% Requisites: * You know at least the basics of Java and are familiar with concepts such as package, class and superclass. * You have set up Eclipse for HIPE development. To learn how to do so, read the _[[HipeDevGettingStarted][Getting started with HIPE development]]_ tutorial to the end of the _Creating an Eclipse project_ section. ---++ Writing the task in Eclipse *This tutorial has been tested with Eclipse 3.7 "Indigo". Steps may differ for other versions of Eclipse.* The following is a simple Java task that takes a _table dataset_ as input and outputs an array with the averages of each row of the table dataset. For more information about table datasets, see the [[][Scripting and Data Mining]] guide. Start Eclipse. Choose _File --> New --> Java Project_. Enter ==TableAverageTask== in the _Project name_ field. Click _Next_. In the next window, click on the _Libraries_ tab. Click _Add Library_. The _Add Library_ dialogue window appears. Select _HCSS Project Container_ from the list and click _Next_. If you do not see _HCSS Project Container_ among the available libraries, you need to install the _HCSS Project_ plugin as described in the _[[HipeDevGettingStarted][Getting started with HIPE development]]_ tutorial. In the configuration dialogue window for the _HCSS Project Container_ library, enter ==hcss.dp.core== in the _Project_ field, the major version number of your HIPE installation in the _Track_ field (==8.0== for HIPE 8.x, and so on), and click _Fetch latest_ to get the latest build. These settings work for the simple example in this tutorial; if you are developing instrument-specific software, set the _Project_ field as ==hcss.dp.hifi==, ==hcss.dp.pacs== or ==hcss.dp.spire==. 
Click _Finish_, then _Finish_ again in the _New Java Project_ dialogue window. Right click on the project name in the _Package Explorer_ view, and choose _New --> Class_ from the context menu. The _New Java Class_ dialogue window appears. Write ==herschel.ia.tableaverage== in the _Package_ field, and ==TableAverageTask== in the _Name_ field. In the _Superclass_ field, change ==java.lang.Object== to ==herschel.ia.task.Task==. Click _Finish_. Eclipse creates the class and opens the source file with the skeleton code: <verbatim> package herschel.ia.tableaverage; import herschel.ia.task.Task; public class TableAverageTask extends Task { } </verbatim> Complete the source code as follows: <verbatim> package herschel.ia.tableaveragejava; // 1 import herschel.ia.task.Task; // 2 import herschel.ia.task.TaskParameter; import herschel.ia.dataset.TableDataset; import herschel.ia.numeric.Double1d; public class TableAverageJavaTask extends Task { // 3 public TableAverageJavaTask() { super("tableAverageJava"); setDescription("Computes the average of each row of a table dataset"); // 4 TaskParameter parameter = new TaskParameter("table", TableDataset.class); // 5 parameter.setType(TaskParameter.IN); parameter.setMandatory(true); parameter.setDescription("The table of whose rows to compute the average"); //6 addTaskParameter(parameter); parameter = new TaskParameter("average", Double1d.class); // 7 parameter.setType(TaskParameter.OUT); parameter.setDescription("The array of averages of the table's rows"); addTaskParameter(parameter); } public void execute() { // 8 TableDataset table = (TableDataset) getParameter("table").getValue(); if (table == null) { throw (new NullPointerException("Missing table value")); } int columns = table.getColumnCount(); double divider = 1.0 / columns; Double1d average = new Double1d(table.getRowCount()); for (int column = 0; column < columns; column++) { average.add((Double1d) table.getColumn(column).getData()); } setValue("average", 
average.multiply(divider)); } } } </verbatim> Let us examine some lines of the script. 1. This line sets the package as ==herschel.ia.tableaveragejava==. You can set whatever suits your project or organisation. It does not need to be a subpackage of ==herschel.ia==. 1. This line and the following import the needed classes: ==Task== and ==TaskParameter== are needed for any task, while ==TableDataset== and ==Double1d== are needed for the specific computations done in this task. 1. This line declares the ==TableAverageJavaTask== class. The class, like any task, inherits from ==Task==. 1. This line sets the description of the task. It is what you see in the tooltip that appears when you hover on the task name in the _Tasks_ view of HIPE. Every task must have a description. 1. This line defines a _task parameter_. This parameter is called ==table== and is of type ==TableDataset==. The next two lines state that ==table== is an _input_ parameter and is mandatory. 1. This line defines the description of the ==table== parameter. Task parameters, like the task itself, must always have a description. 1. This line defines the output parameter, called ==average==. It is better to avoid vague parameter names such as ==result==. 1. This line defines the ==execute== method. This is the heart of any task: it is the function that does the actual data processing. Save the file. Right click on the project name, and choose _Export_ from the context menu. Choose _JAR File_ from the list and click _Next_. Write a name of your liking in the _JAR file_ field (for example, ==TableAverageJavaTask.jar==) and click _Finish_. Eclipse creates the JAR file. ---++ Turning the task into a HIPE plug-in Create a Jython file named ==plugin.py==. 
Write the following in the file: <verbatim> from herschel.ia.tableaveragejava import TableAverageJavaTask tableAverageJava = TableAverageJavaTask() # 1 toolRegistry = TaskToolRegistry.getInstance() toolRegistry.register(tableAverageJava, [Category.GENERAL, Category.SPECTRUM]) # 2 del(toolRegistry) </verbatim> 1. This line creates an instance of the task. Note that from the task name ==TableAverageJavaTask== we obtained ==tableAverageJava== as instance name. This is a general rule. The task name is made of words with capitalised initials and ends with ==Task==. The instance name starts with a lower case letter and drops the ==Task== part. 1. The task is assigned to the ==GENERAL== and ==SPECTRUM== categories. This is not required, since the task would appear anyway in the _All_ folder of the _Tasks_ view in HIPE. The available categories are ==GENERAL==, ==CUBE==, ==IMAGE==, ==SPECTRUM==, ==HIFI==, ==PACS== and ==SPIRE==. Now create a zip file (or [[%ATTACHURL%/MyJavaAverage_0.1.zip][download]] the one we made for you) containing the ==plugin.py== file and the JAR file previously created by Eclipse, in a ==jars== directory: <verbatim> plugin.py jars/TableAverageJavaTask.jar </verbatim> The zip file name must be made by three parts: * The _plug-in name_, for example ==TableAverageJavaPlugin==. * An _underscore character_. * The plug-in _version number_, which can be any combination of digits separated by dots. For example, ==0.1==, ==1.0.0==, ==2.3.3== and so on. The plug-in is now ready. Installing, running and sharing the plug-in is done in the same way as described in the [[MyFirstJythonTask][My first Jython task]] tutorial. The only difference is that we called the Java task class ==TableAverageJavaTask== instead of ==TableAverageTask==. Correspondingly, the task instance is ==tableAverageJava== instead of ==tableAverage==. 
This allows you to install the plug-ins from this tutorial and from the [[MyFirstJythonTask][My first Jython task]] tutorial without name clashes. ---++ Where to go from here You are now ready to develop more complex tasks and plug-ins. You may find the following resources useful: * The [[WritingTasks][Writing tasks]] page has more details on techniques and best practices to develop HIPE tasks. * The [[DpHipeTools][Adding tools to HIPE]] page has more details on adding tasks and other components to HIPE. * The [[DpHipePluginsDeveloperManual][Plug-in developer manual]] has more details on developing HIPE plug-ins. ---++ Your comments %COMMENT% <!-- * Set ALLOWTOPICCHANGE = Main.RegisteredUsersGroup, Main.TWikiAdminGroup -->
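Outside the HCSS environment, the numeric core of the tutorial task reduces to the following plain-Java sketch, with ordinary arrays standing in for TableDataset and Double1d (the class and method names here are invented for illustration):

```java
public class RowAverage {
    /** Average each row of a 2-D table: accumulate column sums, then scale. */
    public static double[] rowAverages(double[][] table) {
        int rows = table.length;
        int columns = table[0].length;
        double divider = 1.0 / columns;
        double[] average = new double[rows];
        for (int column = 0; column < columns; column++) {
            for (int row = 0; row < rows; row++) {
                average[row] += table[row][column];
            }
        }
        for (int row = 0; row < rows; row++) {
            average[row] *= divider;
        }
        return average;
    }
}
```

This mirrors the execute() method above: sum column by column, then multiply by 1/columns.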
Attachment: MyJavaAverage_0.1.zip (2.6 K, uploaded 2011-10-14 by DavideRizzo): Sample Java HIPE plugin.
Topic revision: r12 - 2011-10-14 - DavideRizzo
Al Danial
Author of Python for MATLAB Development and of cloc.
Languages: Python, C++, C, MATLAB, SQL, Perl, Fortran.
Content Feed
* How can I import python module properly instead of namespace package (answered 8 days ago): "Try assigning the import to a variable: asammdf = py.importlib.import_module('asammdf'); then MDF = asammdf.MDF();"
* How can I translate these Matlab statistics into Python? (answered 18 days ago): "If this 3840 x 2160 matrix is the only data in the file you could do import numpy as np Yd = np.fromfile('file_with_matrix.bin..."
* How to convert a 1x5x5600 cell in MATLAB to a python list that is (1,5,5600) (answered about 2 months ago): "Lists, like cell arrays, may be nested so @J C's request is doable. Here's an example with a 1x3x5 cell: % create the cell arr..."
* list comprehension in MATLAB using Python interface (answered about 2 months ago): "MATLAB's cellfun() and arrayfun() act like Python list comprehensions. You can give them Python functions to act on each item i..."
* Unable to resolve the name 'py.anything' when using Homebrew Python (answered 2 months ago): "I won't have a macOS machine handy for a while so I can't experiment. I imagine the MathWorks tech support would know... unless..."
* pyenv Library property empty. Unable to resolve the name (Linux: Pop OS 22.04, python 3.9, R2022a) (answered 2 months ago): "On my Ubuntu 20.04 system pyenv returns >> pyenv ans = PythonEnvironment with properties: Version: "3.9" Executable: "/usr..."
* Submitted: Python for MATLAB Development, source code for 'Python for MATLAB Development' by Albert Danial (3 months ago, 7 downloads).
* Matlab numpy array: AttributeError: 'array.array' object has no attribute 'fromstring' (answered 3 months ago): "Regarding @Ben 's initial issue, the matlab/python version compatibility table,..."
* Run pickle file in matlab (answered 3 months ago): "This shows how to load a pickle file into MATLAB: pickle = py.importlib.import_module('pickle'); fh = py.open('data.pkl', 'rb'..."
* From Python to Matlab (answered 4 months ago): "I'm willing to help. How about posting your first cut at the Python code and showing where you're stuck?"
* reshape a cell array to be row major (answered 4 months ago): "Your first sentence, "Maybe storing the data in a table will end up being more convenient:" proved to be the key. With that ide..."
* Question: reshape a cell array to be row major (asked 4 months ago, 2 answers): "If I load data with textscan() I'll get back a cell array holding the columns of data. For example fh = fopen('log.txt'); x =..."
* convert a py.str to a matlab string (answered 4 months ago): "For generic conversion of Python variables to MATLAB variables, try py2mat.m, and for the reverse (Python to MATLAB), mat2py.m. ..."
* Convert MATLAB Code to Python Code (.ipynb) (answered 4 months ago, accepted): "#!/usr/bin/env python import numpy as np import matplotlib.pyplot as plt N = 500 M = np.diag(np.ones((N-1,)),1) + np.diag(np..."
* How do I create a 3 dimensional surface from X Y Z points (answered 4 months ago): "Since you're starting in Python you can make the surface plot there too if you want. This is a translation of @Star Strider's m..."
* unable to resolve the name py.function error (answered 4 months ago): "Couple of things to try: 1) change the execution mode to InProcess >> pyenv('ExecutionMode', 'InProcess') 2) open a terminal ..."
* Python function to Matlab (answered 5 months ago): "Looks like monthly_raw_data is a Pandas dataframe. Can you post a small input file (.csv or other) that loads into a representa..."
* Convert numpy ndarray to matlab mlarray in python (answered 5 months ago, accepted): "Looks like you're making the data load unnecessarily complicated. The file actually loads cleanly into a NumPy array: from sci..."
* How to get frequencies of a s parameter object in python (answered 5 months ago): "Let's see what is in your variable sp. Make a small version of it, save it to a .mat file, and attach the .mat file here."
* Convert numpy ndarray to matlab mlarray in python (answered 5 months ago): "Now I'm curious what is in your variable x. Can you make a small version of this data, write it to a .mat file, then attach the..."
* Matlab 2019a Ubuntu 20 Python 3.7.12 "Path argument does not specify a valid executable" (answered 5 months ago): "Does the command pyenv work in 2019a? If yes, what output does it show? 2020b uses pyenv to show what MATLAB thinks the Python..."
* System command and Unix parallelisation (answered 5 months ago): "Independently of MATLAB, can you run run_command_1_para.sh directly from a Unix terminal and get the behavior you want? If that..."
* Convert numpy ndarray to matlab mlarray in python (answered 5 months ago): "Which version of MATLAB? 2020a and newer (I don't have easy access to older versions) should just be able to do >> mx = doubl..."
* Calling python modules gives ModuleNotFoundError. (answered 5 months ago): "Does the MATLAB code change directories before calling the import command? If you interactively invoke the import in a fresh MA..."
* How to transfer Big Data from Python Dict to Matlab (answered 5 months ago): "Alternatively, write your Big Data dictionary directly to a MATLAB .mat file as described here:..."
* convert a Python tuple containing string and numerical data types to Matlab (answered 5 months ago): "The SciPy module has a function, savemat, which can write Python variables to MATLAB .m files. Your tuple can be written to tes..."
Hi all!
I have some problems getting my GTPA010 (MediaTek MTK 3329 GPS 10Hz) running on Arduino Mega Rev. 3. I am using I2C connection (have an adapter for it). I tried the following code, which I found on the web:
#include <Wire.h>
#define I2C_GPS_ADDRESS 0x20
/*************** I2C GPS register definitions *********************************/
#define I2C_GPS_STATUS 0x00 //(Read only)
#define I2C_GPS_GROUND_SPEED 0x07 //GPS ground speed in m/s*100 (uint16_t) (Read Only)
#define I2C_GPS_ALTITUDE 0x09 //GPS altitude in meters (uint16_t) (Read Only)
#define I2C_GPS_TIME 0x0b //UTC Time from GPS in hhmmss.sss * 100 (uint32_t)(unneccesary precision) (Read Only)
#define I2C_GPS_DISTANCE 0x0f //Distance between current pos and internal target location register in meters (uint16_t) (Read Only)
#define I2C_GPS_DIRECTION 0x11 //direction towards interal target location reg from current position (+/- 180 degree) (read Only)
#define I2C_GPS_LOCATION 0x13 //current position (8 bytes, lat and lon, 1 degree = 10 000 000 (read only)
//**************************************************************
// Second byte will be from 'address'+1
unsigned char GPS_1_byte_read(unsigned char address)
{
Wire.beginTransmission(I2C_GPS_ADDRESS);
Wire.send(address);
Wire.endTransmission();
Wire.requestFrom(I2C_GPS_ADDRESS, 1);
while(Wire.available()<1);
Wire.endTransmission();
return Wire.receive();
}
int GPS_2_byte_read(unsigned char address)
{
unsigned char msb, lsb;
Wire.beginTransmission(I2C_GPS_ADDRESS);
Wire.send(address);
Wire.endTransmission();
Wire.requestFrom(I2C_GPS_ADDRESS, 2);
while(Wire.available()<2) ;
msb = Wire.receive();
lsb = Wire.receive();
Wire.endTransmission();
return (int) lsb << 8 | msb;
}
void setup()
{
Wire.begin(); // join i2c bus (address optional for master)
Serial.begin(38400); // start serial for output
Serial.println("Starting:");
}
void loop()
{
Wire.begin();
Serial.println(GPS_1_byte_read(I2C_GPS_STATUS), HEX);
Serial.println(GPS_2_byte_read(I2C_GPS_ALTITUDE), DEC);
Wire.endTransmission();
delay(500);
}
Unfortunately the code does not work as expected. I put in some outputs and found out that the code does not continue after the line while(Wire.available()<1);. Since I am new to the subject, I do not understand what this line is doing and why the code does not continue. Moreover, what value should Serial.begin() have, and what about the definition of the hex addresses like I2C_GPS_STATUS 0x00? Are these addresses correct?
Can someone please help me to get the stuff working? Thank you very much to all of you.
Best regards, Michael
Views: 586
Wire.available() is a call that determines if any bytes are available from I2C. Documentation here:
So the code is basically waiting until the 2 bytes are sent by the slave. Those two bytes apparently don't arrive. Either because you're using the wrong I2C address, the wrong address for the field being read, some connectivity issue, or various other causes.
Serial.begin() specifies the baud rate for the Arduino's serial connection.
What is this adapter you mention? I gather it is a serial to I2C adapter of some sort? In which case I would imagine there's documentation on what I2C address to use and which data addresses correspond to what types of data (altitude, speed, status, etc)
Hi Bot Thoughts!
Thank you very much for your answer. Indeed, I was already thinking that it might be an addressing issue. The adapter I mentioned was sold by DIY Drones in 2010/2011. Unfortunately I can not find an image of it in the internet. I am currently abroad, but will return within this week. As soon as I am back I will post a picture and/or naming of the device and I will try getting the GPS running with other addresses. I found in the meanwhile several other posts were people used 0x40instead of 0x20. I will give this a try......
Best regards, Michael
Hi all,
Finally, I got it fixed. It was a stupid wiring error of mine. Since I bought the GPS two years ago I did not spend much time on thinking about the correct wiring, because I felt confident in what I was doing. Bad thinking..... I am now using serial connection and the GPS_NMEA_test.pde from the ArduPlane repository. It work fine.
Cheers, Michael
5.4: Loops, Strings, and Elections
About This Page
Questions Answered: Could I have some more examples of for loops? What if I put a loop within a loop? How about having multiple obstacles in FlappyBug?
Topics: Practice on the previous chapter’s topics. Additional topics: looping over strings and Ranges; nested for loops.
What Will I Do? Write programs, for the most part.
Rough Estimate of Workload: Four hours.
Points Available: A225.
Related Projects: ForLoops, Election (new), FlappyBug.
Introduction
So far, we’ve used for loops for processing only buffers and vectors. However, as briefly mentioned in Chapter 5.3, you can loop over many sorts of things. Like strings:

A String in a Loop
For example’s sake, let’s say we want to print out a report that enumerates each
character within a string. For example, if the string is
"llama", we want to print
this:
Letter: l
Letter: l
Letter: a
Letter: m
Letter: a
Here is some Scala code that does just that:
for (character <- "llama") {
  println("Letter: " + character)
}
The loop variable character therefore has the type Char.
What if we want to number the outputs? And to report the Unicode values of each character? Like this:
Index 0 stores l, which is character #108 in Unicode.
Index 1 stores l, which is character #108 in Unicode.
Index 2 stores a, which is character #97 in Unicode.
Index 3 stores m, which is character #109 in Unicode.
Index 4 stores a, which is character #97 in Unicode.
Here’s an implementation:
var index = 0
for (character <- "llama") {
  println("Index " + index + " stores " + character + ", which is character #" + character.toInt + " in Unicode.")
  index += 1
}
The Char class gives us toInt, a method that returns the character’s Unicode number.
The above program, like many others in this chapter, is available in package o1.looptest within the ForLoops project.
Tiny programming task: upper case only
Find the app object o1.looptest.Task2 in the ForLoops project. Read the comments to find out what the program is supposed to do. Write the program as requested.
A+ presents the exercise submission form here.
A wordier but short task: DNA properties
The DNA of our bodies contains four important (nucleo)bases: guanine (G), adenine (A), cytosine (C), and thymine (T). We can describe a strand of DNA by listing its bases in order, as in GATCACAGGT. Each organism has its specific DNA, but the differences between individual organisms are minor, and even the DNA from organisms of different species can be highly similar.
Certain properties of DNA can be captured by computing its GC content: the percentage of guanine and cytosine within it. For instance, the DNA slice ATGGACAT has a GC content of 37.5%, since three of the eight bases are either guanine or cytosine. GC content is a meaningful concept in molecular biology.
Fetch Task3. You can try running the program. Here is what it looks like in the text console:
Enter a species file name (without .mtdna extension): human
The GC content is 80.13914555929992%.
Given a species name by the user, the program appears to compute the GC content of that species’s DNA. The program accepts the inputs human, chicken, chimpanzee, mouse, opossum, pufferfish, yeast, and test; the names correspond to files that come with the project and store DNA sequences from assorted species. (More specifically, they are mitochondrial DNA or mtDNA sequences.)
The program doesn’t actually work, however. For a human, the output should be roughly 44%, not 80%.
The function gcContent in Task3.scala is incorrect. Your task is to study the function (replicated below) and make a small change so that the program works as intended.
The function should return the GC content of the given string as a Double; for example, given the string "ATGGACAT", it should return 37.5. In this toy program, we’ll simply ignore any characters other than the four that stand for the nucleobases, so the input "ATG G ACATXXYZYZ" should also produce 37.5.
def gcContent(dna: String) = {
  var gcCount = 0
  var totalCount = 0
  for (base <- dna) {
    if (base == 'G' || base == 'C') {
      gcCount += 1
    } else if (base == 'A' || base == 'T') {
      totalCount += 1
    }
  }
  100.0 * gcCount / totalCount
}
Char objects have their own literal notation, just like numbers, Booleans, and Strings do. Notice that unlike a String literal, a Char literal goes in single quotation marks.
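For a quick illustration of the literal notation (a sketch of our own, not code from the project files):

```scala
val base = 'G'                  // a Char literal goes in single quotation marks
println(base == 'G')            // prints: true
println(base.toInt)             // prints: 71, the Unicode number of G
println(base.toString == "G")   // a String literal is a different type; prints: true
```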
If you want to try the program on a DNA sequence of your own choosing, you can edit test.mtdna in the folder mkDNA_examples and enter test when the program prompts you for a species file.
A+ presents the exercise submission form here.
Looping over a Range of Numbers: to, until, and indices
Chapter 4.5 mentioned that Int objects have a method named to, which we usually call using operator notation. The method returns a reference to a Range:
5 to 15
res0: Range.Inclusive = Range 5 to 15
A Range object represents a sequence of numbers. We can use a for loop to iterate over that sequence:
val upToAThousand = 1 to 1000
for (number <- upToAThousand) {
  println(number)
}
The loop variable number serves as a stepper that receives each positive integer in turn: 1, 2, 3, ..., 1000. The program prints out these numbers in order.
Here is a shorter way to express the same thing:
for (number <- 1 to 1000) { println(number) }
A similar command works for covering the indices of a collection such as a String.
This program produces a familiar output:
val myString = "llama"

for (index <- 0 to myString.length - 1) {
  val character = myString(index)
  println("Index " + index + " stores " + character + ", which is character #" + character.toInt + " in Unicode.")
}
Because the range is created with to, you need to subtract one from myString.length so that you don’t go past the last element.
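The until method from this section’s title works like to but leaves out the upper bound itself, which pairs naturally with zero-based indices. The program above could equally be written like this (a sketch):

```scala
for (index <- 0 until myString.length) {   // covers 0, 1, ..., myString.length - 1
  val character = myString(index)
  println("Index " + index + " stores " + character + ", which is character #" + character.toInt + " in Unicode.")
}
```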
Practice tasks: to, until, etc.
The ForLoops project contains some more app objects for you to edit: Task4.scala, Task5.scala, Task6.scala, Task7.scala. (You can save Task8 for later.)
Edit each file so that it meets the requirements set out in the comments. You can work on each file separately; the programs don’t depend on each other.
Test your programs before you submit them for grading.
A+ presents the exercise submission form here.
A+ presents the exercise submission form here.
A+ presents the exercise submission form here.
A+ presents the exercise submission form here.
Convenient indices
The indices method is a convenient alternative to until and to when you want to loop over the indices of a string or some other collection.
val myString = "llama"
myString: String = llama

for (index <- myString.indices) {
  println("Index " + index + " stores " + myString(index))
}
Index 0 stores l
Index 1 stores l
Index 2 stores a
Index 3 stores m
Index 4 stores a
That works because indices returns exactly the sort of Range object that we could have created with until.
myString.indices
res1: Range = Range 0 until 5
Assignment: Election
Task description
General instructions and hints
- District resembles AuctionHouse from Chapter 5.3. Use what you learned in that chapter.
- You can use a for loop in many of the methods.
- Look at AuctionHouse for inspiration.
- Use the app object ElectionTest. It calls several methods in class District. Feel free to edit the app object as you see fit so that it covers more test cases.
- You can’t run ElectionTest as given before you have some sort of implementation for each of the requested methods in District.
- What you can do is “comment out” parts of ElectionTest and test the parts of District that you already wrote. Another option is to write a quick “dummy implementation” for the missing methods (e.g., by using ??? as in Chapter 3.6.)
- We recommend that you tackle the assignment in the following order:
Recommended steps
- Read the documentation. Notice that some of the methods in class District will be implemented in a much later assignment (in Chapter 9.2). Focus on the rest of the methods, which are relevant now.
- Study the given Scala code.
- Start working on District as follows. Use ElectionTest repeatedly to test your solution as you go.
  - Implement toString.
  - Implement printCandidates.
    - For inspiration, look at nextDay in AuctionHouse.
  - Implement candidatesFrom.
    - For inspiration, look at purchasesOf in AuctionHouse.
    - Note that the return type is Vector[Candidate]. You can use a buffer to collect the right candidates, but you need to then use toVector to produce a vector.
  - Implement topCandidate.
    - For inspiration, look at priciest in AuctionHouse.
    - Can you use the vector’s head and tail methods so that topCandidate doesn’t needlessly compare the first element with itself? (This is not required.)
  - Implement the totalVotes methods.
    - For inspiration, look at totalPrice and numberOfOpenItems in AuctionHouse.
This is voluntary but highly recommended:
Did you get the two totalVotes methods working? Excellent! But are they very similar to each other? Do they have multiple identical lines of code?
Duplicate code isn’t pretty. It can also make it harder to modify a program and invites bugs.
A good way to reduce duplication is to create an auxiliary method that takes care of what the other methods have in common. Here, the common part is taking a vector of candidates and summing up those candidates’ votes; the totalVotes methods both need to do that, and differ from each other only in which candidates are included.

Here’s an outline for the auxiliary method:
private def countVotes(candidates: Vector[Candidate]) = {
  // Sum the votes in the given vector and return the result.
}
Since the method is meant for District’s internal use, it makes sense that it’s private.
Can you define countVotes and implement the two totalVotes methods so that they call countVotes and pass in the appropriate vector as a parameter?
A repeating pattern
A pattern repeats across the methods of class AuctionHouse and presumably also many of the methods you wrote in class District:
- We initialize a variable that we’ll use to accumulate the method’s return value.
- We then loop over a collection and perform an operation on each element:
for (element <- collection) { ... }
- Finally, we return the value of the result variable, which our loop has updated.
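As an illustrative sketch of the pattern (the names items and price here are made up for the example; they aren’t from the actual project classes):

```scala
def totalPrice = {
  var priceSoFar = 0             // 1. initialize a result variable
  for (item <- this.items) {     // 2. loop over a collection...
    priceSoFar += item.price     //    ...performing an operation on each element
  }
  priceSoFar                     // 3. return the accumulated value
}
```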
Since many of our methods follow this pattern, their code looks rather similar. Some of the methods contain identical lines of code.
This is a kind of duplication, too, isn’t it? Could we avoid it?
Yes. By the time we get to Chapter 6.3, you’ll have learned how to do that very elegantly.
Submission form
A+ presents the exercise submission form here.
Assignment: FlappyBug (Part 15 of 16: More Obstacles)
Task description
So far, FlappyBug has had just a single obstacle. Edit the game so that there are multiple obstacles. The obstacles will be represented by separate instances of class Obstacle and stored in a Vector. Each obstacle behaves the same way as the others, but they have different sizes and locations.
Here’s what you need to do:
- Add the obstacles to the game: Remove the variable obstacle of class Game and the single 70-pixel obstacle it refers to. Replace them with a variable named obstacles that refers to a vector with three obstacles. The first obstacle should have a radius of 70 pixels, the second one a radius of 30 pixels, and the last one a radius of 20 pixels.
- Make each obstacle move: edit the timePasses method in class Game so that it advances each of the obstacles stored in the vector.
  - Hint: Use a for loop. A simple one will do.
- Update the game-over condition: Edit the isLost method in class Game so that it returns true if the bug is out of bounds or if any of the obstacles in the vector touches the bug.
  - Hint: This step is a bit more complicated. One solution is to use a var of type Boolean to track whether your loop has already found an obstacle that touches the bug. Make the variable a local variable; define it within the method but before the loop.
- Make the obstacles visible: edit the makePic method in the GUI so that all the obstacles get placed against the background image. The bug image should be added last, as before.
Submission form
A+ presents the exercise submission form here.
New role: the (one-way) flag
The hint for isLost suggested that you use a Boolean variable in a particular way. This is an example of a common way to use a variable — a role — that we haven’t discussed before.
In programmers’ parlance, a flag (lippu) is a data item that is used to indicate whether a particular situation has occurred: figuratively, we “raise the flag” to indicate that it has.
In isLost, the flag variable is “raised” (i.e., gets the value true) when or if it turns out that an obstacle is being touched. To be more precise, the variable is a one-way flag (yksisuuntainen lippu): after its value changes once, it never changes back.
A flag has only two different states, which is why flags are often Boolean variables.
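For instance, a one-way flag that records whether a vector of numbers contains any negatives might look like this (an illustrative sketch of the role; numbers is an assumed vector, and this isn’t code from the assignment):

```scala
var negativeFound = false      // the flag starts out “lowered”
for (number <- numbers) {
  if (number < 0) {
    negativeFound = true       // raise the flag; it never changes back
  }
}
println(negativeFound)
```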
Worried about efficiency?
As soon as isLost works out that there is even a single obstacle that touches the bug, it knows that it needs to return true. Once that’s established, the method wouldn’t actually need to look at any of the other obstacles. However, if you implement the method as we suggested, it always checks every obstacle in the vector.
Given that the game has just three obstacles, any computer can check all of them in the barest of instants. But what if we were operating on a vast collection of data and needed to check whether it contains at least one element that matches a specific condition? Ideally, we’d like the computer to stop checking as soon as it knows the result.
There are ways to do that. We’ll return to the matter in Chapters 6.3 and 7.1 at which point we’ll have a heftier toolkit.
Nested Loops
You can put a loop inside another. What happens is that the entire inner loop gets executed each time the outer loop’s body runs.
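Such nesting can be sketched like this: an outer loop with three cycles whose body contains an inner loop with two cycles (the names and printouts are illustrative):

```scala
for (outerCycle <- 1 to 3) {
  println("Outer cycle " + outerCycle + " begins.")
  for (innerCycle <- 1 to 2) {
    println("Inner cycle: " + innerCycle)
  }
}
```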
In the example above, the inner loop (with two cycles) runs multiple times whereas the outer loop (with three) runs only once. The innermost println is executed six times in total.
Loop-reading practice
A mini-assignment on nested loops
Return to o1.looptest and do Task8.scala.
A+ presents the exercise submission form here.
Scopes
By now, you have seen many examples of nested code structures, but we haven’t properly discussed how nesting impacts on variable definitions and the like.
Each program component — variable, method, class, and singleton object — has a scope (käyttöalue; skooppi) within the program: the component can be accessed only from certain parts of the program code. This scope depends on the context where the component is defined; you can further adjust it with access modifiers such as private.
An attempt to use a variable or method outside its scope results in a compile-time error.
Let’s look at the scope of class members first, then consider local variables.
Class members and scope
The code of a ChristmasTree class is given below. You don’t have to understand how the class works (even though it is possible to work that out given what we have covered in O1). This class is here merely as an example of nesting and scopes, and we’ll go through the code only superficially, from that perspective.
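A sketch of what such a class might look like, consistent with the members discussed below (the exact implementation details here are illustrative guesses rather than the course’s original listing):

```scala
class ChristmasTree(val height: Int) {

  private var widestPoint = 2 * this.height - 1            // width of the lowest row, in characters

  private def width(rowNumber: Int) = 2 * rowNumber - 1    // width of the given row

  private def line(rowNumber: Int) = {
    val spaces = " " * ((this.widestPoint - this.width(rowNumber)) / 2)
    spaces + "*" * this.width(rowNumber)
  }

  override def toString = {
    var picture = ""
    for (rowNumber <- 1 to this.height) {
      picture += this.line(rowNumber) + "\n"
    }
    picture
  }
}
```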
- The scope of a public instance variable such as height encompasses the entire class. Moreover, the variable is accessible from outside the class, too, as long as we have an instance of the class available: myObject.height. Similarly, we can call a public method such as toString anywhere within the class or outside of it. Instance variables and methods are public unless otherwise specified.
- The scope of a private instance variable such as widestPoint, or a private method such as width or line, is limited to the enclosing class (and any companion object the class may have; Chapter 5.1).
Local variables and scope
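A method with local variables like the ones discussed below could be sketched as follows (again an illustrative reconstruction that uses the names mentioned in the text):

```scala
def printTriangle(character: Char, howMany: Int) = {
  var spaceAtLeft = howMany - 1
  for (lineNbr <- 1 to howMany) {
    val wholeLine = " " * spaceAtLeft + character.toString * (2 * lineNbr - 1)
    println(wholeLine)
    spaceAtLeft -= 1
  }
}
```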
The scopes of the program’s parameters and local variables are as follows:
- A parameter such as character or howMany is defined within the entire method. You can use it anywhere within the method.
- spaceAtLeft can be used anywhere between its definition and the end of the method.
- wholeLine.
- lineNbr, for example, is defined only within the for loop.
Another example
This method further highlights the scopes of local variables (but is otherwise meaningless):
def myMethod(myParam: Int) = {
  println("Hello")
  var outer = myParam * 2
  val myVal = 1
  for (middle <- 1 to 3) {
    println("Hi")
    val myVal = 10
    outer += middle * 2 + myVal
    if (middle < outer + myParam) {
      val inner = readLine("Enter a number: ").toInt
      outer += inner + myVal
    }
  }
  outer + myVal
}
- The scope of myParam is the entire method.
- The scope of outer is the rest of the method from the variable definition onwards.
- middle is available only within the loop.
- inner is available only within the if.
What about myVal? There are two of them, which is legal in nested structures:
- The scope of the inner myVal runs from the variable definition to the end of the loop.
- The outer myVal would be available to the rest of the method. However: the inner myVal shadows the outer variable of the same name and effectively prevents it from being used within the loop. Therefore, within the loop, any use of myVal refers to the inner variable, whereas the method’s final expression outer + myVal refers to the outer one.
The example illustrates a general principle: a local variable’s scope is limited to the innermost block (lohko) within which the variable definition appears. Informally, we can define a block as a section of program code that contains a sequence of commands, such as a for loop or a branch of an if. Blocks can be nested in other blocks; in well-written code, indentation highlights the program’s blocks.
Choosing a scope for a variable
A rule of thumb: Choose the narrowest scope that works.
Unless you have a good reason to make your variable an instance variable, make it a local variable instead. Define that local variable in the innermost block that works.
By keeping scopes narrow, you make your code easier to read and modify. If a variable has a narrow scope, it’s easier to tell which parts of code depend on it and you’re less likely to introduce unnecessary dependencies between parts of your program. In some cases, narrow scoping can reduce memory usage or even speed up the program somewhat.
Beginner programmers commonly overuse instance variables where local variables would do.
For that reason, here’s one more example of scopes: the VendingMachine class from Chapter 3.4.
class VendingMachine(var bottlePrice: Int, private var bottleCount: Int) {

  private var earnedCash = 0
  private var insertedCash = 0

  def sellBottle() = {
    if (this.isSoldOut || !this.enoughMoneyInserted) {
      None
    } else {
      this.earnedCash = this.earnedCash + this.bottlePrice
      this.bottleCount = this.bottleCount - 1
      val changeGiven = this.insertedCash - this.bottlePrice
      this.insertedCash = 0
      Some(changeGiven)
    }
  }

  def addBottles(newBottles: Int) = {
    this.bottleCount = this.bottleCount + newBottles
  }

  // ...
}
- The money the machine has earned is part of the state of each VendingMachine object. This value should be stored even while none of the machine’s methods is running. Therefore, it makes sense to define earnedCash as an instance variable. The same goes for this class’s other instance variables.
- sellBottle temporarily needs a variable for storing the amount of change. This is an intermediate result associated with a single invocation of the method; it’s not a persistent part of the machine’s state nor is it needed by any of the other methods. It therefore makes sense to use a local variable.
To summarize: prefer local variables to instance variables unless one of the following applies.
- The variable obviously represents information that defines an object (e.g., the name of a person; the courses a student is enrolled in).
- The variable stores data that needs to be stored even while none of the object’s methods is running. Or:
- There is some other justification for the instance variable, such as a specific trick to optimize efficiency. (We won’t go into that in O1.)
Summary of Key Points
- Vectors and buffers aren’t the only things you can loop through. You can use for on strings and ranges of numbers, for instance.
- You can nest a loop within another. If you do, the entire inner loop can be executed multiple times as part of the outer loop’s body.
- Each variable and method has a scope: it’s accessible from some parts of the program only.
- It’s usually unwise to make the scope of a variable or method wider than necessary.
- The block structure of a method impacts on the scope of local variables.
- Links to the glossary: loop, for loop, iteration; collection, string; scope, block; one-way.
Char (Chapter 4.5) — and you can use it in a for loop as shown.
public int mystery(int k) {
   if (k == 1)
      return 0;
   else
      return (1 + mystery(k/2));
}

The answer is 4, but I really don't understand how....anybody care to explain why this is?
Try doing some debugging. Add some println statements to print out the values of variables as they are used. The printout will show you what the code is doing.
If you don't understand my answer, don't ignore it, ask a question.
Recursion has two important parts:
1. An "exit case" - where a condition is reached such that no more recursive calls are made. In your example, k==1 is the exit case.
2. The recursive call - where the recursive method calls itself. In your example, mystery(k/2)
An important thing to note here is integer division. When you divide two integers in Java, any fractional remainder is ignored. So: 4/2=2, 5/2=2, 6/2=3, 7/2=3, ect.
The numbers are not rounded or anything. The decimals are just simply tossed away.
When you call a method in Java, the method does not finish until the method is exited (with a return statement, if a return value is required).
When you call: return (1 + mystery(k/2);, the method doesn't "finish" until the value of (1 + mystery(k/2)) is finished calculating. Since the method calls itself, the "instance" of the method which called mystery(k/2) has to wait until mystery(k/2) is finished evaluating.
So, you can imagine the recursive calculations unraveling like this:
int value = mystery(9);
int value = (1 + mystery(9/2)) ** 9/2 = 4
int value = (1 + (1 + mystery(4/2)) ) ** 4/2 = 2
int value = (1 + (1 + (1 + mystery(2/2)) ) ) ** 2/2 = 1
int value = (1 + (1 + (1 + (1 + (0)) ) ) )
int value = (1 + (1 + (1 + (1)) ) )
int value = (1 + (1 + (2) ) )
int value = (1 + (3) )
int value =: | http://www.javaprogrammingforums.com/object-oriented-programming/37123-confused-recursive-method-example.html | CC-MAIN-2015-18 | refinedweb | 354 | 72.46 |
Important: Please read the Qt Code of Conduct -
[SOLVED] Changing model property of ListView during animation
Hi,
I am using the PropertyAction method to change the value of my "model" property for a ListView that I am animating. Does not work. Any idea why?
Could you please provide us your code so we can check where the problem could be?
yep, sorry .. I just thought maybe not being able to change models during an animation might be normal behavior. I'll isolate the issue and post it here when I get in.
thnx
Im not sure if it has to work or it hasn't to work, but okay, isolate the problem and post us your result when you've got it.
Hi,
Okay, here is a simple example of the behavior. When I set the "model" property of the listView between the two NumberAnimations I am hoping the list contents will change to those from model2. However, when I hit the "up arrow" key there are no errors thrown and the list is not visible.
I feel like I am using PropertyAction incorrectly, but I don't know what I am doing wrong. I tried putting model2 in quotes but that did not work either.
Thanks
@
import QtQuick 2.0
import QtQuick.Window 2.1
Rectangle {
id: mainView
width: Screen.desktopAvailableWidth
height: Screen.desktopAvailableHeight
color: "black"
ListModel { id: model1 ListElement { name: "Apple" } ListElement { name: "Orange" } ListElement { name: "Banana" } } ListModel { id: model2 ListElement { name: "Avacado" } ListElement { name: "Tomato" } ListElement { name: "Onion" } } Component { id: listDelegate Column { Text { color: "white"; text: name; font.pointSize: 36 } } } ListView { id: listView spacing: 50 anchors.fill: parent model: model1 delegate: listDelegate width: Screen.desktopAvailableWidth - 50; height: Screen.desktopAvailableHeight; SequentialAnimation { id: seqAnim NumberAnimation { target: listView; property: "opacity"; to: 0; duration: 1000 } PropertyAction { target: listView; properties: "model"; value: model2 } NumberAnimation { target: listView; property: "opacity"; to: 1; duration: 1000 } } focus: true Keys.onUpPressed: { seqAnim.running = true; } }
}
@
Hey,
im sorry but i can't help you yet because i didn't work that much with animations and transitions. However, you could do it by setting @listView.model = model2@
manually, but thats not a good solution. Maybe i'll find a solution later or tomorrow.
Hi,
I removed PropertyAction line and used this instead ..
@ScriptAction { script: listView.model = model2; }@
That seems to work!
thnx | https://forum.qt.io/topic/37217/solved-changing-model-property-of-listview-during-animation | CC-MAIN-2021-04 | refinedweb | 383 | 58.28 |
SAP Solution Manager Interview Questions
SAP Solution Manager Interview Questions
Q..
Q.
Q. Explain what is SAP solution manager diagnostics?
SAP solution manager diagnostics are a group of tools to monitor and analyze SAP systems. The main tools are workload analysis, exception analysis, trace analysis and change analysis.
Q. Mention the benefits of SAP Solution Manager?
Benefits of SAP Manager Solution includes
- Automated configuration tracking
- Easy Integration
- Faster ROI
- Reduced administration effort
- Improved patch and upgrade management
- Automated Alerts
- Lowering cost
- Centralized management
- Automated Alerts
Q. Mention what are the features of Change Request Management?
Change request management features include
- Search and Monitoring
- Change documentation
- Manage project phases
- Request for Change Scope
- Enhanced Approval Process
- Transport Management
- Test Management
Q.?
Q. Explain how SAP Solution Manager helps in testing?
SAP solution manager helps in speeding up of test preparation and execution. It gives a single point of access to the complete system landscape and allows the centralized storage of testing materials and test results to support cross-component testing.
Q. List out the features of the business blueprint?
The features of business blueprint includes
- BluePrint Structure
- Business Process Group
- Associated Items
- Business Scenarios
- Blueprint document
Q. Mention what key approaches are supported by the SAP Solution Manager in the implementation phase?
Process oriented implementation approach is supported by the SAP solution manager in the implementation phase.
Q. Mention the transaction code for project administration in SAP Solution Manager?
For project administration, the transaction code is SOLAR_PROJECT_ADMIN.
Q. What is the transaction code for Business Blueprint in SAP Solution Manager?
For SAP Solution Manager, the t-code for Business Blueprint is Solar01.
Q.
Q. Which common usage scenario is missing from the list of usage scenarios below? Implement SAP solution, Monitor SAP solutions, Manage Service Desk, Link to SAP Services, Upgrade SAP solutions
Manage change Requests
Q. Identify the four main types of roadmaps reviewed.
- Global template roadmap
- Upgrade roadmap            Â
- Solution management roadmap
- Implementation roadmap   Â
Q. What tasks, contents, and importance does the Data Dictionary have in the R/3 system
- It is needed for developing R/3 software
- It is needed to run R/3 applications
- It contains, among other things, the R/3 Data Model attributes
- It contains, among other things, the rules for checking entered data
Q. What are the benefits of the testing features
- Reduce time for test preparation and execution
- Central storage of testing material and test results
- Re-use of existing testing material.
- Single point of access to complete system landscape
Q. Finding IBase and Instance by order GUID
How can I find IBase and Instance by order GUID?
A: Try to use this chain:
CRMD_LINK-GUID_HI=CRMD_ORDERADM_H-GUID, where
CRMD_LINK-OBJTYPE_SET=29
CRMD_SRV_OSSET-GUID_SET = С?RMD_LINK-GUID_SET
CRMD_SRV_REFOBJ-GUID_REF=CRMD_SRV_OSSET-GUID
IBIN-IN_GUID=CRMD_SRV_REFOBJ-IB_COMP_REF_GUID=IBINTX-
IN_GUID
IBase is in IBIN-IBASE
The instance is in IBIN-INSTANCE, while the description is in IBINTX-DESCR.
Q. How to Install CCMS AGENT in SOLMAN 4.0
How do I install CCMS Agent in solution manager 4.0?
A: You can do the following steps:
- Go to the working directory for sapccm4x (in satellite system); Create folder called ‘sapccm4x’ in directory /usr/sap/<SID>/<INSTANCE_NAME>/log
- Copy sapccm4x to exe directory (in satellite system);
- Create CSMREG User in CEN system RZ21 –> Technical Infrastructure –> Configure Central System –> Create CSMREG User;
- Create the CSMCONF Start File — in CEN system RZ21 –>Technical Infrastructure –> Configure Central System -> Create CSMCONF StartFile for Agents download and upload it to agent’s working directory (satellite system);
- Register Dialog (from sapccm4x directory) (in satellite system) sapccm4x –R pf=<profile path>;
- Dialog-Free Registration of CCMS Agents (in satellite system) sapccm4x –r -f <file name> pf=<profile path> <file name > is the CSMS Conf file. [do 5 or 6];
- After registering you have to start the agent sapccm4x –DCCMS pf=<profile path> (in satellite system);
Note: Make a copy of the configuration file before de-registering because this file will be deleted afterwards. You have to create it again or do the dialog agent registration when you register again.
Q. Linking SLFN to ZDCR
When I try to trigger ZDCR from a service desk message via “Action-> Create Change Document”, it creates SDCR instead.
How can I configure SLFN to trigger ZDCR?
A: Try this:
Use transaction CRMC_ACTION_DEF, find Action Profile AI_SDK_STANDARD (if you use a standard one).
Then in tree, go to Action Definition, find your action, select it, and in tree choose Processing Types.
There you will see method CRM_DNO_ORDER – change. Processing Parameters needs to have appropriate values.
Here’s what you have to do:
1) Go to SPRO, and follow this path:
SAP Solution Manager > Configuration > Scenario-Specific Settings > Change Request Management > Extended Configuration > Change Transaction > Transaction Types > Make Settings for Change Transaction Types
2) Click on Make Settings for Change Transaction Types, select SLFN and double-click on Copy Control Rules.
3) Change SDCR to your ZDCR.
This way, when you select “create a change document action” from the service desk message, it will create your own change request.
Q. Service Desk Smartform question
How can I include the LONG TEXT when sending mail to the user, thus requiring similar long text code?
Currently the form I have only lists the short description when I send the email via action profile. E.g.;
Short Description
&ORDERADM_H-DESCRIPTION&
A: Take a look at the SAP Smartform CRM_SLFN_ORDER_SERVICE_01
Go to the section Next Page –> Main Window –> TEXT_HEADER_LOOP
You can use this procedure/coding to include the long texts of a support message into your Smartform.
The long text only includes the system information where as the full description of the message is not system details.
All the standard templates provide the system details only.
Q. No users are shown for Service Desk functionality in SOLMAN 4.0
I would like to set up the business partners for Service Desk in SAP Solution Manager.
If I choose the solution and execute “Edit –> Create Business partner” for the SAP R/3 4.7, I get the warning “no users are shown under the system”.
Meanwhile, for SAP SRM 5.0, I get only one user: (SOLMAN<SID><CLNT>). I set the data selection to blank.
I also copied the user account of SOLMAN<SID><CLNT> to the user test with no success. I have assigned the Object S_RFC_ and S_USER_GRP. The users also have the profile SAP_ALL.
How can this problem be solved?
A: Assign the role “SAP_SM_S_USER_GRP” to the user with the RFC_READ connection, and refresh the last alert in DSWP.
Q. SolMan and SLD
I have installed SolMan 4.0 and SLD.
Is it possible to configure SMSY without previously configuring SLD?
A: Yes, technically it is possible to configure SolMan without an SLD.
- You can have separate SLD islands used for different products.
- You can even have 2 central SLDs using the same SLD Bridge.
- You can also have as many SLD as you want for this will increase administration.
SLD usage for Solution Manager is not as critical as usage for XI because SMSY in solution manager can obtain system data via STMS and you can also add data manually. It normally only benefits Solution Manager to connect to an existing SLD which already has been populated otherwise you would have to setup SLD only to populate SMSY which can be populated directly without SLD.
If SLD does not already exist with systems for other clients, the time advantage of using it with Solution Manager is removed.
You can connect Solution Manager as a client or a data supplier to any SLD, anytime after installation and is not a problem.
Just change SLDAPICUST destinations and RZ70 settings. Also check the settings in SMSY_Setup and remove the write back option in the Expert Configuration for SLD.
Q. System/IBase for DQ0 to be added in Solution Manager SM0-010
How can System/IBase for DQ0 to be added in Solution Manager SM0-010?
A: If it is an ABAP based application, check if it has already been created in transaction IB52. If yes, you might only need to go to transaction DSWP (solution_manager), go to the menu, select edit, and in the drop down menu you should have “Initial Data Transfer for IBase”.
If it is not there, you need first to check:
- RFC connections maintained for DCQ in SMSY
- Logical Component created in SMSY for DCQ
- Assignment to Logical Components in SMSY for DCQ
- DCQ added to the Solution SMSY
Then check again in IB52. IB51 is used for creating the entries manually.
If the entries are now in IB52, you need to execute the “Initial Data Transfer for IBase” again.
Q. Message exchanging by using Support Desk
Is it possible for a user to exchange messages by using only R/3?
A: You can continue using simple messaging using SBWP. Business Workplace is in your R/3.
Q. Generate Key for ERP Installation
How to generate key to use it for ERP installation?
A: After executing Tcode SMSY in Solution Manager System, you need to do the following steps:
- Create a system by right clicking on System entry and select Create new system.
- Enter the System Name i.e., SID (3 chars)
- Product = SAP ECC (select from the list)
- Product Version= ECC 5.0 (select from the list)
- Save the entries.
- Select Menu Item “System—>Other Configuration” and enter the SID which you have created earlier.
- Enter the Server Name (hostname)
- Finally click on Generate “Installation/Upgrade Key Button”
The system generates a key. Copy that key and paste it in the SAPINST screen when it prompts for SolMan Key.
Using SMSY T-code, we can create the solution manager key.
Q. Problem with copy transaction for customer namespace
I have a problem when copying transaction SLFN for customer namespace (ZLFN). ABA item details appear in t_code Crm_dno_monitor and ABA overview in transaction data disappear.
Do you have any suggestions?
A: To solve this problem follow these steps:
Execute CRMV_SSC.
- In sub screen ” Copy\Delete Screen Control data “, select transaction type SLFN and UI method as ‘ ORDER ‘ and then click on button ” COPY “.
- There you will find three sub screens.
- On the subsequent screen, there will be two sub screens:
- Template
- Object to be created
- On the sub screen ‘Object to be created’:
Enter the transaction type in customer namespace (ZLFN), and in the screen profile field put value ‘SRV_SLFN_1’ and then click on “Start Copy Transaction “.
This screen professional value is the same as that on screen profile field on the template sub screen.
Q. E2E Exception Analysis
I have configured SMD (Solution Manager Diagnostics) on SM 4.0 SP12 and I also have setup IntroScope agent and EP agent successfully.
When trying to run the program E2E Exception Analysis I get the message:
“This application is not yet configured. Please refer to the E2E post-installation steps”.
Unfortunately I cannot find any documentation about these post installation steps.
How can I resolve this?
A: E2E functionalities are very difficult to setup manually. In SPS12, you should order an E2E Diagnostics starter pack.
General availability of E2E tools is SPS13 (automatic setup).
Q. Checking System availability through Solution Manager
We are having a scenario in which we want to check System Availability of all the satellite systems that are connected to Solution Manager.
In this scenario, if the a system goes down, a mail is triggered – just as in CCMS where it is setup so when a threshold is crossed, a mail is triggered.
Can the system monitoring functionality or another functionality be used, or do we need a zee development for the same?
A: You can use CCMS in combination with CCMSPING and/or GRMG.
This will require the installation of CCMSPING on one of your servers.
Once you have CCMSPING installed, you can configure it at Transaction RZ21 -> Technical Infrastructure -> Availability Monitoring in Solution Manager.
Q. LIS or SLD
We have some development systems that are not part of our transport management domain.
Can the same Solution Manager box monitor those systems as well? If so, do they exist in the SLD, the LIS, or both? If both, where are they created first?
A: Yes, the systems can be monitored via Solution Manager. It is not a pre-requisite for monitoring that you have the systems in a particular Transport Domain.
You can choose either one of them. You can create the systems in SLD and then import the system data from SLD to Solution Manager, or you can create the systems directly in Solution Manager and then write back the data to SLD.
Q. BUP003 missing in tcode BP
We deployed SP 12 for SolMan 4.0. Now that I’m trying to manually add new business partners’ type employee to the system, I found out that BUP003 = Employee is no longer in the selection list.
I checked BUSD and the partner type is there but as I said it no longer shows up on the list when creating persons to BP.
A: If employees are replicated from HR, role employee is automatically created and cannot be maintained locally. This is the reason why if HRALX PBPHR is ON, you cannot see BUP003 in transaction BP. Change it to OFF or CREATE, and you will be able to do it.
If you are not replicating employees from an HR system, then setting HRALX PBPHR should be OFF or CREATE. Then you will be able to maintain role BUP003 Employee in transaction BP.
Q. Creating default field values using Notif_Create
Is it possible to default to a single IBASE system, including priority and category, using notif_create?
A: This can be accomplished with report RDSWP_NOTIF_CREATE_CUSTOMIZE_S.
Q. Action Profile
Can anyone explain to me in detail about action profiles and action? How do I create it or configure it?
A: For the most up-to-date information on using actions with Alert Management, see the release note titled “Using Actions to Trigger Alerts”.
All maximal allowed actions are defined for a transaction type. You also specify general conditions in the action profile for the actions contained in the profile. For example:
- The time period in which the system starts the action (for example, when saving the document)
- The way in which the system executes the action (workflow, method call or Smart Forms)
In this activity, you create an action profile and templates for actions. You can define the action templates more closely in the step “Change action profiles” and “define conditions”.
For the action profile, the class that provides the attributes for your business object must be entered. These business objects can be used for planning actions. When creating an action profile, note for which business transaction type you can use this action profile. You must assign the relevant business object type to the action profile.
The assignment of the business object type makes sure that the attributes for the relevant business transaction type (for example, sales contract) can be used for defining and processing the conditions. If, for example, you wish to make the action depend on the net value of the transaction, the profile must be assigned to a business object type that contains the attribute net value. Only one business object can be assigned for each action profile.
You can find out the business object type for the transaction type or the item category in Customizing for transactions under Define transaction types or Define item categories. If you work with time-dependent conditions, you must also assign a date profile to the action definition. This makes sure that the date rules that you use for the action definitions are also available. You can also assign the date profile to the entire action profile. It is then inherited as the default value in every action definition you create for this profile.
When defining the follow-up documents, consider the copying control for the relevant transaction types. You can also define here whether an action is to be partner-dependent.
Note also the copying control for the relevant transaction types when defining subsequent documents.
You can enter several processing types for one action definition. Under processing, choose:
Method calls;
If the action consists of one single step, for example, create subsequent document or create credit memo item.
During the method call, processing is carried out via Business-Add-Ins (BAdIs). Standard methods (BAdIs) are available.
When creating your own BAdI implementations, make sure that the method ‘get_ref_object’ is always called from the class ‘CL_ACTION_EXECUTE’, and the method ‘register_for_save’ always at the end.
You can use the implementations ‘COPY_DOCUMENT’ and ‘COPY_ITEM_LOCAL’ as a template.
If you want to use actions to trigger alerts, use processing method ‘TRIGGER_ALERT’. You should call this method from the class ‘CL_ACTION_EXECUTE’.
- Workflow:
If the action consists of a process with several steps, for example, a subsequent document with approval procedure.
- Smart Forms:
For issuing documents via fax, printing, or email
Requirements:
In order to create action profiles, you must have defined the necessary transaction types or item categories.
If you are using time-dependent conditions, you need to have defined date profiles. You define date profiles in the IMG under Basic Functions -> Date Management.
Standard settings:
SAP delivers the following standard action profiles:
For activities: ACTIVITY contains the following action definitions:
- ACTIVITY_FOLLOWUP: Creates a task for the responsible employee if a business activity is overdue.
- ACTIVITY_PRINT: Makes it possible to print the activity. | http://mindmajix.com/sap-solution-manager-interview-questions | CC-MAIN-2017-13 | refinedweb | 2,914 | 54.32 |
ZDI-CAN-3766: Mozilla Firefox ClearKey Decryptor Heap Buffer Overflow Remote Code Execution Vulnerability
Status: RESOLVED FIXED in Firefox 48
Priority: P1
Severity: normal
Reporter: abillings; Assigned: gerald
Keywords: csectype-bounds, sec-high
Firefox Tracking Flags
(firefox46 wontfix, firefox47 wontfix, firefox48+ fixed, firefox49+ fixed, firefox-esr38 wontfix, firefox-esr45 48+ fixed, firefox50+ fixed)
Details
(Whiteboard: [adv-main48+][adv-esr45.3+])
Attachments
(2 attachments)
Created attachment 8754923 [details]
ZDI POC

We received the following report from Trend Micro's Zero Day Initiative (ZDI):

ZDI-CAN-3766: Mozilla Firefox ClearKeyDecryptor Heap Buffer Overflow Remote Code Execution Vulnerability

-- CVSS -----------------------------------------
6.8, AV:N/AC:M/Au:N/C:P/I:P/A:P

-- ABSTRACT -------------------------------------
Trend Micro's Zero Day Initiative has identified a vulnerability affecting the following products:
Mozilla Firefox

-- VULNERABILITY DETAILS ------------------------
Tested against Firefox 45.0.2 on Windows 8.1

```
(984.bac): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=0253170d ebx=0252effd ecx=0000270d edx=00002710 esi=0252f000 edi=05174ffb
eip=672af26a esp=063efab8 ebp=05174ff8 iopl=0         nv up ei pl nz ac pe cy
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00010217
clearkey!memcpy+0x2a:
672af26a f3a4            rep movs byte ptr es:[edi],byte ptr [esi]
0:004> kv
ChildEBP RetAddr  Args to Child
063efabc 672a36fb 05174ff8 0252effd 00002710 clearkey!memcpy+0x2a (FPO: [3,0,2]) (CONV: cdecl) [f:\dd\vctools\crt\crtw32\string\i386\memcpy.asm @ 188]
063efb04 672a366e 0252eff8 00000004 00000000 clearkey!ClearKeyDecryptor::Decrypt+0x5c (FPO: [3,10,0]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\clearkeydecryptionmanager.cpp @ 182]
063efb20 672a5e2a 02200fa8 02200fcc 063efb90 clearkey!ClearKeyDecryptionManager::Decrypt+0x3b (FPO: [2,0,4]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\clearkeydecryptionmanager.cpp @ 138]
063efb48 672ab4ec 02200fa8 5ca1ff99 03109450 clearkey!VideoDecoder::DecodeTask+0x84 (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\videodecoder.cpp @ 167]
063efb50 5ca1ff99 03109450 5b9cfd3d 063efc88 clearkey!gmp_task_args_m_1<VideoDecoder *,void (__thiscall VideoDecoder::*)(VideoDecoder::DecodeData *),VideoDecoder::DecodeData *>::Run+0xe (FPO: [0,0,0]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\media\gmp-clearkey\0.1\gmp-task-utils-generated.h @ 133]
063efb58 5b9cfd3d 063efc88 0311b8e0 063efbc0 xul!mozilla::gmp::Runnable::Run+0xb (FPO: [0,0,4]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\dom\media\gmp\gmpplatform.cpp @ 41]
063efbc0 5b9cf5d1 063efc88 063efc88 03104b08 xul!MessageLoop::DoWork+0x19c (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_loop.cc @ 459]
063efc08 5b9cfff3 063efc88 0ff0bb96 5bca3c39 xul!base::MessagePumpDefault::Run+0x2e (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_pump_default.cc @ 35]
063efc40 5b9d0039 03104b1c 00000001 7503ef00 xul!MessageLoop::RunHandler+0x20 (FPO: [SEH]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_loop.cc @ 228]
063efc60 5c026784 5bca3c39 063efd5c 03119610 xul!MessageLoop::Run+0x19 (FPO: [Non-Fpo]) (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\message_loop.cc @ 202]
063efd44 5bca3c42 770f4198 03104b08 770f4170 xul!base::Thread::ThreadMain+0x382b3d (CONV: thiscall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\thread.cc @ 175]
063efd48 770f4198 03104b08 770f4170 ba9c818e xul!`anonymous namespace'::ThreadFunc+0x9 (FPO: [1,0,0]) (CONV: stdcall) [c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\ipc\chromium\src\base\platform_thread_win.cc @ 27]
063efd5c 77722cb1 03104b08 100f92e3 00000000 KERNEL32!BaseThreadInitThunk+0x24 (FPO: [Non-Fpo])
063efda4 77722c7f ffffffff 7774e75f 00000000 ntdll!__RtlUserThreadStart+0x2b (FPO: [SEH])
063efdb4 00000000 5bca3c39 03104b08 00000000 ntdll!_RtlUserThreadStart+0x1b (FPO: [Non-Fpo])
0:004> !lmi clearkey
Loaded Module Info: [clearkey]
         Module: clearkey
   Base Address: 672a0000
     Image Name: C:\Program Files\Mozilla Firefox\gmp-clearkey\0.1\clearkey.dll
   Machine Type: 332 (I386)
     Time Stamp: 57070eac Thu Apr 07 18:51:40 2016
           Size: 32000
       CheckSum: 33f5d
Characteristics: 2122
Debug Data Dirs: Type  Size     VA  Pointer
             CODEVIEW    82, 297a8,   27fa8 RSDS - GUID: {94D222EF-9D7F-45FF-81C0-C0F16ADA8872}
               Age: 2, Pdb: c:\builds\moz2_slave\rel-m-rel-w32_bld-000000000000\build\obj-firefox\media\gmp-clearkey\0.1\clearkey.pdb
                   ??    14, 2982c,   2802c [Data not mapped]
                CLSID     4, 29840,   28040 [Data not mapped]
     Image Type: MEMORY   - Image read successfully from loaded memory.
    Symbol Type: PDB      - Symbols loaded successfully from image header.
                 z:\export\symbols\clearkey.pdb\94D222EF9D7F45FF81C0C0F16ADA88722\clearkey.pdb
       Compiler: Linker - front end [0.0 bld 0] - back end [12.0 bld 30723]
    Load Report: private symbols & lines, source indexed
                 z:\export\symbols\clearkey.pdb\94D222EF9D7F45FF81C0C0F16ADA88722\clearkey.pdb
0:004> lmvm clearkey
start    end        module name
672a0000 672d2000   clearkey   (private pdb symbols)  z:\export\symbols\clearkey.pdb\94D222EF9D7F45FF81C0C0F16ADA88722\clearkey.pdb
    Loaded symbol image file: C:\Program Files\Mozilla Firefox\gmp-clearkey\0.1\clearkey.dll
    Image path: C:\Program Files\Mozilla Firefox\gmp-clearkey\0.1\clearkey.dll
    Image name: clearkey.dll
    Timestamp:        Thu Apr 07 18:51:40 2016 (57070EAC)
    CheckSum:         00033F5D
    ImageSize:        00032000
    File version:     45.0.2.5941
    Product version:  45.0.2.5941
    File flags:       0 (Mask 3F)
    File OS:          4 Unknown Win32
    File type:        2.0 Dll
    File date:        00000000.00000000
    Translations:     0000.04b0
    CompanyName:      Mozilla Foundation
    ProductName:      Firefox
    InternalName:     Firefox
    OriginalFilename: clearkey.dll
    ProductVersion:   45.0.2
    FileVersion:      45.0.2
    FileDescription:  45.0.2
    LegalCopyright:   License: MPL 2
    LegalTrademarks:  Mozilla
    Comments:         Mozilla
0:004> vertarget
Windows 8 Version 9600 UP Free x86 compatible
Product: WinNt, suite: SingleUserTS
kernel32.dll version: 6.3.9600.17415 (winblue_r4.141028-1500)
Machine Name:
Debug session time: Mon Apr 18 12:07:56.520 2016 (UTC - 7:00)
System Uptime: 0 days 2:05:09.672
Process Uptime: 0 days 0:00:57.841
  Kernel time: 0 days 0:00:00.046
  User time: 0 days 0:00:00.031
```

-- CREDIT ---------------------------------------
This vulnerability was discovered by:
Anonymous working with Trend Micro's Zero Day Initiative

----

The readme on the POC states:

Firefox 45.0.1 ClearKeyDecryptor::Decrypt() (media/gmp-clearkey/0.1/ClearKeyDecryptionManager.cpp) heap overflow.
The problem is that we fully control aBufferSize, ClearBytes and CipherBytes arrays.
To debug you will need to attach to plugin-container.exe (clearkey.dll should be loaded).

How to test:
1) copy all files from this directory to www dir
2) open index.html
3) alert will pop up, attach to plugin-container.exe
4) back to firefox, press OK to continue
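The overflow the readme describes comes down to arithmetic on attacker-controlled sizes: each subsample contributes a clear-byte and a cipher-byte count taken straight from the media sample, and if nothing compares their sum against the real buffer size, the copy loop runs past the end of the heap allocation (the `rep movs` crash above). A minimal sketch of the missing validation — the names here are illustrative, not the actual Firefox source:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative subsample record: both counts come straight from the
// (attacker-controlled) sample metadata.
struct Subsample {
  uint32_t mClearBytes;
  uint32_t mCipherBytes;
};

// True only if every subsample byte fits inside the buffer. The sum is
// accumulated in 64 bits so two large 32-bit counts cannot wrap around
// and slip past the comparison.
bool SubsamplesFitInBuffer(const std::vector<Subsample>& aSubsamples,
                           uint32_t aBufferSize) {
  uint64_t total = 0;
  for (const Subsample& s : aSubsamples) {
    total += s.mClearBytes;
    total += s.mCipherBytes;
  }
  return total <= aBufferSize;
}
```

A cipher-byte count around 0x2710 (10000) against a small buffer — the kind of value visible in the crash registers above — is exactly what such a check would reject.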
Priority: -- → P1
Created attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch Detect OOB copy attempts in clearkey decryptor. This detects the issue quite late in the decryption process, on the child side. Chris, should we investigate ways to prevent this issue from the parent side? (This would protect other GMPs against this attack, in case they don't check for it either.)
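As a rough illustration of what a child-side early-out looks like (an assumed shape with illustrative names — the actual change is the attached patch): validate the requested sizes up front and bail out, and only then let the clear/cipher copy loop touch the buffers.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

struct SubsampleEntry {
  uint32_t mClearBytes;
  uint32_t mCipherBytes;
};

// Sketch of a guarded copy: reject malformed input before the per-subsample
// memcpy loop, instead of copying out of bounds. Returns false on rejection.
bool SafeDecryptCopy(uint8_t* aDst, uint32_t aDstSize,
                     const uint8_t* aSrc, uint32_t aSrcSize,
                     const std::vector<SubsampleEntry>& aSubsamples) {
  uint64_t needed = 0;  // 64-bit sum: immune to 32-bit wrap-around
  for (const SubsampleEntry& e : aSubsamples) {
    needed += e.mClearBytes;
    needed += e.mCipherBytes;
  }
  if (needed > aDstSize || needed > aSrcSize) {
    return false;  // attacker-controlled sizes exceed the real buffers
  }
  uint8_t* out = aDst;
  const uint8_t* in = aSrc;
  for (const SubsampleEntry& e : aSubsamples) {
    std::memcpy(out, in, e.mClearBytes);   // clear bytes pass through
    out += e.mClearBytes;
    in += e.mClearBytes;
    std::memcpy(out, in, e.mCipherBytes);  // ciphertext would be decrypted here
    out += e.mCipherBytes;
    in += e.mCipherBytes;
  }
  return true;
}
```

The total copied equals the validated sum, so every individual `memcpy` inside the loop stays within bounds once the single up-front check passes.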
Assignee: nobody → gsquelart
Attachment #8756939 - Flags: review?(cpearce)
Comment on attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch [Security approval request comment] How easily could an exploit be constructed based on the patch? Not that obvious (to me). The attacker would have to trace back where the parameters come from, then tailor a video file to give tricky values there. I'm not so sure there's a remote-code-execution possibility, as the target buffer is on the heap. Also, this is in the GMP, which runs inside a sandbox, making it harder to do real harm. Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem? The comments and code point at the reading-past-the-end issue, not about the writing part, so it's hiding the problem a bit. Which older supported branches are affected by this flaw? All of them (code landed in FF35, Sept 2014) Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be? It applies cleanly to aurora and beta. Easy to rebase for release and ESRs, if needed. How likely is this patch to cause regressions; how much testing does it need? I'd like to say zero chance of regression, as it's a simple test followed by an early return. No special testing needed apart from checking that the POC now fails gracefully. *?
Flags: needinfo?(cpearce)
Attachment #8756939 - Flags: sec-approval?
This needs a security rating before you know if it needs sec-approval to go in. That said, it has missed the 47 window and, even if approved, wouldn't go in until June 21 (two weeks into the next cycle).
(In reply to Al Billings [:abillings] from comment #3) > This needs a security rating before you know if it needs sec-approval to go > in. That wiki page says: > For security bugs with no sec- severity rating assume the worst and follow the rules for sec-critical. > [...] > if the bug has a patch *and* is sec-high or sec-critical, the developer should set the sec-approval flag to '?' on the patch And > If you have a patch and the bug is a hidden core-security bug with no rating then either: > 1. request sec-approval (to be safe) and wait for a rating, And > If developers are unsure about a bug and it has a patch ready, just mark the sec-approval flag to '?' and move on. All this tells me I was allowed to request sec-approval before knowing the exact sec-rating. Did I misread? If I were to rate the bug, I'd probably go with sec-high: I'm not so sure about the code execution possibility, but even if it was possible, it would be contained in a plugin in a sandbox, making it harder to cause more harm.
status-firefox46: --- → affected
status-firefox47: --- → affected
status-firefox48: --- → affected
status-firefox49: --- → affected
status-firefox-esr38: --- → affected
status-firefox-esr45: --- → affected
(In reply to Gerald Squelart [:gerald] (may be slow to respond) from comment #2) > *? I think it's unlikely that the Adobe GMP is also vulnerable to this; their DRM robustness requirements should have required them to *not* use a copy of our decrypt loop here in their own decrypt code. In fact, IIRC they don't support decrypt-only mode, so that code path shouldn't be hit even if they copied it.
Flags: needinfo?(cpearce)
(In reply to Gerald Squelart [:gerald] (PTO until 2016-06-13) from comment #4) > All this tells me I was allowed to request sec-approval before knowing the > exact sec-rating. Did I misread? My bad. I'd forgotten I'd put that in there. I'll make this sec-high. This is too late for 47 in any case (since we're about to make final builds) so this won't be able to check in until June 21, two weeks into the next cycle.
Keywords: sec-high
Whiteboard: [checkin on 6/21]
status-firefox46: affected → wontfix
status-firefox47: affected → wontfix
status-firefox-esr38: affected → wontfix
tracking-firefox49: --- → +
tracking-firefox-esr45: --- → 48+
tracking-firefox48: --- → +
We'll want branch patches for affected branches once it goes into trunk.
status-firefox50: --- → affected
tracking-firefox50: --- → +
The attached patch still applies cleanly to central, aurora, beta, and esr45, so it's ready to go.
It is okay to land this now.
Keywords: checkin-needed
Keywords: checkin-needed
Whiteboard: [checkin on 6/21]
Status: NEW → RESOLVED
Last Resolved: 2 years ago
status-firefox50: affected → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla50
Group: media-core-security → core-security-release
Hi Gerald, could you please nominate the patch for uplift to Beta, Aurora and ESR45? I could do it for you but I'd like to get your thoughts on risk, test coverage and whether this is easily exploitable or not. Thanks!
Flags: needinfo?(gsquelart)
Comment on attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch Approval Request Comment [Feature/regressing bug #]: Clearkey media playback. [User impact if declined]: Potential RCE in sandboxed plugin. [Describe test coverage new/current, TreeHerder]: Locally tested by running the POC; landed in Nightly for a week. [Risks and why]: I'd like to say none, this is a simple past-the-end test resulting in early function exit with an error code. [String/UUID change made/needed]: None. [ESR Approval Request Comment] If this is not a sec:{high,crit} bug, please state case for ESR consideration: User impact if declined: sec-high. Fix Landed on Version: 50. Risk to taking this patch (and alternatives if risky): No risks that I can see. String or UUID changes made by this patch: None. See for more info. (In reply to Ritu Kothari (:ritu) from comment #12) > Hi Gerald, could you please nominate the patch for uplift to Beta, Aurora > and ESR45? I could do it for you but I'd like to get your thoughts on risk, > test coverage and whether this is easily exploitable or not. Thanks! This patch applies as-is on Aurora, Beta, and ESR45. As I wrote above, I personally don't see any risk with this patch. As to the exploitability, I think it would be difficult to exploit, as the overwritten buffer is in heap space (not stack); Also it is in a plugin running in a sandbox, so should the issue be exploitable, it would (hopefully) require a lot of work to be able to cause any real harm outside of the video playback.
Flags: needinfo?(gsquelart)
Attachment #8756939 - Flags: approval-mozilla-esr45?
Attachment #8756939 - Flags: approval-mozilla-beta?
Attachment #8756939 - Flags: approval-mozilla-aurora?
Comment on attachment 8756939 [details] [diff] [review] 1274637-detect-oob-copy-attempts-in-clearkey-decryptor.patch Sec-high issue, Aurora49+, Beta48+, ESR45+
Attachment #8756939 - Flags: approval-mozilla-esr45?
Attachment #8756939 - Flags: approval-mozilla-esr45+
Attachment #8756939 - Flags: approval-mozilla-beta?
Attachment #8756939 - Flags: approval-mozilla-beta+
Attachment #8756939 - Flags: approval-mozilla-aurora?
Attachment #8756939 - Flags: approval-mozilla-aurora+
status-firefox48: affected → fixed
status-firefox49: affected → fixed
status-firefox-esr45: affected → fixed
Flags: qe-verify+
Alias: CVE-2016-2837
Whiteboard: [adv-main48+][adv-esr45.3+].
Flags: qe-verify+
Flags: needinfo?(mwobensmith)
Flags: needinfo?(gsquelart)
Mihai, you are correct. Sorry about that. Marking qe-verify- as a result. If Gerald or anyone else has a way to verify this change without debugging, let us know.
Flags: needinfo?(mwobensmith) → qe-verify-
(In reply to Mihai Boldan, QA [:mboldan] from comment #16) >. I've just tried on Mac OS X, and I can see a difference: - With an unpatched FF 47, after opening the POC's index.html and pressing OK, a drop-down notification bar says "The Clearkey plugin has crashed" (most probably due to the unchecked memory copy trying to write outside of its permitted bounds). - With patched FF 50, 49b, and 48a, there is no such notification, the plugin does not crash. The crash may be dependent on the OS and other environment factors. Would the QA team be able to try on different platforms?
Flags: needinfo?(gsquelart)
I managed to reproduce this issue following the STR from Comment 0, using Firefox 47.0.1 and on Windows 10 x64. I've tested this issue on Firefox 45.3.0 ESR, Firefox 48.0, Firefox 49.0b1, Firefox 50.0a2 (2016-08-08) and on Firefox 51.0a1 (2016-08-08), across platforms [1 ]and I confirm that the notification is no longer displayed and the plugin does not crash. [1]Windows 10 x64, Mac OS X 10.11.1, Ubuntu 16.04x64
Group: core-security-release
Keywords: csectype-bounds | https://bugzilla.mozilla.org/show_bug.cgi?id=1274637 | CC-MAIN-2018-39 | refinedweb | 2,616 | 50.53 |
Imports and modules
In Elm you import a module by using the
import keyword e.g.
import Html
This imports the
Html module. Then you can use functions and types from this module by using its fully qualified path:
Html.div [] []
You can also import a module and expose specific functions and types from it:
import Html exposing (div)
div is mixed in the current scope. So you can use it directly:
div [] []
You can even expose everything in a module:
import Html exposing (..)
Then you would be able to use every function and type in that module directly. But this is not recommended most of the time because we end up with ambiguity and possible clashes between modules.
Modules and types with the same name
Many modules export types with the same name as the module. For example, the
Html module has an
Html type and the
Task module has a
Task type.
So this function that returns an
Html element:
import Html myFunction : Html.Html myFunction = ...
Is equivalent to:
import Html exposing (Html) myFunction : Html myFunction = ...
In the first one we only import the
Html module and use the fully qualified path
Html.Html.
In the second one we expose the
Html type from the
Html module. And use the
Html type directly.
Module declarations
When you create a module in Elm, you add the
module declaration at the top:
module Main exposing (..)
Main is the name of the module.
exposing (..) means that you want to expose all functions and types in this module. Elm expects to find this module in a file called Main.elm, i.e. a file with the same name as the module.
You can have deeper file structures in an application. For example, the file Players/Utils.elm should have the declaration:
module Players.Utils exposing (..)
You will be able to import this module from anywhere in your application by:
import Players.Utils | https://www.elm-tutorial.org/en/01-foundations/04-imports-and-modules.html | CC-MAIN-2019-04 | refinedweb | 320 | 66.23 |
I'm trying to find each node distance from the starting node and the connections between the nodes is given in a dictionary
My code works well with small dictionary like this example but the problem with dictionaries with more than 20 nodes I got an error
if child != parent and child not in my_list:
RecursionError: maximum recursion depth exceeded in comparison
def compute_distance(node, dic, node_distance, count, parent, my_list):
children = dic[node]
node_distance[node].append(count)
for child in children:
if child != parent and child not in my_list:
compute_distance(child, dic, node_distance, count + 1, node, children)
node_distance_dic = {}
node_distance_dic = {k: min(v) for k, v in node_distance.items()}
return node_distance_dic
if __name__ == '__main__':
starting_node = 9
dic = {0: [1, 3], 1: [0, 3, 4], 2: [3, 5],
3: [0, 1, 2, 4, 5, 6], 4: [1, 3, 6, 7],
5: [2, 3, 6], 6: [3, 4, 5, 7, 8],
7: [4, 6, 8, 9], 8: [6, 7, 9], 9: [7, 8]}
print(compute_distance(starting_node, dic, defaultdict(list), 0, 0, []))
{0: 4, 1: 3, 2: 4, 3: 3, 4: 2, 5: 3, 6: 2, 7: 1, 8: 1, 9: 0}
I guess
my_list is here to keep track of the nodes already visited, but you never update it. Therefore, you should add the node you are processing in order not to call recursion on nodes that you already went through. Currently, your code goes in infinite loop as soon as there is a cycle in the graph. Plus, don't forget to pass it to the next level:
def compute_distance(node, dic, node_distance, count, parent, my_list): my_list.append(node) ... compute_distance(child, dic, node_distance, count + 1, node, my_list) ...
However, this method does not compute the shortest path from the starting node to every other, it just do a simple traversal of the graph (underlying algorithm is DFS).
In order to implement what you want, that is the min distance from the source to every other node, you should look into Breadth-First Search (commonly called BFS).
It will solve your problem in linear time. | https://codedump.io/share/Vawi2T91Bgwt/1/maximum-recursion-depth-exceeded-in-comparison-in-python | CC-MAIN-2017-13 | refinedweb | 341 | 61.09 |
______________________________________________________________
> Od: grahamd at dscpl.com.au
> Komu: ".: smilelover :." <smilelover at centrum.cz>
> CC: <mod_python at modpython.org>
> Datum: 11.05.2006 12:17
> Předmět: Re: [mod_python] Re: mod_python and module importing
>
>
> On 11/05/2006, at 6:15 PM, .: smilelover :. wrote:
>
> > ______________________________________________________________
> >> Od:.
>
> Add an __init__.py file into the "lib" directory doesn't make a great
> deal of sense
> unless what you were actually doing was:
>
> from lib import MyWebTools
>
> That is, "lib" was really acting like a package. If not, I don't see how
> it would make
> any difference.
>
> Graham</mod_python at modpython.org>
Yes, I want it to be a package and that's why I added __init__.py and use the "from ..." stuff. When I change the import command to
import MyWebTools
it works only sometimes (when reloading the page for several times), it's strange.
I think it shouldn't work at all, because there is no standalone module called MyWebTools in the base directory and no package called MyWebTools (just the lib pack).
Dan | http://modpython.org/pipermail/mod_python/2006-May/021100.html | CC-MAIN-2018-09 | refinedweb | 172 | 77.33 |
How to Compile Supplemental Documents for Estate Form 706
Completing the 706 may seem bad enough to you, but you probably have a pile of supporting documentation that you need to send with it. If you do, attach whichever of the following documents are applicable in your decedent’s estate to the return when you file it.
Consider preparing an index, or list, of exhibits. Attach the index directly behind the 706 and label each of the documents on its face as Exhibit A, Exhibit B, and so on. Using index tabs for each exhibit is also a terrific idea.
Also, when referring to each of the attached documents in the 706, give them specific names (such as Exhibit A, Exhibit B, and so on) for clarity. The clearer you make things for the IRS, the happier it’ll be as it reviews your tax return (and the more comfortable it’ll be that you’re disclosing everything).
Among the documents to attach to the return are
A certified copy of the decedent’s death certificate (required in all cases).
A certified copy of the will, if the decedent died with a will. You obtain certified copies of the will from the court where the will was filed for probate. If you’re unable to get a certified copy, attach an uncertified copy and explain why it’s not certified.
A certified copy of your appointment as executor or letters testamentary.
An IRS Power of Attorney (Form 2848).
Receipt for payment of state inheritance or estate taxes.
Appraisals of property.
Life insurance statements (Form 712). See the discussion of Schedule D.
Gift tax returns (Form 709).
Certificates of Payment of Foreign Death Tax (Form 706-CE).
Copies of trust documents.
For closely held businesses, earnings statements and balance sheets.
If the decedent was a U.S. citizen but not a resident, you need to attach the following additional documentation:
A copy of the property inventory and schedule of liabilities, claims against the estate, and administration expenses as filed with the foreign court, certified by the appropriate official of the foreign court
If the estate is subject to a foreign tax, a copy of the tax return filed under the foreign death tax act, whether estate, inheritance, succession, legacy, or otherwise | http://www.dummies.com/how-to/content/how-to-compile-supplemental-documents-for-estate-f.navId-323702.html | CC-MAIN-2015-14 | refinedweb | 380 | 55.13 |
,
did anyone manage to wrap an application which relies on the doc-view
architecture and MFC?
More specific: I have an app which uses pythonwin.exe as a boilerplate (just
another start-up script). run_w.c fires up the python engine and links the
zip-archive, but the startup script silently fails: among others, I have to
host win32uiHostGlue. Although it is easy to convert run_w to a MFC app, and
add the host glue object, I am wondering about the mutual dependencies of
win32HostGlue and py2exe's import mechanism.
So my questions are:
-did anyone already create such a combination?
-does anyone know about the 'additional modules' I have to add manually (at
least, py2exe does not find scintilla)?
Regards
Wolfgang Zint
Thomas Heller <theller@...> writes:
> Still, I would really like you to post a reproducible testcase for
> Windows, because I HAVE seen cases where to many careless imports fail
> in the py2exe'd app. I'm willing to install even other packages, if it
> is required ;-)
Just for reference in the archives, I followed up on this with Thomas
directly with some of my original code base, and he tracked it down to
an issue with the Python modulefinder.py library module. The missing
import inside of twisted (per Thomas' last response) was flagged as a
badmodule (as the non-qualified "resource") that then filtered out any
other non-qualified "resource" imports even if they were in different
packages and did have matching modules.
The fix was to remove the 2 checks in _safe_import_hook that skipped
over modules in self.badmodules - hopefully that will make it into the
Python source tree at some point going forward.
-- David
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/py2exe/mailman/py2exe-users/?viewmonth=200508&viewday=11 | CC-MAIN-2016-30 | refinedweb | 322 | 62.48 |
Read Smooth Streaming in Silverlight here
This post is walkthrough on enabling smooth streaming of media over IIS and then stream over Windows Phone 7 client. There are very good documentation available on IIS official site to have a better understanding of smooth streaming. So I am not touching theoretical concept behind smooth streaming in this post. However I have shown steps by steps to enable smooth streaming of media over IIS then to stream over Windows Phone 7 client.
Essentially there are four majors steps you need to do
- Enable Smooth Streaming on IIS
- Encode Media File .There is a point to be noted here that Windows Phone 7 [at time of writing this post] does not support variable bit rate of streaming. It does support only VC-IIS Smooth Streaming -480 VPR
]
- Publish Encoded file over IIS
- Play the streamed media on Windows Phone 7 using smfPlayer.
Enable Smooth streaming on IIS
You need to download and install IIS Media Services. Go to below URL to download and install IIS Media Services.
After successful installation, you will have a section of Media Services in IIS.
Encode Media File
To encode media for IIS smooth streaming you need Microsoft Expression Encoder 4.0 pro. If you don’t have installed that then download and install that. It comes as part of Microsoft Expression.
Step 1
Very first open Microsoft Encoder Pro 4.0. You will get prompted to choose Project type to be loaded. Select Transcoding Project and press Ok.
Step 2
In this step you need to choose the media file to be encoded and streamed over IIS. Click on File in menu and select Import.
Choose media file to be encoded and streamed.
Step 3
In right side tab select Preset option. If you are unable to find Preset tab, select windows from menu and check preset.
From Preset tab select Encoding for Silverlight and then select IIS Smooth Streaming. After selecting option of IIS Smooth Streaming, you can choose VBR rate of your choice.
After IIS Smooth streaming type doesn’t forget to click Apply button.
Step 4
There are many options available for you to configure like,
- Thumbnails
- Security
- Template options
- Output path
- Publish etc
You can set values for above options as per your requirement. I am going to set output path here. Make sure you have created a folder on your local derive to set as output path for the encoded media for the streaming. On my local D drive, I have created a folder called StreamDemo. So set the output path as below
Make sure to check Sub-folder by job ID. Leave Media File name as default name.
Step 5
If you have set the values for everything required then go ahead and click on the Encode button in the bottom.
You should be getting Encoding status message as below.
After successful encoding you will get encoded media playing the browser from the local derive. Now you have encoded media file.
Publish Encoded File over IIS
To stream over IIS, you need to publish encoded media over IIS. Process to publish encoded media is same as publishing any other web site over IIS.
Step 1
Open IIS as administrator and add a new web site
Step 2
In dialog box you need to provide below information
- Site name: Give any name of your choice
- Port : Choose any available port above 1024
- Host Name : Leave it empty
- Type : Http
- Check the check box start web site immediately.
- In Physical path, give the same location Encoded file is saved. In previous step while encoding media for streaming, I save file in location D:\StreamDEmo. So I am going to set Physical Path as D:\StreamDEmo.
Click Ok to create web site in IIS. So we created site StreamingMediaDemo1. Right click on that and select Manage Web Site and then browse.
On browsing most likely you will get forbidden error message as below,
Append Default.html in URL to play the media.
Media will be played as below,
Now append wildlife.ism/manifest to the URL. This manifest file would be used in Windows Phone 7 and other clients to stream media over IIS.
Playing in Windows Phone 7
To Play media on Windows Phone 7 download below player from the CodePlex.
After download extract the file on locally. You will have to add references of these dll in Windows Phone project. Since dll got downloaded from the web, so they would be locked. To use them in the project right click and unblock them.
Note : I would recommend to download msi file and install that to get all the required files.
Step 1
Create a Windows Phone Application
Choose target Windows Phone version 7.0
Step 2
Right click and add the references. If you have run msi then you will get below references at the location,
C:\Program Files (x86)\Microsoft SDKs\Microsoft Silverlight Media Framework\v2.4\Silverlight For Phone\Bin
Step 3
You need to download IIS smooth client and install from below URL.
Step 4
Right click on Windows Phone 7 project and add reference of Microsoft.web.media.smoothstreaming.dll
To locate this file on your local drive browse to C:\Program Files (x86)\Microsoft SDKs\IIS Smooth Streaming Client\v1.5\Windows Phone on 64 bit machine.
Step 5
Next we need to design Silverlight page. Open MainPage.xaml and add namespace,
And add a player on the page downloaded to play stream media as below,
Eventually xaml will be as below with a textblock to display message and player
<phone:PhoneApplicationPage x: <StackPanel Orientation="Vertical"> <TextBlock Text="Streaming Media from IIS on Silverlight" Height="22" Width="266" FontSize="12" Foreground="Blue"/> <Core:SMFPlayer </StackPanel> <>
Step 6
We need to write some code on page load to create play list of streamed media and play in the player.
using System; using Microsoft.Phone.Controls; using Microsoft.SilverlightMediaFramework.Core.Media; namespace Streaming { public partial class MainPage : PhoneApplicationPage { // Constructor public MainPage() { InitializeComponent(); PlaylistItem item = new PlaylistItem(); item.MediaSource = new Uri(""); item.DeliveryMethod = Microsoft.SilverlightMediaFramework.Plugins.Primitives.DeliveryMethods.AdaptiveStreaming; strmPlayer.Playlist.Add(item); strmPlayer.Play(); } } }
Step 7
Press F5 to run the application.
These were what all required to smooth stream media from IIS and play in Windows Phone 7 client. I hope this post was useful. I am looking very forward for your comments on the post. Thanks for reading
Follow @debug_mode
Pingback: Dew Drop – August 1, 2011 | Alvin Ashcraft's Morning Dew
Pingback: Smooth Streaming on Windows Phone 7
Pingback: Smooth Streaming on Windows Phone 7 –
Pingback: Ottimo articolo su come si configura iis streaming | Windows Phone Tips
Pingback: Monthly Report July 2011: Total Posts 16 « debug mode……
Hi Dhananjay ,
How can we change the controls and add new controls like volume etc.
Rgds,
Amit
Really super code, but, i am unable to view the video
I have my own manifest file and replaced in the code, but unable to see the video | http://debugmode.net/2011/07/31/smooth-streaming-on-windows-phone-7/ | CC-MAIN-2015-32 | refinedweb | 1,165 | 64.51 |
So me and GF (both late 20s Americans) have been talking about doing an Australian WHV sometime this year, when I start reading shit like this:
TL,DR: visa holders are about to get fucked by getting taxed at 30+% and this has us reconsidering our options.
Straya had us thinking we'd be making bank (relatively speaking), but looks like that isn't the case anymore. Now New Zealand is starting to look like the better option for a WHV, with AU as a side trip at best.
>>1072246
You got fucked when you were given that fake 50c coin
>>1072288.
tl;dr OP did not get ripped off.
>>1072293
Damn, I was just going to say that it was a real coin and I used to have a few in a box in my cupboard. You've done your research.
>>1072246
>New Zealand is starting to look like the better option for a WHV, with AU as a side trip at best.
You're probably right. Straya fucks anyone it can for tax, unless they're rich of course.
>>1072288
>>1072293
>>1072501
OP here, it's just a picture of the most Australian looking piece of currency I could find. The rest had the Queen and shit.
I'm wondering if this tax hike is changing any of /trv/'s plans. Is anyone cancelling or changing plans for a working holiday over this, or just biting the bullet and going anyway?
While we had been talking about NZ before learning about this, would it really be better? What other options might there be?
>>1072560
Have you seen the exchange rate? Your US super dollar is all you need. Just work at home and have a proper holiday
>>1072560
Suicide is always an option.
I knew that 50c was real but I'm glad I helped you coin blokes bond.
>>1072246
WHV tourists don't add much to the economy of the culture. They spend very little and turn decent towns into tourist dumps full of mouldy hostels. New Zealand is welcome to your minimal dollars.
>>1072992
*economy or the culture
>>1072288
actually that's a rare 50c piece
I had one but my dumbshit mother put it in a vending machine, I didn't talk to her for a week
>>1072246
it's a deliberate move OP, these rural areas are experiencing a spike in unemployment and working visas are hurting locals
go to new zealand
>>1072992
You are an idiot. This is an amazing program that Australia invented to get cheap labour that most Australians don't want to do. Do you not fucking realize what the alternative is? It's bringing in foreign workers from shitholes like Philippines, India, Indonesia and Pakistan. Would you rather have Germans, Canadians and Brits around or fucking Pakis?
Why does OP care if they pay 30% tax? Non Aussie residents get 100% tax back in tax returns
>>1073706
This. So what is the worry?
>>1072246
>>1073706
this. if you lazy arse backpackers actually do a tax return ( which may cost you like $100 to go to a tax agent or you can do it online for free) you'll only get taxed it you earn higher than the threshold, which most backpackers dont.
ffs | http://4archive.org/board/trv/thread/1072246 | CC-MAIN-2017-04 | refinedweb | 544 | 80.72 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
sum of minutes not calculated in customized timesheet
Byon 4/28/16, 5:34 AM • 303 views
aneesh ATEES Infomedia Pvt Ltd
cretaed customized timesheet need to get the total minutes and hour ..but minutes are not showing in total only hour caluculated accuratly.this is my code
def amount_all(self, cr, uid, ids, field, arg, context=None): res = {} for order in self.browse(cr, uid, ids, context=context): res[order.id] = {'total_line_sum': 'duration',} val = 0 for line in order.order_line: val += line.duration res[order.id] = val) | https://www.odoo.com/forum/help-1/question/sum-of-minutes-not-calculated-in-customized-timesheet-101295 | CC-MAIN-2016-50 | refinedweb | 120 | 58.28 |
Flash Player 10 and later, Adobe AIR 1.5 and
later
Using
a shader as a filter is like using any of the other filters in ActionScript.
When you use a shader as a filter, the filtered image (a display
object or BitmapData object) is passed to the shader. The shader
uses the input image to create the filter output, which is usually
a modified version of the original image. If the filtered object
is a display object the shader’s output is displayed on the screen
in place of the filtered display object. If the filtered object
is a BitmapData object, the shader’s output becomes the content
of the BitmapData object whose applyFilter() method
is called.
To use a shader as a filter, you first create the Shader object
as described in Loading or embedding a shader. Next you create a ShaderFilter object
linked to the Shader object. The ShaderFilter object is the filter
that is applied to the filtered object. You apply it to an object
in the same way that you apply any filter. You pass it to the filters property
of a display object or you call the applyFilter() method
on a BitmapData object. For example, the following code creates
a ShaderFilter object and applies the filter to a display object
named homeButton.
var myFilter:ShaderFilter = new ShaderFilter(myShader);
homeButton.filters = [myFilter];.
In some cases, a filter changes the dimensions of the original
image. For example, a typical drop shadow effect adds extra pixels
containing the shadow that’s added to the image. When you use a
shader that changes the image dimensions, set the leftExtension, rightExtension, topExtension,
and bottomExtension properties to indicate by how
much you want the image size to change.
The following example demonstrates using a shader as a filter.
The filter in this example inverts the red, green, and blue channel
values of an image. The result is the “negative” version of the
image. creates and draws the contents of an object named target.
The target object is a rectangle filled with a
linear gradient color that is red on the left, yellow-green in the
middle, and light blue on the right. The unfiltered object looks
like this:
With
the filter applied the colors are inverted, making the rectangle
look like this:
The shader that this example uses is the “invertRGB.pbk” sample
Pixel Bender kernel that is included with the Pixel Bender Toolkit.
The source code is available in the file “invertRGB.pbk” in the
Pixel Bender Toolkit installation directory. Compile the source
code and save the bytecode file with the name “invertRGB.pbj” in
the same directory as your ActionScript source code.
The following is the ActionScript code for this example. Use
this class as the main application class for an ActionScript-only
project in Flash Builder, or as the document class for the FLA file
in Flash Professional:
package
{
import flash.display.GradientType;
import flash.display.Graphics;
import flash.display.Shader;
import flash.display.Shape;
import flash.display.Sprite;
import flash.filters.ShaderFilter;
import flash.events.Event;
import flash.geom.Matrix;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;
public class InvertRGB extends Sprite
{
private var shader:Shader;
private var loader:URLLoader;
public function InvertRGB()
{
init();
}
private function init():void
{
loader = new URLLoader();
loader.dataFormat = URLLoaderDataFormat.BINARY;
loader.addEventListener(Event.COMPLETE, onLoadComplete);
loader.load(new URLRequest("invertRGB.pbj"));
}
private function onLoadComplete(event:Event):void
{
shader = new Shader(loader.data);
var target:Shape = new Shape();
addChild(target);
var g:Graphics = target.graphics;
var c:Array = [0x990000, 0x445500, 0x007799];
var a:Array = [255, 255, 255];
var r:Array = [0, 127, 255];
var m:Matrix = new Matrix();
m.createGradientBox(w, h);
g.beginGradientFill(GradientType.LINEAR, c, a, r, m);
g.drawRect(10, 10, w, h);
g.endFill();
var invertFilter:ShaderFilter = new ShaderFilter(shader);
target.filters = [invertFilter];
}
}
}
For more information on applying filters, see Creating and applying filters.
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | http://help.adobe.com/en_US/as3/dev/WS6FCADA8A-C82B-4d55-89AC-63CA9DEFF9C8.html | crawl-003 | refinedweb | 664 | 50.94 |
Over the past couple of months, we've been creating and using an internal company portal site built on Microsoft Office SharePoint Server (MOSS).
For several years now we've used a simple wiki to do the job of tracking what the company is doing, or should be doing but isn't, or is doing but shouldn't, or could be doing but doesn't have the resources for, or any other combination. That's been a pretty good solution over the past few years but this year we've come to the conclusion that a wiki's very freeform-ness translates a little too quickly to the execution of our company plans and roadmaps. Hence, we made the decision to make this stuff more rigorous and have settled on SharePoint to track it all.
Of course, there was an ulterior motive to this decision: we get a lot of requests for how to embed our ASP.NET controls into MOSS and what better way to accomplish that than to actually do it and use it ourselves.
This screenshot shows one of our experiments, one that we're using in the Marketing area of our portal: replacing the standard left-hand navigation bar with ASPxNavBar. It uses our BlackGlass theme and you can see the collapsible headers and sections of the navigation bar quite clearly. And we put it in the Marketing area because they're not the developers, so we'll get some real feedback...
So how's it done? As it happens, replacing an existing navigation pane with an ASPxNavBar is pretty simple: just a matter of dropping an ASPxNavBar control onto the page and binding it to a data source that supplies the quick launch links.
First, we open our site in Microsoft Office SharePoint Designer and check out the default.master page:
We'll manually register the ASPxNavBar and its assembly at the top of the page:
<%@ Register assembly="DevExpress.Web.v8.2, Version=8.2.4.0, Culture=neutral,
    PublicKeyToken=9b171c9fd64da1d1" namespace="DevExpress.Web.ASPxNavBar" tagprefix="dxnb" %>
As you can see, the v2008 vol 2 version of ASPxNavBar will work in MOSS quite nicely; no need to wait for v2008 vol 3.
Now find the default navigation pane <Sharepoint:SPNavigationManager/> named QuickLaunchNavigationManager, and replace it with our ASPxNavBar:
<dxnb:ASPxNavBar
    EnableViewState="false"
    runat="server">
</dxnb:ASPxNavBar>
We'll bind it to the list of quick links via a SiteMapDataSource like so:
<asp:SiteMapDataSource
    SiteMapProvider="SPNavigationProvider"
    ShowStartingNode="False"
    id="MyQuickLaunchSiteMap"
    runat="server" />

<dxnb:ASPxNavBar
    DataSourceID="MyQuickLaunchSiteMap"
    runat="server">
</dxnb:ASPxNavBar>
And we are done with the swapping part of the exercise. However, before the control can be used, it must be deployed on the target server and registered with SharePoint as a safe control.
Deploy the following assemblies to your server and GAC them: DevExpress.Web.v8.2.dll and DevExpress.Data.v8.2.dll. Then open your SharePoint website's web.config file and register them as "safe" by adding the following in the SafeControls section (note that your assembly version may be different).
<SafeControl Assembly="DevExpress.Web.v8.2, Version=8.2.4.0, Culture=neutral, PublicKeyToken=9b171c9fd64da1d1"
    Namespace="DevExpress.Web.ASPxNavBar" TypeName="*" Safe="True" />
<SafeControl Assembly="DevExpress.Data.v8.2, Version=8.2.4.0, Culture=neutral, PublicKeyToken=9b171c9fd64da1d1"
    Namespace="DevExpress.Data" TypeName="*" Safe="True" />
Now, onto the problem of applying themes. By far the easiest way to do this is to create a simple web site in Visual Studio, drop the control on the designer, theme it, and then copy/paste the generated theme files and aspx code into your SharePoint web site.
For example drop an ASPxNavBar control on a page and invoke AutoFormat from the smart tag:
Theme it using the BlackGlass theme:
At this point all the necessary theme files will have been generated in the \BlackGlass\Web folder. Copy that folder to your site's "_themes" folder and adjust the paths of the ASPxNavBar control accordingly:
<dxnb:ASPxNavBar
    EnableViewState="false" runat="server"
    width="100%"
    DataSourceId="MyQuickLaunchSiteMap"
    CssFilePath="/marketing/_themes/BlackGlass/Web/styles.css"
    CssPostfix="BlackGlass">
    <CollapseImage
        Height="17px"
        Url="/marketing/_themes/BlackGlass/Web/nbCollapse.gif" />
    <ExpandImage
        Height="17px"
        Url="/marketing/_themes/BlackGlass/Web/nbExpand.gif" />
</dxnb:ASPxNavBar>
And that's it. We get the view in the first image above.
We'll be bringing out more information about using our ASP.NET controls in MOSS over the coming days and weeks so that you too will be able to improve the usability of your SharePoint sites. Stay tuned.
Last week, I was chatting to a friend who'd posted in his journal that IE6 -- yes, 6 -- had 32% of the browser market. To me that figure seemed way too high (a third of all surfers are using IE6? We're in deep trouble, guys) and I asked him where he'd got it. It turns out that he'd quoted the IE6 share for 2007 from this wikipedia page (look for the sub-heading "Market share by year and version" about half way down). Both he and I then checked our own stats to find out that IE6 over the past month or so had roughly 15% browser share. To me that still seems high, but then again it takes all sorts; I'm a Firefox user through and through.
The interesting point we both noticed was that IE, as a whole, had less than 50% browser share, with Firefox being at roughly the same spot. I could even report that nearly 5% of the visitors to my site were using Chrome.
The point here is that, if you are targeting the Internet with your web application rather than just a closed environment like a company intranet, you can no longer assume that the majority of your visitors will be using Internet Explorer. In fact, I would go even further: you should be actively monitoring your web stats to see what people are using, both in terms of OS and browser, and making sure that the vast majority of your visitors get a good experience no matter what combination they're using. Losing potential customers because they happen to be using Firefox rather than IE is a short-sighted tactic indeed.
Tim Anderson reports today that the jQuery site looks scrambled to him in IE7 (it doesn't for me), but this goes to show that the main game in the browser town these days is not only HTML/CSS rendering but also JavaScript compatibility. If you've only just logged on and are wondering what's so special about jQuery, both Nokia and Microsoft announced Friday that it would become part and parcel of their web application platform. Yes, you'll be getting the open source jQuery library with your Visual Studio, the first time Microsoft will ship an open source library with its offerings (I remember the arguments way back when about including NUnit with Visual Studio, so this is a momentous occasion).
Of course, we at DevExpress have been making sure for a very long time that our JavaScript doesn't just work in IE. We recognized early on that we had to support more than the one browser with our controls and libraries and we continue to make sure that we support IE, Firefox, Safari and now Chrome. You can rely on us and our controls to make your website as compatible as it can be.
Over the past week or so, I've been using the new Chrome browser from Google -- not because I happen to be a geek fashionista and have to have all the latest gadgets, but to check out its JavaScript interpreter performance with real-world applications, including our own ASP.NET controls.
You may not have noticed but in the arcane world of JavaScript interpreters there has been some remarkable changes in the last few months. In essence, JavaScript interpreters have been gaining just-in-time compiler features, or JITters, just like we have in .NET.
This, to me, or indeed to anyone who has dabbled in JavaScript, is nothing short of amazing. JavaScript is a dynamic, weakly-typed, interpreted (or scripting) language with some very strong functional language features, despite its name implying Java and hence being statically-typed and imperative. The thought that this "freeform" quality can be compiled, and not only that but compiled just in time, seems contradictory and reeking of magic.
Luckily, not everyone is as gobsmacked as me, and they have been working hard to improve the performance of interpreting and executing JavaScript code. After all, most of the Web 2.0 sites out there are heavily using some form of AJAX, where the "J" stands for JavaScript, so a simple way to improve everyone's website performance is to improve the execution of the code.
There have been quite a few developments in the JavaScript interpreter space, notably Mozilla's TraceMonkey for Firefox and Google's V8 for Chrome.
The interesting thing about these new interpreters is that they all blow IE's interpreters out of the water. And that includes IE8 beta, as well as IE7. It's becoming clear that if you want superior JavaScript performance for your web apps, you need to specify ABIE (Anything But Internet Explorer).
Currently there are two main JavaScript execution benchmarks: SunSpider (WebKit's benchmark for pure JavaScript, that is, no DOM processing), and the Google Chrome benchmark (again for pure JavaScript with no DOM). The difference between them is that the Chrome benchmark is very recursion intensive, something that TraceMonkey cannot do well at this stage. In Google's defense, using the DOM for anything intensive is going to be very recursion-oriented. (There is a new benchmark being developed that mixes in DOM processing as well, Dromaeo, but it's in its early days yet.) All the beta JIT engines perform extremely well with these benchmarks, with V8 doing best at Google's own benchmark. IE is a no-show in some of the results since its interpreter has a tendency to crash with some benchmarks.
I do note that the fastest JavaScript interpreters/JITters are still in beta, but the whole area certainly looks very promising. I must admit that I hope the JavaScript execution engines become standalone, so that you can use your favorite browser and plug-in the engine you prefer. Perhaps a vain hope, but a hope nevertheless.
As for Chrome, and speaking as a user, it certainly seems to be very responsive on AJAX-abundant websites, including our own website, demo pages, and community site. The rendering speed is excellent too.
I look forward to yet more improvements in JavaScript performance. Certainly, as Scott Hanselman acknowledged, Silverlight may not yet have the upper hand in web applications.
iMouseDriver Struct Reference
[Event handling]
Generic Mouse Driver.
#include <iutil/csinput.h>
Detailed Description
Generic Mouse Driver.
The mouse driver listens for mouse-related events from the event queue and records state information about recent events. It is responsible for synthesizing double-click events when it detects that two mouse-down events have occurred for the same mouse button within a short interval. Mouse button numbers start at 0. The left mouse button is 0, the right is 1, the middle 2, and so on. Typically, one instance of this object is available from the shared-object registry (iObjectRegistry) under the name "crystalspace.driver.input.generic.mouse".
Main creators of instances implementing this interface:
Main ways to get pointers to this interface:
Definition at line 185 of file csinput.h.
Member Function Documentation
Call this to add a 'mouse button down/up' event to queue.
Button numbers start at zero.
Implemented.
I don't understand why most C/C++ programmers hate Java.
Java is actually a son of C, isn't it? With their combined strengths we can make unbeatable applications. Am I right?
best regards,
Chakra
To be fair, I don't know Java very well. But a lot of programmers think "the language I know is the best, all others are sh!t". This is similar to the way people justify any other choice. A lot of people in the UK think that either Ford or Vauxhall is the only car you could ever own - because they have always been buying one of those two. I personally think that both brands produce some excellent cars, and some that are not so great.
Programming languages are often good at some things and not so good at other things.
Java uses a bit more CPU to do the same work as a C or C++ program, particularly for non-library code (library code that is found to be really inefficient is often translated to C++ in the implementation).
Both Java and C++ have similar concepts, but Java normally compiles to bytecode, rather than pure machine code. This is often less efficient, but you can also have Just-In-Time compilation that allows the code to be translated to pure machine code for that machine.
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
Excellent explanation.

"I don't understand why most C/C++ programmers hate Java."
Java has its place and although I've never used it I may in the future. However I would not make the claim, as some have tried, that Java is just as good at C/C++ for games. That is a load of hogwash. It's that type of mindset I think that sets people off on the wrong foot when it comes to this or that language.
I think language debates are useless b/c if my company uses language A or B then I'm going to be using language A or B if I want to eat. I may prefer language C (no pun intended) but at work I will use A or B.
And now back on track.
I highly recommend Blender for creating 3D models, but be warned it is not a standard GUI. In fact it's a nightmare GUI for a very powerful program that is surprisingly easy to use and get good at. Blender is not nearly as good as 3D Studio Max, but if you don't have a couple grand to spend on 3DS, Blender will do just fine. As has been mentioned, Milkshape is also a good tool for modelling, and I'll mention another called Wings 3D. I believe we have links to free 3D modelling software packages in one of our stickies.
Math is important in 3D and 2D but not as important as it used to be, since most of it is done for you. It is still important to understand what is going on, albeit not as important to know how to derive the formulas you are using. No one derives a rotation matrix anymore; we just know that the Y rotation matrix works. Most of 3D is like that. IMO, the most important things to study for 2D or 3D are vector mathematics and linear algebra. Calculus is also handy, but again most of the hardcore calculus derivations have already been integrated into functions in just about every 3D and physics API available.
I appreciate your thoughts about programming languages.
Thanks for the information about maths and 3D software.
Matrix and vector math are interesting to learn, and I know them to some extent.
But do I need to know trigonometry too? I doubt it, since you said most of the mathematics
is done for us.
best regards,
Chakra
Trig is important but if you remember this you will be ok ("Oscar Had A Hold On Arthur"):

Sin: Oscar Had (Opposite / Hypotenuse)
Cos: A Hold (Adjacent / Hypotenuse)
Tan: On Arthur (Opposite / Adjacent)

Arccos(x) = the angle whose cosine is x
Arcsin(x) = the angle whose sine is x
Arctan(x) = the angle whose tan is x

The reciprocal ratios (not to be confused with the inverse functions above) are:

Csc = 1/sin (H/O)
Sec = 1/cos (H/A)
Cot = 1/tan (A/O)
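These ratios are easy to sanity-check numerically. Here is a small Python sketch (the thread is about C++, but the math is identical), using an assumed 3-4-5 right triangle:

```python
import math

# A 3-4-5 right triangle: opposite = 3, adjacent = 4, hypotenuse = 5.
opp, adj, hyp = 3.0, 4.0, 5.0
theta = math.atan2(opp, adj)  # the angle whose tan is opp/adj

# SOH-CAH-TOA ("Oscar Had A Hold On Arthur"):
assert math.isclose(math.sin(theta), opp / hyp)  # Sin = O/H
assert math.isclose(math.cos(theta), adj / hyp)  # Cos = A/H
assert math.isclose(math.tan(theta), opp / adj)  # Tan = O/A

# The inverse functions recover the angle from each ratio:
assert math.isclose(math.asin(opp / hyp), theta)
assert math.isclose(math.acos(adj / hyp), theta)

print("theta =", round(math.degrees(theta), 2), "degrees")
```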
But more importantly these are important to know:
Vector dot product
- V1 dot V2 = (v1.x * v2.x) + (v1.y * v2.y) + (v1.z * v2.z)
- If dot < 0 the angle between the vectors is greater than 90 degrees (they point away from each other)
- If dot > 0 the angle is less than 90 degrees (they point roughly the same way)
- If dot == 0 then the vectors are perpendicular to each other
- The dot product is equal to the cosine of the angle between vectors v1 and v2, provided that v1 and v2 are unit vectors.
- Used for culling, physics calculations and just about everything you can imagine in 3D graphics. The dot product is one of the most important functions in 3D graphics. It can make a completely inefficient application very efficient when used correctly.
- The dot product between the light incident vector and the normal of the triangle determines the brightness/darkness of the triangle surface. Used for cosine lighting equations, also known as diffuse lighting.
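The dot-product properties above can be verified in a few lines; here is a minimal pure-Python sketch (the thread's examples are C++/Direct3D, but the math is language-neutral):

```python
import math

def dot(a, b):
    # (v1.x * v2.x) + (v1.y * v2.y) + (v1.z * v2.z)
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(v):
    length = math.sqrt(dot(v, v))
    return (v[0]/length, v[1]/length, v[2]/length)

# Perpendicular vectors give a zero dot product:
assert dot((1, 0, 0), (0, 1, 0)) == 0

# For unit vectors, the dot product is the cosine of the angle between them:
a = normalize((1, 1, 0))
b = (1, 0, 0)
angle = math.degrees(math.acos(dot(a, b)))
print("angle:", round(angle, 1))  # 45.0
```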
Vector cross product of vectors v1 and v2
- v.x = v1->y * v2->z - v1->z * v2->y;
- v.y = v1->z * v2->x - v1->x * v2->z;
- v.z = v1->x * v2->y - v1->y * v2->x;
- Cross product returns a vector perpendicular to v1 and v2. v1 and v2 should be unit vectors or the resulting vector v can be normalized after the cross product is performed.
- Used to compute normals and to orient objects in 3D space
- Also used to compute different force vectors in physics
- The cross product of two unit vectors points along the normal (normalize it before use), which is very useful in lighting equations, collision detection, and collision response (angular response)
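Likewise for the cross product, a quick Python check that the result really is perpendicular to both inputs (again just a sketch of the math, not the D3DX API):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v1 = (1, 0, 0)
v2 = (0, 1, 0)
n = cross(v1, v2)

assert n == (0, 0, 1)                        # right-handed perpendicular axis
assert dot(n, v1) == 0 and dot(n, v2) == 0   # perpendicular to both inputs
print("normal:", n)
```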
In Direct3D:
Code:
...
D3DXVECTOR3 v1(1.0f,0.0f,0.0f);
D3DXVECTOR3 v2(0.0f,1.0f,0.0f);

//Dot product
float dot = D3DXVec3Dot(&v1,&v2);

//Cross product
D3DXVECTOR3 vecNormal;
D3DXVec3Cross(&vecNormal,&v1,&v2);
...

Example of cross product for rotating an object to face another object via an arbitrary axis:
This code will give you the final right, up and look vectors needed to face the target object. The game would then linearly interpolate over time between the current right, up, and look vectors of the object and the ones just computed.

Code:
struct OrientationVectors
{
    D3DXVECTOR3 vecRight;
    D3DXVECTOR3 vecUp;
    D3DXVECTOR3 vecLook;
    D3DXVECTOR3 vecPos;
};

OrientationVectors computeRotateToUpRightLook(const OrientationVectors &inVectors, D3DXVECTOR3 vecTargetPos)
{
    OrientationVectors returnVectors = inVectors;

    //Create vector to target object and normalize
    D3DXVECTOR3 toTarget = vecTargetPos - inVectors.vecPos;
    D3DXVec3Normalize(&toTarget, &toTarget);

    //Compute cosine of angle of rotation
    D3DXVECTOR3 normLook;
    D3DXVec3Normalize(&normLook, &inVectors.vecLook);
    float cosine_angle = D3DXVec3Dot(&normLook, &toTarget);

    //Compute rotation axis
    D3DXVECTOR3 vecRotationAxis;
    D3DXVec3Cross(&vecRotationAxis, &normLook, &toTarget);

    //Create axis-angle rotation matrix for rotation
    D3DXMATRIX matRot;
    D3DXMatrixRotationAxis(&matRot, &vecRotationAxis, acosf(cosine_angle));

    //Transform orientation vectors by computed rotation matrix
    D3DXVec3TransformCoord(&returnVectors.vecRight, &inVectors.vecRight, &matRot);
    D3DXVec3TransformCoord(&returnVectors.vecUp, &inVectors.vecUp, &matRot);
    D3DXVec3TransformCoord(&returnVectors.vecLook, &inVectors.vecLook, &matRot);

    return returnVectors;
}
So as you see most of the math is done for you and as long as you have a basic idea of what is going on you should be ok.
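Under the hood, the D3DXMatrixRotationAxis call in the snippet above performs an axis-angle rotation. As a rough pure-Python sketch of the same rotate-to-face idea (Rodrigues' rotation formula, not the actual D3DX implementation):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate_about_axis(v, k, theta):
    """Rodrigues' formula: rotate v around the unit axis k by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    kxv = cross(k, v)
    kdv = dot(k, v)
    return tuple(v[i]*c + kxv[i]*s + k[i]*kdv*(1 - c) for i in range(3))

look = (1.0, 0.0, 0.0)
to_target = (0.0, 1.0, 0.0)

axis = cross(look, to_target)            # rotation axis, as in the D3DX snippet
theta = math.acos(dot(look, to_target))  # angle between the two unit vectors
# Note: for two perpendicular unit vectors the cross product is already unit
# length; in the general case the axis should be normalized first.

new_look = rotate_about_axis(look, axis, theta)
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(new_look, to_target))
print("rotated look:", tuple(round(x, 6) for x in new_look))
```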
Last edited by VirtualAce; 09-11-2008 at 01:09 AM.
I was in the boat of disliking java before I started Uni... I'm doing SE and they teach you java from day 1, and you pretty much use it until the end of your degree (and others!).
Mainly because they used to use C up until 2 or so years ago, which isn't very good if you're trying to teach OO design with C (in fact they don't really even teach classical SE in much depth at my uni anymore...).
Now after having used Java (by force too!) for almost an entire year, day in day out... I'd say it's rather nice. I'm giving a simple (read: simple) 3D engine a go in Java -- I already did one in C. So far, my research has led me to this: the main bottleneck will be calling JNI (Java Native Interface) methods, such as the OpenGL binding, which is a big downside since this will be done a lot.
If I knew C++, I probably wouldn't be using Java -- I can however read C++. Not sure how well I could write it... as far as templates, namespaces, op overloading etc goes (hard if you don't know the exact syntax). I'm hoping that keeping my hand in C and Java will mean I should be able to pick up C++ fairly easy, while holding onto my knowledge of C & Java...
Inshort, my rant was: I don't hate Java, I like it infact. Sure it's got it's downfalls, and is a bit too managed at times (there are no class destructors for example). But my goodness, it's sometimes so fast and fun to not worry about memory allocation/deallocation!
A list of reasons why Java will perhaps never be "better" than C++ for games:
* Late binding (at runtime) = slow
* JNI calls are always going to be slower than C++ calls to libraries/DLLs
* The Java compiler (at least Sun's compiler) is poor at optimizing in comparison to, say, gcc/g++
But it does offer JIT compiling, so it can sometimes be faster than C++ in (very) limited circumstances.
Oh my, Bubba, you are a monster, but the good kind.
Your last reply is worth $100,000; I am really ecstatic.
Thanks for your reply, zacs.
Python has enumerate(). It provides all iterables with the same advantage that iteritems() affords to dictionaries -- a compact, readable, reliable index notation.
enumerate() is an iterator; it only produces the index int values on the fly; it does not produce them all up front.
You can try to read the enumobject.c source code, but it basically can be translated to Python like this:
def enumerate(iterable, start=0):
    count = start
    for elem in iterable:
        yield count, elem
        count += 1
The yield keyword makes this a generator function, and you need to loop over the generator (or call next() on it) to advance the function to produce data, one yield call at a time.
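A quick check that the hand-rolled generator above behaves like the builtin (renamed here to avoid shadowing it):

```python
def my_enumerate(iterable, start=0):
    count = start
    for elem in iterable:
        yield count, elem
        count += 1

print(list(my_enumerate("abc")))  # [(0, 'a'), (1, 'b'), (2, 'c')]
assert list(my_enumerate("abc")) == list(enumerate("abc"))
assert list(my_enumerate("abc", start=1)) == [(1, 'a'), (2, 'b'), (3, 'c')]
```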
Python also interns int values: all values between -5 and 256 (inclusive) are singletons, so the above code doesn't even produce new int objects until you reach 257.
score:24
Place this in the constructor:
if (window.performance) {
  if (performance.navigation.type == 1) {
    alert("This page is reloaded");
  } else {
    alert("This page is not reloaded");
  }
}
It will work; please see this example on StackBlitz.
score:4
If you are using either Redux or the Context API then it's quite easy. You can check the Redux or Context state variables. When the user refreshes the page it resets the Context or Redux state and you have to set them manually again. So if they are not set, or are equal to the initial value which you have given, then you can assume that the page was refreshed.
score:9
It is actually quite straightforward; this will add the default alert whenever you reload your page.
Functional Component
useEffect(() => {
  window.onbeforeunload = function () {
    return true;
  };
  return () => {
    window.onbeforeunload = null;
  };
}, []);
Class Component
componentDidMount() {
  window.onbeforeunload = function () {
    return true;
  };
}

componentWillUnmount() {
  window.onbeforeunload = null;
}
You can add a check so that the alert is only registered when the condition is true.
Functional Component
useEffect(() => {
  if (condition) {
    window.onbeforeunload = function () {
      return true;
    };
  }
  return () => {
    window.onbeforeunload = null;
  };
}, [condition]);
Class Component
componentDidMount() {
  if (condition) {
    window.onbeforeunload = function () {
      return true;
    };
  }
}

componentWillUnmount() {
  window.onbeforeunload = null;
}
score:10
Unfortunately the currently accepted answer can no longer be considered acceptable, since performance.navigation.type is deprecated.
The newest API for that is experimental ATM. As a workaround, I can only suggest saving some value in the Redux (or whatever you use) store to indicate the state after a reload, and on the first route change updating it to indicate that the route changed for a reason other than a refresh.
score:11
Your code seems to be working just fine; your alert won't work because you aren't stopping the refresh. If you console.log('hello'), the output is shown.
UPDATE ---
This should stop the user refreshing but it depends on what you want to happen.
componentDidMount() {
  window.onbeforeunload = function () {
    this.onUnload();
    return "";
  }.bind(this);
}
score:21
If you're using the React Hook useEffect, you can put the changes below in your component. It worked for me.
useEffect(() => {
  window.addEventListener("beforeunload", alertUser);
  return () => {
    window.removeEventListener("beforeunload", alertUser);
  };
}, []);

const alertUser = (e) => {
  e.preventDefault();
  e.returnValue = "";
};
Source: stackoverflow.com
Objective
This article will explain how to create an XML tree using the Functional Construction method of LINQ to XML.
What is Functional Construction?
Functional Construction is the ability to create an XML tree in a single statement. LINQ to XML is used to create the XML tree in a single statement.
The features of LINQ to XML that enable functional construction are as follows:
- The XElement class constructor takes various types of arguments.
- For a child element, it takes another XElement as an argument.
- For an attribute of an element, it takes an XAttribute as an argument.
- For the text content of an element, it takes a simple string as an argument.
- For complex content, it takes a parameter of type array of Object.
- If an object implements IEnumerable(T), then the collection is enumerated. If the collection contains XElement or XAttribute objects, then the result of a LINQ query can be passed as a parameter to the XElement constructor.
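The idea behind these features can be sketched in another language too: one expression building the whole tree, with an iterable of children standing in for a LINQ query result. Here is a rough Python analogy over the standard xml.etree module; the elem() helper and the tag names are illustrative assumptions, not part of LINQ to XML:

```python
from xml.etree.ElementTree import Element, tostring

def elem(tag, *content, **attrs):
    """Build an element from attributes, text and nested children in one call."""
    e = Element(tag, {k: str(v) for k, v in attrs.items()})
    for item in content:
        if isinstance(item, Element):
            e.append(item)
        elif isinstance(item, (str, int)):
            e.text = (e.text or "") + str(item)
        else:
            e.extend(item)  # an iterable of elements, like a LINQ query result
    return e

# The whole tree in a single statement:
root = elem("Root",
            elem("Child", 1),
            elem("Child", 2),
            (elem("Child", n) for n in (3, 4)))  # query-style children

print(tostring(root).decode())
# <Root><Child>1</Child><Child>2</Child><Child>3</Child><Child>4</Child></Root>
```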
Sample #1
Here we are creating an XML tree by passing XElement objects as child elements and an XAttribute as an attribute of one of the elements.
Please do not forget to add the System.Xml.Linq namespace. The output in the console may look like this:
Sample # 2
Now we will try to use feature #3 (discussed above): if an object implements IEnumerable(T), we can pass the result of a LINQ query as a parameter.
So let us say that, in the XML tree above (created in Sample #1), we are retrieving the child elements with content 3 and 4 and passing the result as a parameter of the XElement constructor to create a new XML tree.
If you look at the code for the second XML tree, we are passing a LINQ query result as a parameter of XElement. We get the expected output as below.
Conclusion
In this article, I explained the functional construction way of creating an XML tree. Thanks for reading.
The out-of-the-box ASP.NET migration tool in Visual Studio 2005 is the Web Site Project, which is based on the dynamic compilation model. Microsoft learned shortly after the release of VS 2005 that the Web Site project required some manual code fixes and was a bit cumbersome for large Web projects.
To that end, Microsoft released the Web Application Project in May 2006. This is based on the client project model, and it compiles into a single assembly while allowing multiple configurations, said Omar Khan, group program manager for Visual Studio Web Tools.
The Web Application Project, or WAP, was included in Visual Studio 2005 Service Pack 1. Microsoft's Visual Studio 2005 Web Application Projects page provides information on what developers should do if they have not installed VS 2005 SP 1 or still have the separate WAP add-on.
Though developers could update a Web application's script map to ASP.NET 2.0 and still develop using Visual Studio 2003, Khan said Microsoft recommends migrating using WAP. "We definitely feel this is the path of least resistance, and it gives you all the benefits of ASP.NET 2.0 within the IDE," he continued.
At Tech Ed 2006, Khan provided a five-step process for migrating Web projects from ASP.NET 1.x to ASP.NET 2.0. If a project is error-free in Visual Studio 2003, then a developer can migrate in less than 30 minutes with "very little manual steps required," Khan said.
Make sure the Web Application Project is available. As stated, this was included in Visual Studio 2005 Service Pack 1.
Validate the application in Visual Studio 2003. Open the application in VS 2003, perform a build and validate all projects and page functions. Khan also suggested running the app in the browser to check for errors in ASPX files.This step should solve a common problem on the MSDN forums, where many developers report issues with a migration, only to discover that the cause is, in fact, a flaw in the original code.
Open Visual Studio 2005 and launch the Conversion Wizard. It's best to migrate an entire solution at once, but if you plan to do individual projects, then start with "bottom of the food chain" items like class code and shared libraries, Khan said. Once the wizard has run its course, perform a build.
Errors that may need fixing at this point include in-line VB code in ASPX pages and name collisions. The latter occur when VS 2003 types don't quite fit into the corresponding default namespace in Visual Studio 2005, and they can be remedied by fully qualifying the namespace, Khan said. Once errors have been cleaned up, run the application and validate its functionality.
4. Convert to partial classes. One of Visual Studio 2005's many new features is partial classes, which separate a developer's code-behind file from the designer file that Visual Studio creates. (These files come together at compile time.) To convert the pages in your application to partial classes, right-click on the root node of a Web project and select "Convert to Web Application." This will move generated code into a designer.cs or designer.vb file, depending on the language you are using.
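The resulting split looks roughly like this. This is a sketch only; the page, control, and file names are illustrative, not taken from the article:

```csharp
// Default.aspx.cs -- the developer's code-behind, edited by hand.
using System;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        GreetingLabel.Text = "Migrated to ASP.NET 2.0";
    }
}

// Default.aspx.designer.cs -- generated by "Convert to Web Application".
// It holds the control declarations that used to sit in the code-behind,
// and Visual Studio may regenerate it, so hand edits here can be lost.
public partial class _Default
{
    protected global::System.Web.UI.WebControls.Label GreetingLabel;
}
```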
As in Step 3, the next task is performing a build and correcting the errors that arise. In this case, Khan said, the error is likely to be a missing control declaration. This can be added to either the designer file or the code-behind file; Khan recommended the code-behind file, since it changes only at a developer's discretion.
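A sketch of the fix for a missing control declaration, using a hypothetical control name:

```csharp
// Re-declare the missing control as a protected field. Per Khan's advice,
// it goes in the code-behind file, which changes only at the developer's
// discretion, rather than in the regenerated designer file.
public partial class _Default : System.Web.UI.Page
{
    // Matches <asp:Button ID="SubmitButton" runat="server" />
    // in the corresponding .aspx markup.
    protected System.Web.UI.WebControls.Button SubmitButton;
}
```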
5. Fix up XHTML errors, if desired. In VS 2003, the default validation for a Web application is the IE 6 schema. Developers can leave validation for a migrated application on IE 6 or, through the Tools option, change the validation to XHTML, open the app's individual pages and correct any errors. Khan cautioned that this can be a lengthy process.
As an aside, Khan offered developers one additional consideration. Some types have been deprecated in Visual Studio 2005, meaning the types are fully supported now but will not be in future versions of Visual Studio. During a migration from ASP.NET 1.x to 2.0, a developer may see a set of errors for deprecated classes and recommendations for alternate classes. A fix is not necessary for ASP.NET 2.0, but, Khan said, "you'll definitely run into a point where it won't compile."
User talk:PuppyOnTheRadio/Archive 3
From Uncyclopedia, the content-free encyclopedia
Mastubration
Is a venial sin, I think, but only if you do it to yourself. --Pleb SYNDROME CUN medicate (butt poop!!!!) 23:10, October 7, 2009 (UTC)
- I thought it was a manual sin. Pup t 23:23, 7/10/2009
- Although I've now been considering it... and you're relating to the shortness of the page aren't you? After all Mastubration comes from Ma + stub + ratio. I obviously have created the mother of all stubs by this ratio! Pup t 23:31, 7/10/2009
- I was going to raep you as tradition dictates, but chickened out and gave you a hand job instead. Also, that stub thing. --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:04, October 8, 2009 (UTC)
- You gave me a stub job? Pup t 02:06, 8/10/2009
- Okay, you admitted it's a stub, not me. --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:18, October 8, 2009 (UTC)
- lolfags. Orian57 Talk 02:45 8 October 2009
- Size isn't important Pup t 09:19, 8/10/2009
- But I am Hung like a fucking BULL!:46 8 October 2009
- Mastubration is a "Manual sin"? Why blame a Mexican dude??? Thank him!!--Funnybony 22:12, October 9, 2009 (UTC)
Baby Boomers is back in black
Dude! I got a very useful Pee on Baby Boomers, worked my ass off, and did everything advised, plus more. If it didn't get featured last time then it should fly now. Could you take a look, and if it looks much better, or feature worthy, could you renom Baby Boomers again? That would be progress. thanks, really!--Funnybony 22:08, October 9, 2009 (UTC)
Goodbye Facebook screen
How's the audio coming? • • • Necropaxx (T) {~} Saturday, 06:21, Oct 10
- Ummm... exceedingly slowly, given I have sound issues with my PC. I'll have to download karaoke version of the backing I think.Pup t 23:32, 10/10/2009
- Where the heck did my Goodbye Facebook screen Nomination go?--Funnybony 17:42, October 10, 2009 (UTC)
- How are you going to make the Audio? Record the song? How about that?--Funnybony 14:14, October 11, 2009 (UTC)
Unmessifying
- It's cold but my cock is still huge!:45 10 October 2009
- It's good to hear. Maybe you should put it somewhere warm. National coming out day is tomorrow where we are supposed to have homosexual, lesbian, bisexual and transgender equality. Which sucks, because I really don't want to lower myself to a hetero level.Pup t 23:32, 10/10/2009
- What I don't get is why we're constantly balled in with the dykes and trannies. I mean dykes are just ugly women that don't like men because their dad was a jerk and the transexuality isn't a sexuality it's because you're brain doesn't match that gash between your legs. It's crazy! Also woot! I'm here/ I'm queer/ and I'm gonna rape your ass! like, with no lube. And not your's specifically, you'd enjoy:39 10 October 2009
- Apparently transgender is something that you come out for. I would have thought the fact that you had changed for ne gender to the other would have been obvious to your nearest and dearest... And this thread is getting really messy. Pup t 00:24, 11/10/2009
- So's your face!:31 11 October 2009
- ♩♪Cream dealer, I believe you can get me through the night!♬♫ For those too young to know trashy music of years gone by, the original was dream weaver. Pup t 01:00, 11/10/2009
- Yeah wow so are you gonna come out of your closet tomorrow and just stop kidding yoruself, with the wife and gay children -- Gay Children! That should tell you something about your cut off genes. Not that it's generic obviously.:40 11 October 2009
- Ummm... I'm as about as out can be without having a tattoo across my forehead that says "I'm anyones!" Pup t 01:46, 11/10/2009
- You haven't even got the tattoo yet? They're mandatory aren't they? You've been living a lie.:59 11 October 2009
- I have the tattoo, just not across my head! Pup t 02:01, 11/10/2009
- Love the new sig darling! Not at all similar to what I did before you! Faggot1 Have Fun! MuCal. BFF Sir Orian57!Talk!PEE!Read!UnProvise!Awards! 21:18 11 October 2009
UN:HS
Did you know about this? It's a place where you can track your features and compare yourself to others in order to compel yourself to do better (or just end it all because you'll never amount to anything). I added your current features. Also: YOU'RE GAY?!?! • • • Necropaxx (T) {~} Sunday, 07:30, Oct 11
- I had seen it before at some stage along the line. And as for the second... why are you asking, and what's in it for me? Pup t 08:25, 11/10/2009
- But... but... hot psych majors? I assumed you were talking about hot female psych majors! I was, anyway. • • • Necropaxx (T) {~} Sunday, 22:41, Oct 11
- Hey, better be careful you might catch gay. Have Fun! MuCal. BFF Sir Orian57!Talk!PEE!Read!UnProvise!Awards! 23:01 11 October 2009
- Actually Orian, being that you're the only guy here I know who is openly gay, and I'm the only guy here I know who is openly a promiscuous slut, it could be you and I catching straight. Quickly, what's your opinion on Boobs?Pup t 00:14, 12/10/2009
- Ugh, don't be so stupid. Everyone knows being straight is genetic. Have Fun! MuCal. BFF Sir Orian57!Talk!PEE!Read!UnProvise!Awards! 00:18 12 October 2009
- That's right. Being straight is genetic, being gay is genetic, but being bi is just queer! Pup t 00:32, 12/10/2009
- No being queer is queer, being bi is a mental disorder. You should be locked up before you give everyone crabs. Fucking sluts. Have Fun! MuCal. BFF Sir Orian57!Talk!PEE!Read!UnProvise!Awards! 01:48 12 October 2009
- There are a lot of very similar looking sigs on my talk page all of a sudden... Pup t 01:58, 12/10/2009
- Actually, Methamphetamine! is openly bisexual, but he doesn't come on here that often. • • • Necropaxx (T) {~} Monday, 03:50, Oct 12
- Methy is? There you go, we obviously do need that register of everyone's sexual preference that we were discussing earlier. Pup t 04:35, 12/10/2009
If anyone cares, the following users are known to like pregnancy erotica: OptyC (maybe), Socky, User:CheddarBBQ, POTR, and myself. --Mn-z 14:27, October 12, 2009 (UTC)
Thanks for Vandalising?
Thanks for vandalising my user page--no wait, should I be thanking you for that? (Seriously, thanks). WHY???PuppyOnTheRadio 01:01, October 12, 2009 (UTC)
- It's cool. Personally I don't really like templates, but at least they are neater.
- On an unrelated note, I came across the Poo Lit competition today for the first time, and for the life of me I cannot think of anything to write. If only I hadn't gotten PEE on myface. Pup t 01:51, 12/10/2009
PLS
Your article is supposed to remain in user space until after judging. You can fix this any way you want, but I'd suggest moving it back and adding the redirect (just the redirect) to QVFD with a note that the article's for PLS. I don't mean to be nosy or anything but I just happened to notice this and I don't want to see you (not you personally, you generally) disqualified. --monika 03:50, October 12, 2009 (UTC)
- That sounds like it takes energy... ah well, if it gets disqualified on a technical note then it gets disqualified. I hate leaving stuff in my user space. Pup t 04:12, 12/10/2009
- Yeah, but this one, like, has a chance, and those are, like, the rules. --monika 04:42, October 12, 2009 (UTC)
- And... well, if there was money riding on the outcome, then I'd stress, but in this situation I don't see why I would. I write what I write because I enjoy the writing. Outside of the Uncyclopedia community I doubt anyone will be too concerned, and within the community... people will see me as they always have whether I win or not. I've entered simply because it's a competition that I have written one article that fits into the criterion, and it encourages other's to write better as well. I'm all for improving the overall standard of writing and to keep this growing. Pup t 04:46, 12/10/2009
- It's up to you, but I'm a little sadder than I was a second ago. Which is strange because I just took all my antipsychotics for the night so I should be happier than I was a second ago. (On a slightly different node now that I'm in weird confessional mode after having drugged up, it may seem like I am a prizewhore for only really writing articles during PLS - the real reason is I know I generally take a median five months to write an article unless there's a deadline.) Anyway, yeah, okay. Okay. Of course, the best solution would be to change the rules so at the very least we don't have to wait so long for results and stuff. --monika 04:54, October 12, 2009 (UTC)
Some Clarification?
When you said what you said near the end of that discussion on Mordillo's talk page, was that directed at me? Sorry if it seems like I'm not doing much with the article if it was, I'm just out of ideas of what to do. Also, I thought what I said counted pretty okay. D: -- Hanyouman 04:25, October 12, 2009 (UTC)
- I was directing it at no-one in particular. Just thinking that it was all a storm in a teacup. Pup t 04:33, 12/10/2009
More clarificationz
That thing you said on Modus's talk page, are you coming out of the closet again? --Mn-z 04:48, October 12, 2009 (UTC)
- Am I in a closet? Pup t 04:58, 12/10/2009
- maybe. --Mn-z 05:00, October 12, 2009 (UTC)
- Oh... what did I say on Modus' page? Pup t 05:02, 12/10/2009
- "Not everyone dislikes it." In reference to that one category --Mn-z 05:04, October 12, 2009 (UTC)
- Oh that! I didn't even realise that there was a closet to come out of there. I like men and women as long as they're attractive. I don't really like midgets though. Pup t 05:09, 12/10/2009
- That issue is very Serious Business. --Mn-z 05:21, October 12, 2009 (UTC)
- Puppy, you don't like toy breeds? Or you just don't like teacup bitches? WHY???PuppyOnTheRadio 05:43, October 12, 2009 (UTC)
- Oh fuck you're not a preggo fucker are you? Bisexuality I can understand but that? Have Fun! MuCal. BFF Sir Orian57!Talk!PEE!Read!UnProvise!Awards! 06:11 12 October 2009
- It seems like your kind is being outnumbered. --Mn-z 14:21, October 12, 2009 (UTC)
- I think I'd better clarify. I like erotica, and I like sex. I'm very choosy, but not about gender or if somebody is pregnant. I will turn and look at a yummy mummy as quickly as I would the coke delivery guy. (Who unfortunately in my office is my ugly female team leader - bleargh!) In short, I'm a slut with standards! (Put it on my tombstone. But I'm not into passive necrophilia!) Pup t 23:21, 12/10/2009
steal Bank Pee Review
I Pee Reviewed UnScripts:steal Bank Customer Service training video based on version 4146042 Revision as of 02:13, October 8, 2009. I hope my review helps! (I'll probably try your template next time I do one of these). WHY???PuppyOnTheRadio 02:22, October 13, 2009 (UTC)
Please change your signature
I've been having a weird problem with this site for the last couple days. When I try to click on a link in a discussion page, the link keeps jumping away from my cursor, then when I move the cursor, it jumps back. Sometimes I have to really work to click on a user link. Sometimes I can't get it to work at all, and have to type in their user name. I didn't know if it was a problem with the website, my computer, my connection, etc. But I just realized what it is--it's your signature. When I move my cursor over the area of a discussion, the longer version of your sig pops out (I guess this is some sort of a mouse over thing). Then when I try to move it again, the sig suddenly goes back to the short version. It's been irritating me, but I didn't know what it was until now. I don't imagine you intended this to be a problem (and I think someone else has a similar signature.) But could you fix it please? Thanks. WHY???PuppyOnTheRadio 05:22, October 13, 2009 (UTC)
- Okay... of course this means that I just have to work on a more clever and offensive sig going forward. Pup t 05:39, 13/10/2009
- You could wrap the thing in a div that's got a set width as large as it ever gets. This way, nothing outside the div changes, and since sigs go at ends of lines, nothing would look all that different. --monika 05:47, October 13, 2009 (UTC)
- Whatever one or both of you come up with to fix it would be very helpful to me, and likely to others who've been having the same problem. I caused a problem with an early version of my sig that someone said was code spewing, but I got help to fix it. Again, thanks! WHY???PuppyOnTheRadio 06:05, October 13, 2009 (UTC)
- Wrapping it in a div is a really good idea, but the other thing I would like to know is what does it actually appear like on your monitor, Why? My concern is that there is a lot there that is class="sigexpand" and if it's all expanding at once that would be fairly ugly. Pup t 06:14, 13/10/2009
- Well, you've fixed it, so I'm going on memory. The main thing I remember is when it expanded, the line would jump (that happens with others too, where the expanded version continues to another line). But with yours as soon as I moved the cursor a little, it would contract again. I think maybe it did expand all at once, but I don't remember for sure. What were we talking about again? WHY???PuppyOnTheRadio 00:22, October 14, 2009 (UTC)
UnNews:Australia says "You just don't understand our humour!"
By way of warning, don't mess with me; I'm an admin and UnNews Editor. It's not all bad, though... I loved the article.
When you had MO move the article, it caused a broken link to the Main UnNews page. You see, this article was featured as the lead article, and when it was moved, it appeared as if the article no longer existed ie. a broken link on the main page. I've moved it back to its proper place, and locked it.
I know why you moved it, but you must consider that the article does not belong to you in any way, and the creation and attachment of my audio was a separate file you've indirectly fucked with. Let me know if you'd like to do an audio for this article, and I'll listen to it and we'll go from there.
Realise the fact that "No, I don't normally sound like this. I have a cold, and an edumacation. I speak English goodly." is not a good reason for your actions., October 13, 2009 (UTC) Here's my welcoming drivel
Reverend Zim_ulator says: "There are coffee cup stains on this copy, damnit! Now that's good UnJournalism."
Welcome to UnNews, PuppyOnTheRadio, your sig is nice and all, but doesn't have a timestamp. you may want to double check that unless you've recently fixed that issue. -- Soldat Teh PWNerator (pwnt!) 18:13, Oct 13
- My sig was leaking code, which somehow buggered up the sigexpand on the timestamp. It was there but not expanding. fixed now. Thanks for pointing it out. Pup t 22:48, 13/10/2009
UnNews and on why reverend zim_ulator is a dick
Please ignore the dickish parts of my message about the move of your hilarious Ozzie article. Assumptions made on my part were proven unsupportable. I can add your audio as an alternative selection on the articles and main UnNews pages; I've done that before when there was more than one acceptable version, seems only fair. I must say, I hope you'll continue to contribute to UnNews with stuff in the same vein, or perhaps an artery Praise be unto:23, October 13, 2009 (UTC)
- It's all cool. I haven't looked at it yet today (Different time zone and I just woke) but I had left yours hidden and had my version of the audio up there. I definitely want both versions of the audio up there. To everyone outside of Australia there is a need to hear it in an Ocker accent. Anyone in Australia has to hear it in an un-Ocker accent. And with the editorial. Pup t 22:44, 13/10/2009
- Cool! I think I like being an Ocker. I agree we Merikans are just a bit uncultured. Hm... maybe I'll use the pseudonym "Dick Ocker" on UnRadio.:47, October 14, 2009 (UTC)
Is this a problem
Syndrome and I were working on Archery (actually, Syndrome was working on it; I had given up) when it unexpectedly got nommed for VFH. We made a few small fixes to it after it was nommed, and hope that didn't cause a problem. What do you think? I thought I'd ask a non-admin first, although of course one of those admin types is liable to come sneaking by. They often do. Oh, and, er, by the way, did I mention Sun Bee is also nominated for VFH? Not that I'm vote whoring or anything. WHY???PuppyOnTheRadio 05:45, October 14, 2009 (UTC)
- It's one of the vagaries of being part of this community is that there are a shite load of policies, and only about three of them written down. The main issue is that you don't change the overall content or makes significant changes. What you've done is minor, minor stuff - a couple of links added, a few words changed in order, general proof-reading. I'm not an admin, but you'd have to get an admin with an axe to grind to have any fall out from this. And I used to have something saying this was a welcome whore zone... I haven't voted for either sun bee or archery as I've been focused on getting my next article up and running, and it's a bit of a technical nightmare. However feel free to vote for Lateral Thinking or for WOTM at any stage. Pup t 05:56, 14/10/2009
- Hmm, did you just give me a subtle hint? (Actually, on WOTM I'm waiting to vote as it would be nice if there was another nominee, although I certainly think you deserve it). WHY???PuppyOnTheRadio 06:01, October 14, 2009 (UTC)
- On Lateral Thinking I see several misspellings and grammatical errors that I suspect aren't intentional. But I don't want to make several edits to someone's else's VFH without permission, even if minor, especially as some of the errors in there are apparently intentional. Would you mind if I copied this to User:Why do I need to provide this?/Lateral Thinking or to User:PuppyOnTheRadio/Lateral Thinking and made the edits there? Then you could see if you wanted it that way on mainspace. Let me know! WHY???PuppyOnTheRadio 06:11, October 14, 2009 (UTC)
- Up to you, but I'd suggest just make the amendments on the mainspace and I'll come back and veto them once you're done. (I should really start proofreading my own work before I put it up for VFH.) It's less hassle that way. (Oh, and keep in mind I do use Australian English, which is not quite UK English but closer than US English.) Pup t 06:17, 14/10/2009
- As Australia is called (in America) "the land down under," does that mean you write everything upside down? Seriously, I'd prefer to do it in user space--I've made a very obvious and very minor fix or two to a VFH, but if I make several fixes and you revert them, it will likely make me look like a bad guy. I'll check it within the next 24 hours (probably sooner). WHY???PuppyOnTheRadio 06:27, October 14, 2009 (UTC)
- Your first sentence has an unneeded comma, but when I fixed it I realized I'm not sure what you want your first sentence to say. Lateral thinking is a method of creating problems? But that doesn't fit the "by not being able to see" part. I'll work on the rest of it in the meantime (after I stop messing around with the girl I just met in some bar). WHY???PuppyOnTheRadio 22:47, October 14, 2009 (UTC)
- I've gone through it. After I hear what you want with the first sentence, I'll "finalize" it (on User:Why do I need to provide this?/Lateral Thinking anyway). Also I hope this doesn't come across wrong (like I'm on some kind of power trip) because I really like your writing. But with the fixes, I will vote for it for VFH because I think it deserves it. But I wouldn't feel comfortable voting for it to be on the front page with all those errors. Just call me Mr. Nitpicky and I won't deny it. WHY???PuppyOnTheRadio 00:19, October 15, 2009 (UTC)
- I saw your summary on the page. I hope you're doing OK. I'll do one more quick run through, then post a note here when I'm done. If you like it, feel free to copy it to mainspace, or I can do it if you want me to (note that I did a nowiki on the VFH and categories, so that will have to be taken off). And by the way, I had signed up for the proofreading service, and realized I could get credit for doing this which is why I added the tag. WHY???PuppyOnTheRadio 01:44, October 15, 2009 (UTC)
- All right, I are done did fixxing speeling and what is to fixed the grammar. WHY???PuppyOnTheRadio 02:20, October 15, 2009 (UTC)
- I is happy with it - and it displays the blindness that I have to my own work that I missed so many of those issues. I'm eating now, as well, so thanks for the kind thoughts. Feel free to put back into mainspace. Pup t 02:24, 15/10/2009
- Eat away, Uncle Puppy! I'm moving it to mainspace now and then voting for it. WHY???PuppyOnTheRadio 02:29, October 15, 2009 (UTC)
First meet with Your
Hello i'm DarkABC is your first meet to Me.if you answer to me talk to my talk Page. User:DarkABC 9:59, October 16, 2009 (UTC)
UnSignpost Sometime October 2009
Now with 20% more ninjas!:24, Oct 16
Upload Image
How to Check Image and Upload Sucess I try but Upload not work. User:DarkABC 11:36 Oct 16 2009 (UTC)
- Sorry, your English is fairly fractured. What? Pup t 05:01, 16/10/2009
UnNews audio
Please do a couple of more UnNews audios so I can nom you for:58, October 16, 2009 (UTC)
- There is an abundance of nominees, isn't there? I'll see how I go, but unfortunately I'm limited with my recording equipment as you know. (In short, this was recorded with my mobile, then e-mailed to me, downloaded onto a computer with no working sound card, converted from .wav to .mp3, copied to the memory stick, plugged into my dvd player with usb capability, and then played back through decent stereo so I could hear it properly. So if I don't get it right the first take, then it's a bastard to fix it.) Pup t 23:21, 16/10/2009
- My policy is to do them in one take, whatever happens. As for quality: you've listened to my dreadful work... what more is there to say? I appreciate your efforts, and I will forgive you if you try and write some more UnNews.:44, October 18, 2009 (UTC)
I C WAT U DID THAR WITH UR SIG
Vary vary clevar. Where the Wild Colins Are - LET THE WILD RUMPUS START! 20:02, October 16, 2009 (UTC)
Upload for some Article
can you upload File:Siam Sithflag.png,File:Map of SiamSith.jpg,File:Coat arm of Siam Sith.png for me after upload add image article name Siam Sith Empire.if you want upload file and talk to me thank you.
User:DarkABC 12:25PM Oct 17 2009 (UTC)
- Okay, I don't see how I am supposed to upload images that I don't have access to, so, what I will get you to do is this -
- upload one file to flickr or some other file sharing site.
- Post the address of the file here - the full url.
- Give me all the details that you want attached to the file.
- What I will then do is -
- Upload the file, taking screenshots of every step along the way
- Put these together in some format - don't ask me what yet.
- After you see the full process, from the point where I grab your file, to the point where I upload it, this will be the full extent of the help I will give. Beyond that I cannot hold your hand on this any further. Given that you are asking for my time in the middle of PLS, and my time is extremely valuable to me, this is being more than fair.
- If anyone else reading feels I should do more, please feel free to volunteer to do it in my place. Pup t 12:52, 17/10/2009
UnScripts:steal Bank Customer Service training video Half-assed Followup Pee Review
I done did comments on your latest version, and stuck them under my previous review. Hope it helps. Also your new signature How??? is so totally cool. Why??? 18:48, October 19, 2009 (UTC)
Also I've been subtle in a review or two before, so this time I will not be subtle. DAMNIT LET ME KNOW WHEN YOU'RE FINISHED WITH THE DAMN THING SO I CAN NOMINATE FOR VFH DAMNIT!!!!!!!!!!!!!!!!! I hope that wasn't too subtle for you. WHY???PuppyOnTheRadio 19:16, October 19, 2009 (UTC)
Also Also I intend to link the word homeless to HowTo:Be Homeless in America as soon as my user space article is moved to main space by The Committee. Unless you object. WHY???PuppyOnTheRadio 21:36, October 19, 2009 (UTC)
- No objections on the latter, and I'll make those last few changes now. I'm not 100% if I want to start with mean Greg and flick back to "nice" Greg. I'll work on it. Pup t 01:53, 20/10/2009
- On second thought, the article may flow smoother with Greg being "nice" until the ending--that might actually make Greg's sarcasm at the end less expected. I still like the idea about "no, that's not how it's done" (however you worded it). Maybe you could use that for a different article. WHY???PuppyOnTheRadio 02:16, October 20, 2009 (UTC)
- You shouldn't yell at your Uncle Puppy, Whyner. Mommy might have to spank you! DAP Dame Pleb Com. Miley Spears (talk) 02:03, October 20, 2009 (UTC)
- I was the one yelled at. Can I get a spanking to? Pup t 02:08, 20/10/2009
- Miley has to spank me first! WHY???PuppyOnTheRadio 02:13, October 20, 2009 (UTC)
- "Greg: (also bright and chirpy) Of course, please, make yourself comfortable. Not on this side, where sits my high backed, comfortable chair with the built in massager and the leather finish. Over there, on that nice plastic chair with the metal armrests [note at bottom--The ex high school surplus hard plastic chair with the cold metal armrests and that horrible pokey bit in your back that you'll only realise that is there once half of your body is painfully paralysed.] My name is Greg...."
- You can change it if you like, but of course that's up to you. (And I don't think that would be a major enough edit to be a problem even if it was after being nommed for VFH).
- I like the way you made Greg's photo match Candy's (bust shot of each--no, I don't mean boobs, I mean head and shoulders).
- I'm nomming for VFH as soon as I do the link I talked about and fix one of my editing suggestions (I forgot to mention a comma; sorry about that). WHY???PuppyOnTheRadio 02:38, October 20, 2009 (UTC)
- Also I didn't think of this before but it could use more links (again, you can add these after VFH I'm sure). WHY???PuppyOnTheRadio 02:44, October 20, 2009 (UTC)
- That's why I didn't footnoe the plastic chair bit - the footnotes as they are have this nice official ring to it all, adding that in there spoiled the flow slightly. And as for the extra links, I know that there is room for more. They can be added later though - in fact I'd rather leave is as much a blank canvas as possibe for people to link-up as they see fit. Pup t 02:46, 20/10/2009
- I see your point about the footnotes. Maybe if the plastic chair bit were written a bit more subtly. I think that having Greg obviously mean at the beginning takes away from your strong, mean ending Greg, which I like. Maybe something like:
- "Greg: (also bright and chirpy) Of course, please, make yourself comfortable. Not on this side, where sits my high backed, comfortable chair with the built-in massager and the leather finish. Over there, on that nice colorful plastic chair with the refreshingly cold metal armrests and the extra support for your spine that you can be assured will not cause half of your body to become painfully paralysed. My name is Greg...."
- Let me know on my talk page when you're ready, and I'll nom it! WHY???PuppyOnTheRadio 02:58, October 20, 2009 (UTC)
It's been nommed! WHY???PuppyOnTheRadio 03:18, October 20, 2009 (UTC)
Thanks for the Lateral Thinking thanks, but...
...your code is spewing. It's covering up the bottom of my page and I can't read those helpful copyright notices and those wonderful ads at the bottom (we can't offend our sponsors, can we?) Congrats on winning! WHY???PuppyOnTheRadio 04:06, October 20, 2009 (UTC)
Puppy's bee says
WHY???PuppyOnTheRadio 05:36, October 20, 2009 (UTC)
Your new sigmund
What's the image of? It looks vaguely... Satanic. WHAT now??? Wednesday, 00:56, Oct 21 2009
This is the way the sig should have looked except that imagemaps don't display inline - unless someone here know something that I don't - so I've settled for a click inline at the moment, and will probably split this into two different images so I can have user page and talk linked. Pup t 01:50, 21/10/2009
- The way it is above is the way I saw it, only much smaller. It's my favorite of Puppy's sigs (other than when he's imitating me, of course). WHY???PuppyOnTheRadio 02:42, October 21, 2009 (UTC)
- Oh, OK. Whoa! That is a huge mass of code. Where's the bestiality link? WHAT now??? Wednesday, 04:12, Oct 21 2009
- Found it. You're a sick thorough person, you know that? WHAT now??? Wednesday, 04:13, Oct 21 2009
- Big mass of code? Ha! Back in my day, we did all our programming with 0's and 1's. 1 0 1 0 0 1 1 0 1 0--see that? That's binary for 666! Ha, bet you didn't know that, you young whipper snappers. WHY???PuppyOnTheRadio 04:32, October 21, 2009 (UTC)
      1     0     1     0    0    1    1    0    1    0
  *   512   256   128   64   32   16   8    4    2    1
      -------------------------------------------------
  +   512 + 0   + 128 + 0  + 0  + 16 + 8  + 0  + 2  + 0
  =   666
Bugger me, he's right. Oh well, for your efforts, here's an image of me showing you the number four in binary. Pup t 04:41, 21/10/2009
- You know, I was just going to make a joke about that not being the number four until I realized you're right! And it works out whether you start with the pinky or the thumb. I literally lol'ed. WHY???PuppyOnTheRadio 04:47, October 21, 2009 (UTC)
- From now on, you'll know what I mean when I write 0 0 1. WHY???PuppyOnTheRadio 04:48, October 21, 2009 (UTC)
Thank you
Don't know Wikus? Go watch District 9 loser
--BlueSpiritGuy 18:07, October 22, 2009 (UTC)
UnSignpost
22nd 23rd October 2009
In Pure Russian Fashion, The Newspaper That Reads:16, Oct 23
Same here
Yea. me 2. Anyway, do u think u can help me with the ExOps page and make it into a "Mini-site", like Uncyclopedia Health Service? I'll pay you 50 cookies. Thnx. User:Thomasfan666
- Bloody Hellfire! It is a fairly significant ask, and I can't even begin to envisage how you would do that. I don't know. Give me a better idea of how you want it set out, and I might be able to help, but that is a huge task. And given I've already done this, the test section in this, the MSN section in this and this, I know how much work these sorts of things take. Pup t 07:07, 24/10/2009
edit Idea
I understand what u mean, and i know its a fairly signifigant task, but i want/need it to look as realistic as posible. Even though ExOps is a direct parody of Executive Outcomes, you could make it similar to, and have such sections as "Employees", "What we do", "Fill-out job request (Where do you want bombs to be dropped,etc" and so fourth. User:Thomasfan666 PS. remember the original "Mignight Club" game? they had a good soundtrack...
edit You're my kinda dog
That's cool what you did, checking out the details of Spike's Fortran article because you thought it shouldn't have been disqualified. I salute you! (I'd give you a dog bisquit, but is there a template for that?) WHY???PuppyOnTheRadio 04:46, October 24, 2009 (UTC)
- I'm establishing a power base as part of the lead up to my coup d'état. Pup t 06:09, 24/10/2009
- Is that so? I'll be placing you under house arrest. As soon as I shake off last night's hang over. ~
10:06, October 24, 2009 (UTC)
- Can I be in your coup de thingie? I want Mordillo to get all authoritarian with me too!:09 24 October 2009
- Am I getting house arrest too? If so, do I get to choose the type of house? WHY???PuppyOnTheRadio 18:13, October 24, 2009 (UTC)
- I think you just get kept in your own house. :| Orian57 Talk 18:55 24 October 2009
edit A Little Assisstance
Forgive me if this is the wrong place to put this, but I would like to ask how to post a picture.Cthulu95 20:38, October 24, 2009 (UTC)
- Sign your posts first and foremost with the tildes ~~~~
- As for pictures. use code similar to this:
[[File:Orian57SP2.jpg|200px|right|thumb|This is a caption.]]
And it comes out like this.
- the first part is the file, use the exact header from the image you want to use. then size, 200-300px is about standard, although if it's still comming out small feel free to use bigger numbers (I think that has to do with wither or not it's a JPEG or something). then location, left and center (has to be american spelling by the way, much to my constant agony). Then if you want a caption you must put thumb then a pipe (|) and then the content of desired caption. Hope I've helped.
- What he said. Pup t 00:13, 25/10/2009
- We can post pics just like that? Damn. Here I thought we first had to appease one of the elder gods by sacrificing a soul. WHY???PuppyOnTheRadio 16:56, October 26, 2009 (UTC)
edit PLS Scoring
I'll be sure to let you know when I put the comments up. Unfortunately, I'm fairly busy in life right now n shit so it might be a while. Your article was very well-written and I'm going to nominate it to the VFH. Tough competition this year I must say, but good job. --Hotadmin4u69 [TALK] 01:37 Oct 25 2009
- Congrats, Puppy! WHY???PuppyOnTheRadio 00:55, October 26, 2009 (UTC)
edit Al Pacino
The Al Pacino Academy of Shouting UnScripts:You Don't Know Me, Motherfucker
Moved the script to its own page, and hopefully reduced most of the racism. Hope you like the changes. Did the same with the Christian Bale article as well.
edit Thanks for your vote!
--Andorin Kato 17:54, October 25, 2009 (UTC)
edit Quality Control you VFH before using it for QC
- Hi there, HMV. Thanks for the info. You have some nice guys, some flaming ass holes, and some envious whackos = say, 20 ?? Man, thats sounds more like a tiny nut house than democracy. I have twice that many employees.
- One solution is people should state-admit their bias, and be not allowed to vote in those categories. The guy who always votes against your handle is doing just that. And he/she is just an crazy ass-hole who is envious of your good work - you should have some way to accuse him!!!
- Somehow you must have QUALITY CONTROL IN YOUR GROUP, RATHER THAN BY THE GROUP. Otherwise a lot of people, with much less patience than me, will say, "FUCK UNCYCLOPEDIA" after a couple bad experiences, and your tiny group will never grow. At is kind of a RUDE EXPERIENCE. Don't burn people out. Aren't you supposed to be an attractive web site, and not a repulsive one? Personally, getting a feature is NOT going to pay for my goddamn rent.--Funnybony 09:07, October 26, 2009 (UTC)
- What's with the double indent
- Also I disagree, as much as I hate against votes with skimpy reasons or that guy who invaribly votes against my "gay" stuff. Everyone is entitled to vote how they please. other wise it doesn't work. Or worse requires admins to do further
- So why do the Courts have Jury Duty screening? To weed out the ass holes with bias. VFH should have some standards. There was one article I put, Jack Bauer Facts, and in spite of it being a good article on the genre of Bauer Jokes, which are loved by millions, this one-single person hated the genre (nothing to do with the article), and on THAT FUCKING BASIS they wanted it deleted. Like, if you don't like AC/DC therefore no one can hear them!?? Fortunately this person was fair and open-minded and agreed that their personal bias should not deprive millions of their favorite humor. And so it's OK, I guess, or at least until some jerk tries to delete it again for the same reason. So consider Courtroom "Jury Duty" guidelines, and try to employ those to make VFH unbiased.--Funnybony 12:51, October 26, 2009 (UTC)
- Err, funnyboy, you need to chill out a bit here. First of all, we're not going to bury VFH with further red tape than it already has. We all need to live with the current system that is based on the sad fact that humor is in the eye of the beholder. So, while you might think your articles are excellent, others might have other opinions (off topic: I don't have any, I didn't get the chance to read any of yours, I'm generalizing here). Also, I think comparing Jury Duty (which I believe is an idiotic system anyway, but that's bedside the point) to voting on featured articles on a humor wiki is hardly a valid analogy in my honest opinion. And last, my ever repeating points - featuring is not the epitome of Uncyclopedia and your experience here should not be derived from VFH. It you think that voters are voting against you for spite, that's another issue and you're welcome to have a chat with an admin. But I've looked over some of the voting pages and it doesn't seem to be the case. You just need to relax a bit, we all have shitty days in VFH, I had a fair share myself. ~
12:57, October 26, 2009 (UTC)
- we have all had work crucified on vfh, but it only goes to make it stronger. One day i will go back and redo vitiligo, and it will be an article I can be proud of. In the meantime I'm happy to create things like the sat in Baby Boomers because I recognise that your style, while very different to mine, is very funny. And no-one here wants more to see you sparkle on VFH. It's just patience, dude. Pup 13:22, 26/10/2009
edit Also, what I came here for: You're a big cock.
And congrats on your poo lit wins and things. My last minute article came thrid, so I'm quite pleased. Faggot. Yeah I said
- As usual, you're only here for the cock. Hell, my last minute entry, Jesus Lites™ didn't even contend. So grats back to yourself. Ya horses hoof. Pup 13:16, 26/10/2009
- 0_0 What did you just call me!? Also are you aware that "User:POTR" isn't your userspace? It has to be your full name, punishing those who ridiculous long names.:15 26 October 2009
- User:POTR was a sockpuppet who got banned, which means that user space is available for me to dump stuff into - which was also why I created that 'puppet in the first place. And don't tell me you haven't heard the rhyming slang "horses hoof". Pup 01:06, 27/10/2009
- Homophobe!:01 28 October 2009
edit On A Similar Note...
...Congrats on your win in the Alt. Namespace section. And, in my opinion, you were totally screwed out of the Best Image category, too. —Unführer Guildy Ritter von Guildensternenstein 19:19, October 26, 2009 (UTC)
- Dunno, know that I think about it Sonje had a point. Orian57 Talk 21:43 26 October 2009
- I think we can all agree that I deserved to win in every category... right guys? Guys? • • • Necropaxx (T) {~} Monday, 23:38, Oct 26 2009
- Sonje had a point, and I always knew that a major hurdle that I would have to get over was whether they would accept the image as an illustration being that it was very text based. If it was best image category, I would agree. And by the same token, I think the scores on the best alt namespace don't really reflect how close that really came, but like so many other things we do here, humour is largely subjective. Pup 01:06, 27/10/2009
- Sad but true. • • • Necropaxx (T) {~} Tuesday, 03:52, Oct 27 2009
- Indeed. Maybe this is just haughty pretentious old me talking, but I thought POTR's article was by far the most original (even innovative) use of images, even if they are text-based images. Made props to Monika for doing something similarly different. —Unführer Guildy Ritter von Guildensternenstein 04:40, October 27, 2009 (UTC)
edit On a not similar note...
You're weird. • • • Necropaxx (T) {~} Tuesday, 06:46, Oct 27 2009
- I'm wired? Like wired for sound? Pup 07:16, 27/10/2009
edit UnNews:Man who claimed to have found God arrested for wasting police time
I'll date it today Oct 28, and do a couple of minor edits for housekeeping; UnNews stuff which I like to:35, October 28, 2009 (UTC)
edit QVFD
Thanks for posting the cyberbullying articles on the QVFD. However, in the future, if you could refrain from putting that little note on the pages and simply leave them be, that would be much appreciated. It's not a bad thing to do in theory, but it adds an unnecessary step to the process of deletion, which I'd prefer not happen. If the other admins have been conspiring against me and have all already told you to do so, then kindly disregard my presence. I'm not even here right now O_O - Also, penis. 03:39, October 30, 2009 (UTC)
edit Thanks
But I'm afraid I'm a tad inexperienced. How exactly do I put it up for Pee Review? --Ozymandiaz 12:55, October 30, 2009 (UTC)
--Ozymandiaz 03:20, November 1, 2009 (UTC)
Review now
{{Uncyclopedia:Pee Review/Menu}} ==={{subst:PRTitle|{{SUBPAGENAME}}}}=== <!-- ^ No need to change the above line unless it looks terribly wrong when you save. Put comments, requests, etc below this line - or even delete this whole part. --> ~~~~ {{Pee Review Table |Hscore= |Hcomment= |Cscore= |Ccomment= |Pscore= |Pcomment= |Iscore= |Icomment= |Mscore= |Mcomment= |Fcomment= |Signature= }} [[Category:Pee Review]]
- 4. Simply save the page and leave it at that. Keep an eye on your watchlist as somebody will (usually) tag it when they start the review, but they may not drop a note on your talk-page when it's done.
edit Congrats
on your WotM win. • • • Necropaxx (T) {~} Sunday, 02:45, Nov 1 2009
- Thanks. I don't know if it's official yet or not, but much like FOX news, I'm calling it! Pup
- You're still a complete noob, you know. --Pleb SYNDROME CUN medicate (butt poop!!!!) 03:33, November 1, 2009 (UTC)
- But I'm a n00b with panache. Pup
- Hah! Noobs don't have panache! They don't get panache rights until they've been on Uncyc for at least a year. Oh, and Puppy? Your coding is, how the French say, terrible. • • • Necropaxx (T) {~} Sunday, 03:53, Nov 1 2009
- Sorry, I don't speak French. Merde! Pup
- Jai habit un batton de col.:23 1 November 2009
edit Hello?
I haven't heard from u in a while, so i'll ask you the question again: "Can you make my ExOps page look like an actual website"? User:Thomasfan666
- Sorry, I thought I'd responded already. I can have a go at it later, but it is a huge amount of work that needs to be done to make it look like a website. What I can do is give you a structure, but I'll leave it to you to finalise the content. Pup
edit What happened to the puppy talking into the Victrola?
The old signature was vommitously cute - which made everything horrible that was said that much more horrible.
- I absolutely agree. Untill it's return I propose a strike on all signitures.
- agree as well, if only I knew what on earth we were talking about.
- This annonimity is soo cool! I could vote against right now and nobody would know I was contradicting myself.
- ChiefjusticeDS is teh sux0rz! He is such a fag.
- Against.
- Whoa, I didn't know that was a real template.
- I'm pretty sure it's also broken. Puppy's page is all fucked up now.
- It was broken anyway, in fact, I AM POTR now, and nobody can prove otherwise. I can be anyone I want to be now.
- No! I am POTR!
- Well that doesn't matter because I am Mordillo. Kneel primitive being.
- No I am Mordillo! How dare you use my name in vain! banbanbanabanabanbanabnananananabnanbabnabnabnanbabn!
- I'm Mordillo and so is my wife.
- I am not Mordillo! Are you confusing me with your mistress again? I was so right earlier, this contradiction is fun!
- That's all very well but I'm Orian57.
- Fuck off Orian, no one likes you. Everybody stop talking and shake your head in disgust.
- I, Mordillo, shake my head in disgust.
- Against..
- Cheif would like to know who Orian57 is.
- :(
- Hang on, who are you?
- Your mother, cheif, I am very dissapointed in this bullying behaviour. I think you should gently fuck Orian as an apology.
- So either POTR has to watch chief gently fucking Orian, or change his sig back?
- No she said He should make love to me because he was mean about me. :( Puppy can do whatever he likes with his sig..
- For.
- Broken template? What broken template?
edit So what happened...
...from my end is that I got an e-mail saying that Luvvy had commented on her talk page (amongst a few other things.) The next thing I know is that I can't get onto Uncyclopedia for the majority of the day. Now that is classy vandalism! Pup
- Even classier is the fact that I couldn't get on, either, even though I was totally unrelated to the issue! • • • Necropaxx (T) {~} Monday, 06:13, Nov 2 2009
edit After reading your "rulez"...
I am here to tell you that The Tragic Lumberjack Drownings of Ought Nine is on VFP and you should totally take a look. =D • • • Necropaxx (T) {~} Monday, 06:13, Nov 2 2009
- Well, rats. I kind of asked for it though... • • • Necropaxx (T) {~} Monday, 06:43, Nov 2 2009
edit Christian Bale
its reeeeaaaddddddyyyyy... i think. You might want to give a quick glance.--Matfen815 13:59, November 2, 2009 (UTC)
- It's Melbourne Cup day, which means public holiday here, so I won't get a chance to look at this until tomorrow most likely. Pup
edit Salty latex
Thanks for the Nom! And yay for the puppy signature being back. --Puffskein 17:38, November 2, 2009 (UTC)
- You deserved the nom. And I had to bow to public pressure on the sig. Pup
- Yay Puppy on the radio! DAP Dame Pleb Com. Miley Spears (talk) 23:16, November 2, 2009 (UTC)
edit Hi
I've submitted Megalomaniac for Pee Review. You can voice your thoughts here. --Ozymandiaz 20:14, November 2, 2009 (UTC)
- I'm getting there, I think there are still a few articles in front on my list though. --ChiefjusticeDS 21:40, November 2, 2009 (UTC)
- I'm happier for chief to go through the review first. He's better at reviewing then I am, and I'd rather actually help with any work that needs to be done with it. Pup
- /Chief notes this down in his little book of compliments/ --ChiefjusticeDS 23:22, November 2, 2009 (UTC)
edit WotM
Congrats and stuff! WHY???PuppyOnTheRadio 00:57, November 3, 2009 (UTC)
I second that Congrats. --Puffskein 15:09, November 3, 2009 (UTC)
edit UnSignpost 29-10-2009
The Newspaper That Openly Admits Its Liberal And Conservative Biases!:39, Nov 3
edit Thanks!
edit CW
Nice work on Street Fighter for CW, there. Made me laugh several times. -- 21:22, November 3, 2009 (UTC)
- Thank you. Please continue to tell me how good I am! Pup
- Oh, Puppy, you're so fine, you're so fine you blow my mind! Hey Puppy. Hey Puppy. WHY???PuppyOnTheRadio 03:25, November 4, 2009 (UTC)
- Oh Whyner, what a pity you don't understand. You're playing with my heart when you grab me by the glans. Pup
edit Street Fighter)
edit Puppy--or anybody--I can use your help ASAP
All I'm trying to do is something very simple. User:Puffskein wants me to adopt her, so I'm trying to add {{Adoptee|Why do I need to provide this?}} to her user page so it shows up centered and below {{VoteNOTM}} but doesn't change anything else on her user page. Simple, huh? Except I'm trying it on a test page and keep screwing it up. Can anybody either tell me how to do it, or just do it for me and then I can look at your code and see what to do next time? Thanks! WHY???PuppyOnTheRadio 01:59, November 5, 2009 (UTC)
- Dude, before you get too involved, you know there are no girls on the internet, right? --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:03, November 5, 2009 (UTC)
- Oh, don't give me that crap just fix it or tell me how to fix it if you know how so I don't look like an idiot. I don't care anyway if it's a boy or a girl, as long as it's not a vandal. WHY???PuppyOnTheRadio 02:05, November 5, 2009 (UTC)
- Sorry, you're my stepdad I shouldn't talk to you like that. I'm just trying to be a responsible parent by throwing a temper tantrum. That's how it works, right? WHY???PuppyOnTheRadio 02:06,)
- We are somewhat incestuous today, aren't we? Pup
- What's that got to do with it? As I understand it, Cainad and Eris Discordia are Miley's parents, Miley's my mother, I'm Puffskein's father, Puppy is Miley's brother, and Syndrome is my stepdad. Where's the incest? WHY???PuppyOnTheRadio 02:17, November 5, 2009 (UTC)
- Might have something to do with how I found Eris... --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:23,)
- You fixed it! Way to go, StepDad! I knew it was simple, but I keep getting my versions of HTML mixed up. I still keep trying to design on a wiki like I do on a website. Ah, I don't know what the hell I'm doing. But thanks! And I guess you're my stepdad--you nommed me for NotM, Pee Reviewed my first article Sun Bee, fixed Archery when I threw a tantrum and abandoned it, and hung around with Mommy Miley a lot. (By the way the top 10 for October voting is going on now). WHY???PuppyOnTheRadio 02:13, November 5, 2009 (UTC)
- Aw, you're welcome. (I usually wait until the end before voting on the top 3 because I haven't read all 30 articles, and then I usually miss my chance to vote because I forget.) --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:23, November 5, 2009 (UTC)
- Since it's my talk page, I'll just quickly suggest that as you can vote for 10, Lateral Thinking and UnScripts:steal Bank Customer Service training video are both very good articles. Pup
- If they're such good articles, why can't they find me true love? --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:36, November 5, 2009 (UTC)
edit Congratulations! You're a Great Uncle!
It's a baby and her name is Puffskein! And look, she's already learning sign language!
I'm the adoptive Daddy of Puffskein; I was adopted by Miley; Miley was adopted by Cainad; you're Miley's brother, so that makes you a Great Uncle. Congratulations, Uncle! Be sure to go see your new niece! WHY???PuppyOnTheRadio 20:12, November 5, 2009 (UTC)
- Why doesn't she come and see me? I'm an old man and I don't get around much any more, what with this mad cow disease. I was trying to think of the word nepotism the other day and I sais incestuous in its place. And I keep putting apostrophes in its when denoting ownership. And you just want to send me to a home and take all my money. Pup
edit John Paul Puppy the I, please tell me how. . .
. . .to change out the Globe insign. in the left hand corner of your user and talk page. If you tell me I promise not to put any goddessforsakenboobpics on it. Thanketh, Aleister in Chains 01:45, November 6, 2009 (UTC)
- {{Imageaslogo}} - although I have it attached to the template at the top of the screen there which is my nav bar. The code I've got is {{Imageaslogo|logo=PuppyOnTheRadio.gif|Align=-25px}}. The align statement determines where the picture is height-wise.
- And alternative though is what I did to the street fighter article -
{{nologo}} <span style="position: absolute; left: -157px; top: -30px;"> {{click-inline |image = Streetfighteriiturbo21 2.gif |width =180px |link = Main Page |title = Hadōken }} </span>
- Which allows you to also change the link and the title (which is what comes up when you mouse hover). It also gives you a bit more flexibility where you can adjust the size of the image to suit as well.
- Another option is doing it this way.
{{nologo}} <span style="position: absolute; left: -157px; top: -30px;"> [] </span>
- If your image is already the right size this will give you the image in the right place with a link to another page - and it is an external link, so it can redirect to anywhere.
- And one other option is to have it as a fixed point image, like this.
<div style="position: fixed; left: -11px; top: 0px;"> [[Image:Animated-Flag-Australia.gif|170px]] </div>
- Which means even if you scroll the page, that image remains. The Australian flag in this page has been done using fixed positioning, as has the error messages in this page.
- Hope that helps Pup
- Way over my head, but I'll give it a try within a day or two. Thanks very much. The two examples you gave, your Microsoft and Austalian pages, amazing, esp. the Microsoft. If there's a lifetime achievement award for Uncy, I'd say that article deserves it. I'm so overwhelmed I need to go and pull the covers over my head. Aleister in Chains 02:31, November 6, 2009 (UTC)
- Tell me what you want and I'll set it up if you prefer. I've done it a few times a few different ways so I have a bit of skill at getting it exactly the way I want it. I might even fix your talk page while I'm there. Pup
- Thanks, when I find something fun I'll give you a ping. The talk page is fine, I like what you did, and it's a quick way to get in touch when I'd like to get back here. I went back and looked at the Microsoft page again. Damn good. And shows me a little bit more how all those pop ups appear on internet pages. Arthur Clarke's datum comes to mind, seem like magick to those of me who don't have the tech knowledge. Twanks again, Aleister in Chains 02:55, November 6, 2009 (UTC)
- Actually, most of those pop-ups are through VBS, javascript, flash, or some other form of scripting. What I've done is a very crude faux VBS pop-up which uses an animated gif along with fixed positioning to emulate the result I was after. If I had access to real scripting like that I'd be much more effective. If only I used my powers for evil instead of good. Pup | http://uncyclopedia.wikia.com/wiki/User_talk:PuppyOnTheRadio/Archive_3 | CC-MAIN-2015-48 | refinedweb | 10,157 | 82.04 |
Hi,.
import org.apache.ant.*; ~= <import library="org.apache.ant"/>
The reason for this is it is much easier to validate the whole build file
before it is run (think of lint for ant). If instead you loaded types when
you got to a specific task then it would be next to impossible to do any
pre-validation.
Sometimes a build file will need to load a type dynamically - the canonical
example being when you compile a task and then use it in the same build file.
In these cases it will still be impossible to validate the build file but in
all/most other cases it should be possible to validate all/most of the build
file.
This also makes it much easier to have portable build files. For instance if
you have the type library on Z:\ant-support\downloaded-libs\foo.ati on one
machine and /opt/ant/extra-libs/foo.ati on another machine and
~/antlib/foo.ati on a third machine then it becomes extremely difficult to
have portable build files except by requiring users enter in paths into
property files or similar.
However if instead you had a ANTLIB_PATH env variable that every user set
once then that woul dbe much easier (assuming library names are unique). So
rather than specifying it in 10 different build files (assuming they are well
written build files) which may all use different property names to indicate
the location, you specify it once by setting ANTLIB_PATH
At least thats the idea - I think we will need to experiment with things and
see if they end up turning out like that though ;)
I think some of that stuff also needs to be redone because specifying single
imports will load a separate ClassLoader for each import which is probably
overkill ;)
> How about plans for the DataType interface? Is it going to stay a marker
> interface, or were you going to add stuff to it?
DataType was just a generic marker interface an object could implement if it
didn't belong to any other role. If a type doesn't have a role then it
becomes extremely difficult (ie impossible) to place it in the registry. It
may be possible to delete it in the future .. maybe ... if we can think of a
nice clean way of doing it ;)
> You were talking about
> having more than one data type role - want to elaborate on that a little
> more?
Well currently ant1.x has a few sets of subtypes. For instance we have a
bunch of conditions that extend ConditionBase, such as; And, Or, Socket,
IsSet, Os, Equals etc.
So in ant2 we will instead have one role
org.apache.ant.Condition
and a few implementations that are registered for that role such as
<condition name="or" class="org.apache.ant.Or"/>
<condition name="and" class="org.apache.ant.And"/>
<condition name="equals" class="org.apache.ant.Equals"/>
<condition name="is-set" class="org.apache.ant.IsSet"/>
<condition name="os" class="org.apache.ant.Os"/>
Now currently our ConditionTask has a bunch of methods like
addOr( Or or )
addAnd( And and )
addEquals( Equals equals )
addIsSet( IsSet isSet )
In ant2 we can replace that with
add( Condition condition )
And thus any type registered in the Condition role could be added. So if Fred
was to come along later and add the IsServerUp type into the Condition Role
then they could now use that inside ConditionTask.
Now other candidate Roles would be Mapper, Scanner, FileSet etc
> In particular, how would this affect the configurer, if it wanted to
> do type substitution?
Does the above cover that?
> I'd like to automate the process of defining a TypeInstanceTask task for
> each data type. Looks like it would be trivial to use the XDoclet stuff to
> add a <task> declaration in the descriptor for each data type. However, I
> think it would be good to keep the descriptor as normalised as possible,
> and avoid putting the <task> declaration in there. This would give the
> container plenty of freedom to do what it wants at runtime. What I was
> thinking of doing was add a TypeManager implementation that took care of
> either registering the task when the data type is registered, or that took
> care of instantiating a TypeInstanceTask when a "task" with the name of a
> data-type is requested. Thoughts?
I don't like the last case but the other two (xdoclet and a "smart"
registering TypeManager) could be interesting to work with. XDoclet would be
by far the most flexible solution and if we end up going with that approach
then we could almost completely automate it (have a rule that said anything
implementing interface X that has attribute Y is registered like so).
A "smart" TypeManager could also be a good idea. Not sure - it would mean
that everything that was a DataType would automatically be registered as a
"task". Is that a good idea? I don't know. It is kinda neat but probably a
bit more work and has icky magic code ala
if( o instanceof DataType )
{
doMagic()
}
I would maybe wait till we see how much effort/flexability XDoclet gives us.
It may be that is easier (fingers crossed) otherwise a smart TyeManager could
definetly work.
--
Cheers,
Pete
-----------------------------------------------
| If you turn on the light quickly enough, |
| you can see what the dark looks like. |
-----------------------------------------------
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org> | http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200201.mbox/%3C200201130333.g0D3XNE10766@mail016.syd.optusnet.com.au%3E | CC-MAIN-2020-24 | refinedweb | 917 | 60.85 |
Happy Saturday! In todays learn we have an article from Football Radar Engineering on a few aspects you may have noticed on your Scala journey or if you are at the start of your Scala journey ones you can look out for.
'As we have blogged about previously, we have been using Scala for much of our backend work for the last 2-3 years. On balance we think that it has worked well and continues to work well for the types of systems that we build.
Of course in that period of time, we have discovered, as many others before us travelling on the road from journeyman to master have likewise discovered, that Scala has its fair share of hidden complexity, pitfalls and hazards, in part due to its rich heritage that borrows from many languages and programming paradigms.
In fact, for me personally, I feel that with Scala, more than other languages I have learned in the past, one never arrives at mastery but is in a constant state of becoming (to paraphrase Dylan).
In this blog I will mention just a few "betcha-didn't-know" aspects of Scala that I have encountered on the long and winding road (we love our musical references as Football Radar Engineers).
These aren't necessarily the most advanced features of the language (no fancy type-level or 'implicit' stuff here) but they can crop up in daily usage and it can be hard to find suitable documentation explaining what is happening.
So here I'll:
- describe the hidden depths (or deceptions) of 'for' comprehensions
- then move on to why 'final case class' is not in fact a redundant construct as it might first appear
- and finish with a little warning about the use of by-named parameters
I hope you will find these little signposts useful on your journey to Scala mastery.
Hidden depths (or deceptions) of 'for'-comprehensions
When you are starting out with Scala you tend to use the 'for'-comprehension syntax quite a lot because it is vaguely familiar to the 'for' looping constructs you would likely have encountered in other languages.
After some time you find out that 'for'-comprehensions are really syntactic sugar for chained calls to the methods 'flatMap' and 'map' available on Scala collections.
At that point you probably first decide to stick with 'for'-comprehensions because 'flatMap' and 'map' are scary sounding things. But then once you get over your initial misgivings you tend to mainly reach for 'flatMap' and 'map', and only reach for the 'for'-comprehension when you have long chains of 'flatMaps' and 'maps' and you want to make the flow more readable.
Well at least that describes a bit of my journey and I am merely guessing that it is similar for others.
But recently I learned something new about 'for'-comprehensions that has made me start using them more and once again favouring them because they tend to be more readable. Specifically I learned that they can cover the functionality of one of my favourite work-horses, 'collect'.
Consider the following contrived example where we have an instance of 'List[Try[Int]]' where we want to keep only the 'Success's. Then, for each of the 'Success's that we have kept we want to double the 'Int' value that it contains. Finally, we want to be left with an instance of 'List[Int]'.
>Recall that the 'scala.util.Try[T]' type has two subtypes 'Success[T]' and 'Failure[T]' for representing, respectively, whether a computation has successfully completed and returned a value or whether it failed with an exception.
Let us try this in the REPL:
scala> import scala.util.{Try, Success} import scala.util.{Try, Success} scala> val tries: List[Try[Int]] = Try("1".toInt) :: Try("2".toInt) :: Try("3".toInt) :: Try("four".toInt) :: Nil tries: List[scala.util.Try[Int]] = List(Success(1), Success(2), Success(3), Failure(java.lang.NumberFormatException: For input string: "four"))
A very typical attempt, using the basic collection combinators to express our desired logic, might look like the following:
scala> :paste // Entering paste mode (ctrl-D to finish) tries .filter { t: Try[Int] => t.isSuccess } .map { t: Try[Int] => t.get * 2 } // Exiting paste mode, now interpreting. res0: List[Int] = List(2, 4, 6)
Sorry, of course the obligatory one-liner:
scala> tries filter (_.isSuccess) map (_.get * 2) res1: List[Int] = List(2, 4, 6)
But, whenever you have code where a 'filter' is followed by a 'map' (or vice versa), we can immediately think of replacing the two combinators with a single use of 'collect', which then saves us from traversing the list twice (and in this case has the added bonus of helping us avoid calling the '.get' method on 'Try[Int]' that might potentially throw an exception):
scala> tries collect { case Success(x) => x * 2 } res2: List[Int] = List(2, 4, 6)
>Recall that 'collect' is defined on 'List' (and other collection classes) as follows:
'def collect[B](pf: PartialFunction[A, B]): List[B]'
This indicates the 'collect' takes a 'PartialFunction' as a parameter (the 'case Success(x) => x * 2' part defined in the above listing), which you might in turn recall is a function only defined for some values of its input type.
So basically 'collect' acts like 'map' except it only applies the partial function to the values for which it is defined (in the above example it is defined only for 'Success' values so it is applied to only those values in the list).
Well, it turns out that we can use a 'for'-comprehension to achieve the same thing -- i.e. safely traverse the list and extract the elements that match the 'Success' pattern that we want -- just as efficiently as with a 'collect':
scala> for { Success(x) <- tries } yield x * 2 res3: List[Int] = List(2, 4, 6)
This works because as I recently discovered, when there is a pattern-match on the left side of a 'for'-comprehension generator -- i.e. 'Success(x)' in the above listing -- what the compiler does is to expand the pattern-matching into a 'withFilter' (a lazy version of 'filter'). So the above translates to:
scala> :paste // Entering paste mode (ctrl-D to finish) tries withFilter { case Success(x) => true case _ => false } map { case Success(x) => x * 2 } // Exiting paste mode, now interpreting. res4: List[Int] = List(2, 4, 6)
That of course looks a lot like the very first attempt we started with above, but the difference is that since 'withFilter' is lazy, we don't have to worry about traversing the list twice.
>If you consult the Scala library API docs you will see the description of 'withFilter' indicates that whereas 'filter' *eagerly* creates a new collection, 'withFilter' "only restricts the domain of subsequent 'map', 'flatMap', 'foreach', and 'withFilter' operations."
But what I like is that the 'for'-comprehension sugar hides all of that from you.
This corrects a misunderstanding I had about 'for'-comprehensions. I used to think that the compiler always directly translated clauses in a 'for'-comprehension to 'map' or 'flatMap'. So in my mind 'for { Success(x) <- l } yield x * 2' would have been de-sugared to...:
tries map { case Success(x) => x * 2 }
...which of course will fail at runtime because our 'Try("four".toInt)' was a 'Failure' and not a 'Success':
scala> tries map { case Success(x) => x * 2 } <console>:10: warning: match may not be exhaustive. It would fail on the following input: Failure(_) tries map { case Success(x) => x * 2 } ^ scala.MatchError: Failure(java.lang.NumberFormatException: For input string: "four") (of class scala.util.Failure)
Now based on that you would be justified if you complained that 'for'-comprehensions are a bit deceptive. However, in this case I will take the glass-half-full tack and conclude that the 'for'-comprehension has revealed that it has hidden depths!
'final case class' is not redundant after all
After some time, as you get more comfortable with the language, you might start to explore some of the more popular open-source Scala libraries to start to get a feel for best-practices and language idioms, and perhaps just satisfy your curiosity about how some of the constructs you use daily are implemented.
If you do this, it would not be too long before you encounter 'final case class'. In fact if you clicked through to the source of 'Success' from our previous example, you will see that it is defined as 'final case class Success(...)'.
'final' of course is a modifier from Java, and if you consult the Java documentation, you will see that "a class that is declared final cannot be subclassed" (See: The Java Tutorials -- Writing Final Classes and Methods).
For me this was a bit confusing because as I understood it, you couldn't 'extend' a case class anyway so there was no way to create a subclass.
scala> case class Point(x: Int, y: Int) defined class Point scala> case class ThreeDPoint(x: Int, y: Int, z: Int) extends Point(x, y) <console>:9: error: case class ThreeDPoint has case ancestor Point, but case-to-case inheritance is prohibited. To overcome this limitation, use extractors to pattern match on non-leaf nodes. case class ThreeDPoint(x: Int, y: Int, z: Int) extends Point(x, y)
So surely the 'final' is redundant, no? And all of those Scala experts, including those behind the 'final case class' lint in Scala WartRemover, must just enjoy typing unnecessary characters. Well, to misappropriate another (obscure) musical reference, fifty million Scala developers can't be wrong.
And of course they aren't.
First, I once again had to correct one of my common misunderstandings. If you look again at the above error message, it says "case-to-case inheritance is prohibited", meaning that the following is legitimate:
scala> class ThreeDPoint(x: Int, y: Int, z: Int) extends Point(x, y) defined class ThreeDPoint
However, if you declare 'Point' as final then you get the following, whether 'ThreeDPoint' is another case class or not:
scala> final case class Point(x: Int, y: Int) defined class Point scala> class ThreeDPoint(x: Int, y: Int, z: Int) extends Point(x, y) <console>:9: error: illegal inheritance from final class Point class ThreeDPoint(x: Int, y: Int, z: Int) extends Point(x, y)
You cannot even mix in a trait when trying to instantiate 'Point' with the 'new' keyword:
scala> trait ThirdDimension { val z: Int } defined trait ThirdDimension scala> new Point(1, 2) with ThirdDimension { val z = 3 } <console>:11: error: illegal inheritance from final class Point new Point(1, 2) with ThirdDimension { val z = 3 }
On the surface there doesn't seem much...err...point to this other than nailing your representations shut. However, with a little more digging, it seems that one of the real pay-offs with using 'final case class' is that the compiler can its knowledge of the fact the type cannot be extended to help to weed out certain types of errors.
For example, imagine this (only slightly) contrived example where you previously implemented a function 'distance' for calculating the distance between two points, where a point was represented as a sequence of 'Int's but now you are sensibly refactoring it to a 'Point' case class. So you change the signature of 'distance':. <console>:9: error: scrutinee is incompatible with pattern type; found : Seq[A] required: Point def distance(a: Point, b: Point): Double = (a, b) match { case (Seq(ax: Int, ay: Int), Seq(bx: Int, by: Int)) =>
The compiler blocks this buggy change because once 'Point' is declared 'final' the compiler makes it mandatory that you change the patten-match in the 'distance' function from a 'Seq' to a 'Point' because it knows that a 'Point' can be nothing other than a 'Point'.
Without 'Point' being declared 'final' you could do part of the refactoring and leave the remaining buggy implementation (although being good TDD'ers presumably you have some tests that would catch this regression!):. distance: (a: Point, b: Point)Double scala> distance(Point(-2, -3), Point(-4, 4)) scala.MatchError: (Point(-2,-3),Point(-4,4)) (of class scala.Tuple2)
The compiler cannot offer any help in the case when 'Point' isn't 'final' because it cannot rule out the scenario where 'distance' is called with an instance of 'Point' that mixes in the 'Seq[A]' trait.
Admittedly this is perhaps a bit "edge-casey" but it seems there is in fact a bit of method to the 'final case class' madness. And I am definitely intrigued to see if there are more cases where the compiler can help prevent certain bugs because of the added type restriction.
By-name parameters should come with a warning
This final signpost indicates a real gotcha that I think is not immediately obvious. And because it isn't obvious and has the capacity to yield subtle bugs, I am surprised that some usages of by-name parameters do not either:
- generate a warning when the '-Xlint' compiler flag is used
- or warrant a lint on tools like WartRemover
- or receive well-documented treatment in introductory books on the language (e.g. the common "gotcha" that I will describe below does not seem to be mentioned in Programming In Scala, which is meant as the definitive guide to the language.
In a nutshell, non-strict evaluation means that the arguments passed to a function are not evaluated before we execute the function (as is the default case in Scala and most modern languages which have strict evaluation) but are evaluated when they are referenced by-name inside the body the function.
You might not have realised it, but already in the first example in this post you encountered non-strict evaluation with the 'Try(...)' construct for creating the instances of 'Try[Int]'.
>Recall that 'Try(...)' is really shorthand for calling 'Try.apply(...)' on the 'Try' companion object.
If you take a look at the implementation of 'Try.apply()' you will see it is defined as:
def apply[T](r: => T): Try[T] = try Success(r) catch { case NonFatal(e) => Failure(e) }
The '=> T' in the method parameter list is what gives the method its "non-strictness" and can be contrasted with the form '(r: T)' that we use to define typical "strictly" evaluated function parameters.
The reason that 'Try.apply(...)' must be defined this way is because the expression being evaluated may throw an exception and we want a chance to catch that exception within the body of 'Try.apply(...)' so that we can materialise it as a 'Failure'.
If the expression passed to 'Try.apply(...)' was evaluated strictly the exception would be thrown up the call-chain before we got a chance to wrap it up in a 'Failure'. Consider this bad re-write of 'Try.apply(...)' called 'TryBadStrict.apply(...)':
scala> import scala.util.{Try, Success, Failure} import scala.util.{Try, Success, Failure} scala> import scala.util.control.NonFatal import scala.util.control.NonFatal scala> :paste // Entering paste mode (ctrl-D to finish) object TryBadStrict { def apply[T](r: T): Try[T] = try Success(r) catch { case NonFatal(e) => Failure(e) } } // Exiting paste mode, now interpreting. defined object TryBadStrict scala> val tries: List[Try[Int]] = TryBadStrict("1".toInt) :: TryBadStrict("2".toInt) :: TryBadStrict("3".toInt) :: TryBadStrict("four".toInt) :: Nil java.lang.NumberFormatException: For input string: "four"
We are not even able to create our 'val tries: List[Try[Int]]' because the expression '"four".toInt' has thrown before we got a chance to wrap it in a 'Failure'.
Another typical usage of the by-name feature is to implement a small control abstraction to time blocks of code. The following example shows one (subtly wrong) implementation of a method for timing a block of code that returns a 'Future' (e.g. we might use it to time an external service call):
scala> def timeFutureSubtlyWrong[A](block: => Future[A])(implicit ec: ExecutionContext): Future[A] = { | val t0 = System.currentTimeMillis() | block onComplete { _ => | val t1 = System.currentTimeMillis() | println(s"Elapsed time: ${t1 - t0} ms") | } | block | } timeFutureSubtlyWrong: [A](block: => scala.concurrent.Future[A])(implicit ec: scala.concurrent.ExecutionContext)scala.concurrent.Future[A]
When we invoke this method we get the following:
scala> import ExecutionContext.Implicits.global import ExecutionContext.Implicits.global scala> timeFutureSubtlyWrong(Future { println("Calling external service"); 1 + 2 }) Calling external service Calling external service Elapsed time: 1 ms res0: scala.concurrent.Future[Int] = scala.concurrent.impl.Promise$DefaultPromise@59696551
Note that "Calling external service" is printed twice indicating that the blockpassed to 'timeFutureSubtlyWrong' was evaluated twice -- once for each time it is referred to by name in 'timeFutureSubtlyWrong'.
Depending on what the block of code is doing this may not necessarily be the end of the world -- even though it is most certainly not intended. But it is not too difficult to imagine other scenarios where this is more of a serious bug -- e.g. mistakenly calling out twice to some payment service!!
The correct implementation (or at least the one that is most likely intended) is to cache the result of the block in a 'val' within the function and then refer to that 'val' when you need the result of the block. For example:
scala> def timeFuture[A](block: => Future[A])(implicit ec: ExecutionContext): Future[A] = { | val t0 = System.currentTimeMillis() | val invokedBlock = block | invokedBlock onComplete { _ => | val t1 = System.currentTimeMillis() | println(s"Elapsed time: ${t1 - t0} ms") | } | invokedBlock | } timeFuture: [A](block: => scala.concurrent.Future[A])(implicit ec: scala.concurrent.ExecutionContext)scala.concurrent.Future[A] scala> timeFuture(Future { println("Calling external service"); 1 + 2 }) Calling external service Elapsed time: 0 ms res1: scala.concurrent.Future[Int] = scala.concurrent.impl.Promise$DefaultPromise@5e0ec41f
And voila! -- we only have one occurrence of "Calling external service"
I will finish by saying that I hope none of this has put you off of the language. Scala is a rich, expressive language that we have had great success with as a team and that personally I have enjoyed more than any other language I have learned in the past years. It is just helpful to be aware of some of its quirks and hidden complexities.
Often these are the kinds of quirks that once you know about them you think "well of course, how else should it work!".
But sometimes these things are not immediately obvious and hopefully if you have read these signposts along your Scala journey and it saves you from a bug or helps you right safer code, then writing this post would have been worth it for me.'
This article was written by Neil Benn and posted originally on engineering.footballradar.com | https://www.signifytechnology.com/blog/2018/08/a-few-signposts-for-your-scala-journey-by-football-radar-engineering | CC-MAIN-2019-13 | refinedweb | 3,105 | 57.4 |
Promis: a small embeddable Promise polyfillPromis: a small embeddable Promise polyfill
This is a tiny (0.8KB gzipped, 1.9KB minified) Promise implementation meant for embedding in other projects and use as a standalone polyfill. It supports the full Promise API specification and passes the official Promises/A+ test suite.
APIAPI
The constructor is called with a single function argument.
var promise = {;};
Instances of a Promise have two methods available:
then and
catch. The
then method is used to add callbacks for when the promise is resolved or rejected.
promise;
The
catch method is used the catch rejected promises in a more convenient way.
promise;
Both methods return a new Promise that can be used for chaining.
The Promise class also has four class methods:
resolve,
reject,
race and
all. The
resolve and
reject methods are a convenient way of creating already settled promises:
var resolved = Promise;var rejected = Promise;
The
race method can be used to "race" two or more promises against each other. The returned promises is settled with the result of the first promise that settles.
// first will be resolved with 'hello'var first = Promise;
The
all method waits for all promises given to it to resolve and then resolves the promise with the result of all of them.
// all is settles with ['hello', 'world']var all = PromiseallPromise Promise;
You can find more information about Promises and the API in the official Promise specification and on MDN.
TestsTests
Use the
grunt test task to run all the tests. You can optionally pass the
--compiled flag to test the compiled and minified JavaScript file.
EmbeddingEmbedding
This implementation uses Closure Compiler's advanced optimization mode to make the resulting file size as small as possible. If you want to embed this library into your project you can also benefit from Closure Compiler's dead code elimination to remove methods that you are not using. If you want to use Promis this way, you'll need to copy
src/promise.js into your project and
goog.require the implementation. Unlike the standalone file, the
src/promise.js file by itself does not export anything to the global namespace. Instead you should require the
lang.Promise namespace to instantiate a Promise.
goog;...var promise = {;};
LicenseLicense
Licensed under the BSD license. Copyright 2014 - Bram Stein. All rights reserved. | https://www.npmjs.com/package/promis | CC-MAIN-2020-10 | refinedweb | 386 | 66.54 |
What is a function? – Using ”Functions” to Code
What is a function?A function is simply a “chunk” of code that you can use over and over again, rather than writing it out multiple times. Functions enable programmers to break down or decompose a problem into smaller chunks, each of which performs:
"hello world"is the only parameter. Functions can be thought of as little machines that perform a specific task, where the parameters are the inputs to the machine.
4.7
Want to keep
learning?
Raspberry Pi Foundation online course,
Programming 102: Think Like a Computer Scientist
print()
Importing additional functionsThere are lots of functions built into Python that are available straight away. Others, however, you will need to import before you can use them. An example of such a function is
sleep, part of the
timelibrary, which enables your program to pause for a period of time. To be able to use this function in your program, you would import it at the beginning, as in this example.
from time import sleep #imports the sleep function from the time libraryprint("Waiting 5 seconds")sleep(5)print("Done")
Defining new functionsCreating:
This is only three lines of code, but if you will need to print lists regularly, you should create a function. To create a new function, you need to define it before it is used for the first time, usually at the start of your program.This is only three lines of code, but if you will need to print lists regularly, you should create a function. To create a new function, you need to define it before it is used for the first time, usually at the start of your program.
#Create an initial shopping listshopping_list = ["Bread","Milk","Apples","Chocolate"]# Print each item in the list line by linefor item in shopping_list:print(item)
The definition is structured as follows:The definition is structured as follows:
#Define a new function display_listdef display_list():for item in shopping_list:print(item)
-.
display_list()# Add item to the listshopping_list.append("Sugar")#Display updated listdisplay_list()
Functions / Methods / Proceduresmethod:.append("Sugar")
Over to youCan a function that simulates a coin flip. Each time the function is called, it should randomly select either heads or tails and print it | https://www.futurelearn.com/info/courses/programming-102-think-like-a-computer-scientist/0/steps/53095 | CC-MAIN-2021-31 | refinedweb | 374 | 61.16 |
WSDL Analyzer Error
Having trouble installing <oXygen/>? Got a bug to report? Post it all here.
3 posts • Page 1 of 1
- Posts: 58
WSDL Analyzer Error
When using the WSDL Analyzer if the request has multiple namespaces for the output in the payload (i.e. references a XSD type that uses more than one namespace), the XML that is generated doesn't include the namespace prefixes. This causes the XML in the payload to not be valid. If manually correcting this it works correctly.
- Posts: 58
3 posts • Page 1 of 1
Return to “Common Problems”
Who is online
Users browsing this forum: No registered users and 3 guests | https://www.oxygenxml.com/forum/post7452.html | CC-MAIN-2018-13 | refinedweb | 110 | 66.23 |
IIS (Internet Information Services) remains a widely used web server for hosting websites, FTP (File Transfer Protocol) sites, and other applications on Windows servers.
While the graphical IIS Manager interface is great for setting things up initially and for providing a visual overview of your web configuration, there are also some great use cases for automation.
Whether you are creating new websites, amending Application Pool settings, or updating Site bindings, there are ways to accomplish all of these things from code.
This article covers how to interact with IIS using C# and the power of the Web Administration library. I will be focusing on a specific example of using the APIs that the library exposes to find websites by their name and then stop and start them on demand.
Web Administration
In the following sections our goal is to understand how we can interact with IIS from C# code, so naturally, we’ll be making use of .NET libraries to accomplish this.
Please note that the instructions in this article assume that you are using Visual Studio as your IDE. However, the steps will be very similar for other IDEs such as Visual Studio Code.
.NET Framework
If your application is targeting the .NET Framework you can add a reference to the Microsoft.Web.Administration library.
The caveat is that you need to have the IIS Management Console Windows feature enabled in order for the required library file to be present on your system.
Please note that the library is only compatible with IIS 7.0+. This shouldn’t be a problem though as version 7.0 was released in 2008!
Check for the library
To get access to the Web Administration library, first of all, you should check if you already have the necessary DLL.
To do this, navigate to the following directory.
%systemdrive%\Windows\System32\inetsrv
If there is a file in this directory named ‘Microsoft.Web.Administration.dll’ then you can skip the next sub-section and proceed with adding a reference to the library.
Install Windows feature
If the library isn’t present on your system already you’ll need to install the necessary Windows feature.
There are a couple of options for doing this.
Programs and Features install
The first way to install the Windows feature is via the ‘Programs and Features’ interface.
Enter the following into a ‘Run’ dialog (WIN + R) and press the Enter/Return key.
appwiz.cpl
When the Programs and Features window loads click on the ‘Turn Windows features on or off’ link to open the Windows Features dialog.
Expand the ‘Internet Information Services’ node, expand the ‘Web Management Tools’ node, tick the ‘IIS Management Console’ checkbox, then press the ‘OK’ button.
It usually takes a few minutes for the feature to be installed.
Command-line install
The second way to install the Windows feature is to launch a Command Prompt (or your preferred terminal) as administrator and execute the following command.
%systemdrive%\Windows\System32\Dism.exe /enable-feature /all /online /featurename:IIS-ManagementConsole
You will receive a confirmation message when the installation of the feature has been completed.
Machine restart
In either case, after the Windows feature installation has been completed you may be asked to restart your computer.
I recommend that you go ahead with the restart to finalise the installation process.
Add the reference
The last setup step is to add a reference to the Web Administration library.
To do this, within your Visual Studio solution, right-click on ‘References’ within the Solution Explorer for the appropriate project and select the ‘Add Reference…’ option from the context menu.
Now you need to select ‘Extensions’ from the left-hand side of the Reference Manager dialog.
Find ‘Microsoft.Web.Administration’ within the list-view, select it, and press the ‘OK’ button to add the reference.
.NET Core
If you’re using .NET Core you’ll need to install the Microsoft.Web.Administration NuGet package into your project.
NuGet Package Manager install
To install the package you can right-click on your project within Visual Studio and use the ‘Manage NuGet Packages’ option to browse for and install the NuGet package.
Package Manager Console install
Alternatively, you can issue the following command from the Package Manager Console.
Install-Package Microsoft.Web.Administration
This will find and install the latest version of the NuGet package for you.
Important notes
Since .NET Core is designed to be platform-independent, the Web Administration library will be included as an additional dependency that will be deployed alongside your application.
It is important to remember that the features within the Web Administration library will only work on Windows since IIS is a Windows web server.
Server management
One of the key classes within the Microsoft.Web.Administration library is the ServerManager class.
Among other things, ServerManager provides access to manage Application Pools, Sites, and Worker Processes.
It is important to note at this stage that administrator rights are required in order for the following code samples to work, otherwise you'll get an UnauthorizedAccessException. If you're using Visual Studio or Visual Studio Code, for example, right-click on the application shortcut and select the 'Run as administrator' option before you debug the code. When it comes to running your application in production, see my User Account Control blog post.
To use ServerManager we, first of all, need to new it up.
var server = new ServerManager();
Once we have a ServerManager instance we have access to a plethora of useful properties and methods.
For example, we can use the Sites property to access a SiteCollection containing all IIS websites and FTP sites etc. that have been created on the current machine.
We can iterate through the available sites as follows.
foreach (Site site in server.Sites)
{
    Console.WriteLine(site.Name);
}
If we want to find a site with a specific name we can use LINQ.
Site site = server.Sites.FirstOrDefault(s => s.Name.Equals("Default Web Site", StringComparison.OrdinalIgnoreCase));
If we want to inspect the bindings for a site we can iterate through them, as follows.
foreach (Binding binding in site.Bindings)
{
    Console.WriteLine(binding.ToString());
}
For the ‘Default Web Site’ site that comes pre-installed with IIS, the above code produces the following output.
[http] *:80:
If we want to create an additional binding we can add a new binding to the Bindings collection and then commit changes to the server, as follows.
site.Bindings.Add("*:8080:", "http");

server.CommitChanges();
In the above example, a new HTTP binding is being added for port 8080. The CommitChanges method on the ServerManager object instance is then called to save the changes to the IIS configuration.
There’s so much more that can be done with the Web Administration library and I encourage you to check out the Microsoft Docs for some great examples of how to accomplish other tasks such as creating new Sites and Application Pools.
In the following section, I am going to expand further on the ServerManager functionality by demonstrating how to stop and start sites within a services class.
Services abstraction
Since there are many different versions of IIS out there and the means of interacting with IIS via .NET applications could change in the future, it makes sense to create a layer of abstraction over the Web Administration library.
In the example code that follows, I will be demonstrating how to create a services class that encapsulates IIS site operations. This includes finding a site, checking if a site is running, stopping a site, and starting a site.
The full source code can be found within the accompanying GitHub repository.
Interface definition
When planning to develop a services class it’s a good idea to design an interface that should be adhered to.
We can define the interface for IIS site services as follows.
/// <summary>
/// IIS website services interface.
/// </summary>
public interface ISiteServices : IDisposable
{
    #region Methods

    Site GetSite(string siteName);

    bool SiteIsRunning(string siteName);

    bool StartSite(string siteName);

    bool StopSite(string siteName);

    #endregion
}
The ISiteServices interface specifies a simplified API with methods to which we can pass the name of the site we want to check or stop/start. The methods return either a Site object or a bool to indicate the status or success/failure of an operation.
Feel free to adjust this interface to suit your specific needs (of course you would also need to amend the implementation of the concrete services class).
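For instance, one small addition you might make is a restart operation composed from the existing stop/start pair. The extension method below is my own sketch, not part of the article's interface:

```csharp
// Hypothetical extension method that composes the existing operations:
// stop the site first, then attempt the start only if the stop succeeded.
public static class SiteServicesExtensions
{
    public static bool RestartSite(this ISiteServices services, string siteName)
        => services.StopSite(siteName) && services.StartSite(siteName);
}
```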
Services implementation
Now let’s look at the implementation of the services class which can be defined as follows.
Note that I am using Serilog as the logging framework.
/// <summary>
/// Provides IIS website services.
/// </summary>
public class SiteServices : ISiteServices
{
    #region Readonlys

    private readonly ServerManager _server;

    #endregion

    #region Constructor

    /// <summary>
    /// Constructor.
    /// </summary>
    public SiteServices() => _server = new ServerManager();

    #endregion

    #region Methods

    /// <summary>
    /// Gets a <see cref="Site"/> object based on the specified site name.
    /// </summary>
    /// <param name="siteName">The site name</param>
    /// <returns><see cref="Site"/></returns>
    public Site GetSite(string siteName)
    {
        Log.Verbose("Getting site named: {0}", siteName);

        Site site = _server.Sites.FirstOrDefault(s => s.Name.Equals(siteName, StringComparison.OrdinalIgnoreCase));

        if (site != null)
        {
            Log.Verbose("Found site named: {0}", siteName);
        }
        else
        {
            Log.Warning("Failed to find site named: {0}", siteName);
        }

        return site;
    }

    /// <summary>
    /// Checks if a site with the specified name is running.
    /// </summary>
    /// <param name="siteName">The site name</param>
    /// <returns>True if the site is running, otherwise false</returns>
    public bool SiteIsRunning(string siteName)
    {
        Site site = GetSite(siteName);

        bool siteIsRunning = site?.State == ObjectState.Started;

        Log.Verbose("The '{0}' site {1}", siteName, siteIsRunning ? "is running" : "is not running");

        return siteIsRunning;
    }

    /// <summary>
    /// Starts the site with the specified name, if it is not already running.
    /// </summary>
    /// <param name="siteName">The site name</param>
    /// <returns>True if the site was started successfully, otherwise false</returns>
    public bool StartSite(string siteName)
    {
        Site site = GetSite(siteName);

        if (site == null) return false;

        bool started = false;

        if (site.State != ObjectState.Started)
        {
            Log.Verbose("Starting site named: {0}", siteName);

            site.Start();

            started = site.State == ObjectState.Started;

            if (started)
            {
                Log.Verbose("Started site named: {0}", siteName);
            }
            else
            {
                Log.Warning("Failed to start site named: {0}", siteName);
            }
        }
        else
        {
            Log.Verbose("Site named '{0}' is already started", siteName);
            started = true;
        }

        return started;
    }

    /// <summary>
    /// Stops the site with the specified name, if it is not already stopped.
    /// </summary>
    /// <param name="siteName">The site name</param>
    /// <returns>True if the site was stopped successfully, otherwise false</returns>
    public bool StopSite(string siteName)
    {
        Site site = GetSite(siteName);

        if (site == null) return false;

        bool stopped = false;

        if (site.State != ObjectState.Stopped)
        {
            Log.Verbose("Stopping site named: {0}", siteName);

            site.Stop();

            stopped = site.State == ObjectState.Stopped;

            if (stopped)
            {
                Log.Verbose("Stopped site named: {0}", siteName);
            }
            else
            {
                Log.Warning("Failed to stop site named: {0}", siteName);
            }
        }
        else
        {
            Log.Verbose("Site named '{0}' already stopped", siteName);
            stopped = true;
        }

        return stopped;
    }

    #region Implements IDisposable

    #region Private Dispose Fields

    private bool _disposed;

    #endregion

    /// <summary>
    /// Cleans up any resources being used.
    /// </summary>
    public void Dispose()
    {
        Dispose(true);

        // Take this object off the finalization queue to prevent
        // finalization code for this object from executing a second time.
        GC.SuppressFinalize(this);
    }

    /// <summary>
    /// Cleans up any resources being used.
    /// </summary>
    /// <param name="disposing">Whether or not we are disposing</param>
    protected void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                // Dispose managed resources.
                _server?.Dispose();
            }

            // Dispose any unmanaged resources here...

            _disposed = true;
        }
    }

    #endregion

    #endregion
}
In the above code, the
SiteServices class implements the operations specified by the
ISiteServices interface and uses an instance of the
ServerManager class underneath to accomplish the required tasks.
The
ServerManager object instance is initialised in the constructor.
The
SiteIsRunning,
StartSite, and
StopSite methods check the
ObjectState of the specified
Site to see if it is started or stopped.
Logging has been included to help diagnose issues in a production environment (you may or may not require this).
Since the
ISiteServices interface specifies that the
IDisposable interface must be implemented, the
SiteServices class includes a
Dispose method and uses a standardised pattern to ensure that resources i.e. the
ServerManager interface is properly disposed of at the appropriate time.
Services test run
Let’s look at an example of using the services class created in the previous section to stop and start an IIS website.
using ISiteServices services = new SiteServices(); string siteName = "Default Web Site"; Site site = services.GetSite(siteName); bool running = services.SiteIsRunning(siteName); bool stopped = services.StopSite(siteName); bool started = services.StartSite(siteName);
In the above example, specifying the
using keyword means that the
SiteServices object instance will be disposed of once it goes out of scope i.e. the code reaches the end of the current block or method.
After ‘newing up’ the
SiteServices, the code is setting up the name of the site we want to interact with and is then calling each of the available interface methods and storing the result of each method call.
As an example, with the logging in place and configured correctly, you will see output such as the following when calling the
StartSite method.
[23:02:43 VRB] Getting site named: Default Web Site
[23:02:43 VRB] Found site named: Default Web Site
[23:02:43 VRB] Starting site named: Default Web Site
[23:02:43 VRB] Started site named: Default Web Site
As a next step, you could look at amending the services to suit your specific needs. For example, perhaps you want to add the ability to create a new site or amend an aspect of a site via the services.
Depending on your specific needs, you may want the existing methods to accept a
Site object (for efficiency) or you may want to return the
ObjectState enum to get the exact status of a site.
Of course, there are trade-offs to consider in regards to flexibility, efficiency, and keeping your interface as generic as possible while also being practical.
However, now that you know the basics you can start to effectively use
ServerManager and other aspects of the Web Administration library to interact with IIS websites from your C# code.
Stopping…
In this article, I started off by explaining how to get access to the Web Administration library that is required to manage IIS using C#. I walked through how to do this for both .NET Framework and .NET Core applications.
Following this, I covered some examples of server management using the properties and methods exposed by the
ServerManager class.
Lastly, I provided an example of a services class that encapsulates IIS site management operations and includes logging to help provide valuable insights when automating the stopping and starting of IIS websites.
To see a working .NET Core example check out the accompanying GitHub repository.
I hope you enjoyed this post! Comments are always welcome and I respond to all questions.
If you like my content and it helped you out, please check out the button below 🙂 | https://jonathancrozier.com/blog/how-to-stop-and-start-iis-websites-using-c | CC-MAIN-2021-49 | refinedweb | 2,464 | 55.03 |
XQuery UDL
Hello,
I am desperately looking for an UDL for XQuery working with NPP 7.6
All I have found either fail to import or are useless (highlighting nothing)
TIA,
Robert.
@Robert-Kirkpatrick, welcome to the Notepad++ Community.
You mentioned:
NPP 7.6
If you’re really on the initial v7.6, and not a minor version like v7.6.4, I would highly recommend upgrading to v7.6.6. (v7.6 - v7.6.2 had major bugs). (If you are unsure, go to the ? menu, and look at Debug Info; if you need help interpreting, click copy debug info into clipboard and paste in your reply.)
If you could supply us with a small XQuery file (no proprietary data) that doesn’t get highlighted, plus paste in the UDL config file that you’re using (and where you put that file),
I don’t have any XQuery files (don’t even know what they are), but from the Notepad++ end of things, from a cursory glance, the UDL described at should work. With v7.6.6, instead of copying their
UserDefineLang.xmlinto
%AppData%\Notepad++, it can now go as
XQuery.xmlin the
%AppData%\Notepad++\userDefineLangs\folder (ie, save the downloaded
UserDefineLang.xmlas
%AppData%\Notepad++\userDefineLangs\XQuery.xml– which will avoid overwriting the default
UserDefineLang.xmlthat comes with Notepad++ v7.6.6).
If it doesn’t work, a screenshot of the lack of syntax highlighting would be nice. (It would also be good to confirm that the Languages > XQeury UDL has been selected.)
Thank you Peter,
I’m using NPP 7.6.6
I repeated the operations like you explained (using the github UDL), without sucess. No Languages > XQuery appears. No highlight of this file:
module namespace page = ‘’;
declare
%rest:path("/save1item")
%rest:POST("{$message}")
(: %rest:form-param(“amount”, “{$message}”, “(no data)”) :)
%rest:header-param(“User-Agent”, “{$agent}”)
function page:hello-postman(
$message as xs:string,
$agent as xs:string*
) as xs:string { $message };
(: as element(response) { <response type=‘form’>
<message>{ $message }</message>
<user-agent>{ $agent }</user-agent>
</response> }; :)
if you have followed @PeterJones 's guide correctly, your xquery example code should look as seen at this screenshot:
(if not, there are several possibilities, and things you can do, listed below.)
possibilities, if xquery still does not show up:
1) it might be a chance, that your windows explorer is set to
hide extensions for known file types.
this would mean, if you download , the filename would look like
userDefineLang.xmlto you, but in reality it will be saved as
userDefineLang.xml.txtwhich will not be recognised by notepad++ as a valid udl.xml file.
if this is the case, please set windows explorer to show all known extensions.
note: the procedure, for enabling extensions for known file types, depends on your windows version which we currently don’t know.
2) there might be a chance, that your notepad++ is not located at the expected place.
for that we would need you to paste your notepad++ debug information from the menu
? > debug info... > copy debug info into clipboardhere, to be able to tell you currently expected udl locations.
best regards
Thank you MC,
This proves that the UDL works fine - but still not for me !
File extension hiding is turned off.
Here is the debug info:
Notepad++ v7.6.6 (64-bit)
Build time : Apr 3 2019 - 23:52:32
Path : C:\Program Files\Notepad++\notepad++.exe
Admin mode : OFF
Local Conf mode : OFF
OS : Windows 10 (64-bit)
Plugins : mimeTools.dll NppConverter.dll
May be should local conf mode be ON ?
Kr,
Robert.
thank you for your detailed debug information.
May be should local conf mode be ON ?
no, local conf mode is just for the portable versions of notepad++.
it just serves to confine all configuration files to the notepad++ folder, instead of
%AppData%for every specific user.
this mode is not possible, if notepad++ is installed in
C:\Program Files\Notepad++.
your installation looks good to me as it is.
please try the following:
- right click and save
- make sure the saved file is called
userDefineLang.xml, if it is called
userDefineLang.xml.txtor anything else, rename it to
userDefineLang.xml.
- open up notepad++.
go to
language > define your language > import, select the
userDefineLang.xmlyou have just downloaded and press open, as seen at the screenshot below:
- it should now say
import successful.
- close notepad++ and reopen it, for the changes to take effect.
- open up an .xqy file or paste your example into a new tab.
- go to the
languagemenu and select
XQuery, as seen at the next screenshot:
- your xquery code should now be highlighted.
hope this helps.
note: the other file at , called
XQuery.xmlis just for the autocompletion and not needed for the highlighting. we can deal with that later.
I tried it many times, but I always get Fail to import
Robert.
very intriguing indeed.
please test the following, to verify, if a clean, portable notepad++ would work a folder, which is automatically called
npp.7.6.6.bin.x64and copy this folder to your desktop.
important note: make sure to close all instances of notepad++ that might be running, before starting the portable version at the next steps, to be sure you are using the portable version for this test.
open the
userDefineLangsfolder within the extracted
npp.7.6.6.bin.x64folder on your desktop.
(the path
%HomePath%\Desktop\npp.7.6.6.bin.x64\userDefineLangswill lead you there)
copy
userDefineLang.xmlyou have downloaded from >>> here <<< into
userDefineLangs.
start this portable version by double-clicking on
notepad++.exeinside the
npp.7.6.6.bin.x64folder.
go to the language menu and check if XQuery is present.
let’s hope this works for you.
if it does, we can search for the differences and eventually find the culprit on your system.
best regards.
Makes no difference, no new language, fail to import again.
Seem to get worse…:(
this is very strange, as the portable, stand alone notepad++ version always works for such cases.
last things you could try:
copy
userDefineLang.xmlyou have downloaded, without renaming, into
%AppData%\Notepad++and start your installed notepad++.
check if perhaps your browser or any anti malware tool has altered the content of
userDefineLang.xmlwhile downloading.
the contents of
userDefineLang.xmlshould begin with:
(due to allowed size limitations, i can’t post the whole file)
<NotepadPlus> <UserLang name="XQuery" ext="xqy"> <Settings> <Global caseIgnored="no" /> <TreatAsSymbol comment="no" commentLine="no" /> <Prefix words1="no" words2="yes" words3="no" words4="no" /> </Settings> <KeywordLists> <Keywords name="Delimiters">'<0'>0</Keywords> <Keywords name="Folder+">{</Keywords> <Keywords name="Folder-">}</Keywords> <Keywords name="Operators">' ! " ( ) * , . / ; @ [ ] < = ></Keywords> <Keywords name="Comment">1(: 2:) 0</Keywords> <Keywords name="Words1">after ancestor ancestor-or-self as ascending assert attribute before case cast child comment declare default descendant descendant-or-self descending element else every except following following-sibling follows for function if import in instance intersect item let module namespace node of only option parent precedes preceding preceding-sibling processing-instruction ref return returns satisfies schema self some sortby stable text then to treat typeswitch union variable version where xquery</Keywords> <Keywords name="Words2">$</Keywords>
other than than, i’m sorry, i am out of ideas :(
best regards.
@Robert-Kirkpatrick said:
Makes no difference, no new language, fail to import again.
Following @Meta-Chuh’s detailed instructions should work. You will need to give us more information if you want us to be able to help you.
It would help if you would take screenshots along the way, post them to imgur, and then link them using the syntax
. Or, even better, use a utility like ScreenToGif to take a “video” (animated gif) of the whole process, upload to imgur, and embed the animated gif the same way (
– make sure you use the .gif link, not the .gifv link) so we can see exactly what you’re doing
For example, I followed the download/import/restart/apply instructions to your sample .xqy file, and it highlighted properly. I recorded my process:
I might suspect an interference with an older installation of npp, which coexists on another drive used by older Windows versions (though currently disabled).
Is there some Registry setting that need cleanup?
Robert.
@Robert-Kirkpatrick said:
Is there some Registry setting that need cleanup?
Only if you didn’t follow @Meta-Chuh’s instructions.
He asked that you download the portable (zip or 7z) version of Notepad++, and extract that to your desktop. That portable copy doesn’t use any of the registry settings or the
%AppData%\Notepad++saved settings; it uses a completely clean edition. You’ll notice in my animation, the Debug Info shows
Local Conf Mode: ON, because I was using a portable edition.
To follow his instructions correctly, you should have run notepad++ from your portable download; looking at ? > Debug Info should show local configuration mode is on, and the notepad++.exe path should be relative to your desktop (or wherever you unzipped your portable). When you loaded your
.xqyfile, you should have used the File > Open menu of the portable Notepad++ rather than double-clicking or right-clicking on the .xqy in Explorer and running your installed version.
(To clarify: under an installed condition, there may be a mixup in registry settings given your old-vs-new installations; but Meta asked you to do the portable/zip/7z version of Notepad++, so that registry settings and
%AppData%\Notepad++settings would not get in the way.)
Until you start providing details, rather than giving one-to-two-sentence replies to our detailed posts, we won’t be able to help you any further, because we have run out of guesses as to what might be going wrong for you. If you follow these sequences we’ve described, it should work – we have shown in multiple ways that it does work. If it’s not working for you, you will have to do a better job of giving a step-by-step recounting of exactly what you did, with screenshots and/or animated gifs.
I did follow the instructions about the portable version strictly.
I will continue the investigation myself. | https://community.notepad-plus-plus.org/topic/17543/xquery-udl/10?lang=en-US | CC-MAIN-2021-10 | refinedweb | 1,687 | 65.62 |
Reinforcement Learning - Monte Carlo Methods
Playing Blackjack with Monte Carlo Methods¶
Introduction¶
In part 1, we considered a very simple problem, the n-armed bandit problem, and devised an appropriately very simple algorithm to solve it ($\epsilon$-greedy evaluation). In that case, the problem only has a single state: a choice among 10 actions with stationary probability distributions of rewards. Let's up the ante a bit and consider a more interesting problem with multiple (yet finite) states: the card game black jack (aka 21). Hunker down, this is a long one.
Rules and game-play of blackjack (look up the full rules elsewhere if necessary):
- There is a dealer and 1 or more players that independently play against the dealer.
- Each player is dealt 2 cards face-up. The dealer is dealt two cards, one face-up, one face-down.
- The goal is to get the sum of your cards' values as close to 21 as possible without going over.
- After the initial cards are dealt, each player can choose to 'stay' or 'hit' (ask for another card).
- The dealer always follows this policy: hit until cards sum to 17 or more, then stay.
- If the dealer is closer to 21, the dealer wins and the player loses, and vice versa.
So what's the state space for this problem? It's relatively large, much much larger than the single state in n-armed bandit. In reinforcement learning, a state is all information available to the agent (the decision maker) at a particular time $t$. The reason why the n-armed bandit state space includes just 1 state is because the agent is only aware of the same 10 actions at any time, no new information is available nor do the actions change.
So what are all the possible combinations of information available to the agent (the player) in blackjack? Well, the player starts with two cards, so there are all the possible combinations of any 2 playing cards. Additionally, the player knows one of the two cards that the dealer has. Thus, there are a lot of possible states (around 200). As with any RL problem, our ultimate goal is to find the best policy to maximize our rewards.
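To make that rough count concrete, we can enumerate the state space directly: the player total relevant to decision-making runs from 11 to 21 (below 11 the correct move is always to hit), the player either does or doesn't hold a useable ace, and the dealer shows one of 10 distinct card values. A quick sketch of the enumeration (this mirrors the initStateSpace function we'll write below):

```python
# Enumerate the RL state space: (player total, useable ace?, dealer's face-up card)
states = [(total, ace, dealer_card)
          for total in range(11, 22)        # player totals 11 through 21
          for ace in (False, True)          # with or without a useable ace
          for dealer_card in range(1, 11)]  # dealer shows Ace (1) through 10

print(len(states))  # 220 states -- "around 200"
```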
A policy is roughly equivalent to a strategy. There are reinforcement learning methods that essentially rely on brute force to compute every possible action-state pair (every possible action in a given state) and the rewards received to find an optimal policy, but for most of the problems we care about, the state-action space is much too large for brute force methods to be computationally feasible. Thus we must rely on experience, i.e. playing the game, trying out various actions and learning what seems to result in the greatest reward returns; and we need to devise an algorithm that captures this experiential learning process.
The most important take-aways from part 1 and 2 are the concepts of state values, state-action values, and policies. Reinforcement learning is in the business of determining the value of states or of actions taken in a state. In our case, we will primarily concern ourselves with action values (value of an action taken in a given state) because it is more intuitive in how we can make an optimal action. I find the value of being in a given state less intuitive because the value of a state depends on your policy. For example, what is the value of being in a state of a blackjack game where your cards total to 20? Most people would say that's a pretty good position to be in, but it's only a good state if your policy is to stay and not hit. If your policy is to hit when you have 20 (of course it's a bad policy), then that state isn't very good. On the other hand, we can ask the question of, what's the value of hitting when I have 20 versus the value of staying when I have 20, and then just choose whichever action has the highest value. Of course staying would produce the highest value in this state (on average).
Our main computational effort, therefore, is in iteratively improving our estimates for the values of states or state-action pairs. In parts 1 and 2, we keep track of every single state-action pair we encounter, and record the rewards we receive for each and average them over time. Thus, over many iterations, we go from knowing nothing about the value of state-actions to knowing enough to be able to choose the highest value actions. Problems like the n-armed bandit problem and blackjack have a small enough state or state-action space that we can record and average rewards in a lookup table, giving us the exact average rewards for each state-action pair. Most interesting problems, however, have a state space that is continuous or otherwise too large to use a lookup table. That's when we must use function approximation (e.g. neural networks) methods to serve as our $Q$ function in determining the value of states or state-actions. We will have to wait for part 3 for neural networks.
Learning with Markov Decision Processes¶
A Markov decision process (MDP) is a decision that can be made knowing only the current state, without knowledge of or reference to previous states or the path taken to the current state. That is, the current state contains enough information to choose optimal actions to maximize future rewards. Most RL algorithms assume that the problems to be learned are (at least approximately) Markov decision processes. Blackjack is clearly an MDP because we can play the game successfully by just knowing our current state (i.e. what cards we have + the dealer's one face-up card). Google DeepMind's deep Q-learning algorithm learned to play Atari games from just raw pixel data and the current score. Does raw pixel data and the score satisfy the Markov property? Not exactly. Say the game is Pacman, if our state is the raw pixel data from our current frame, we have no idea if that enemy a few tiles away is approaching us or moving away from us, and that would strongly influence our choice of actions to take. This is why DeepMind's implementation actually feeds in the last 4 frames of gameplay, effectively changing a non-Markov decision process into an MDP. With the last 4 frames, the agent has access to the direction and speed of each enemy (and itself).
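One common fix, as in the DeepMind example, is to fold a short history of observations into the state itself so that the result is (at least approximately) Markov. A minimal sketch of the idea, with placeholder strings standing in for raw pixel frames:

```python
from collections import deque

# Keep only the last 4 raw observations; together they form the agent's state,
# giving it access to motion information (direction/speed) a single frame lacks.
frames = deque(maxlen=4)

for t in range(10):
    observation = f"frame_{t}"  # stand-in for raw pixel data
    frames.append(observation)

state = tuple(frames)
print(state)  # ('frame_6', 'frame_7', 'frame_8', 'frame_9')
```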
Terminology & Notation Review¶
- $Q_k(s, a)$ is the function that accepts an action and state and returns the value of taking that action in that state at time step $k$. This is fundamental to RL. We need to know the relative values of every state or state-action pair.
- $\pi$ is a policy, a stochastic strategy or rule to choose action $a$ given a state $s$. Think of it as a function, $\pi(s)$, that accepts state, $s$ and returns the action to be taken. There is a distinction between the $\pi(s)$ function and a specific policy $\pi$. Our implementation of $\pi(s)$ as a function is often to just choose the action $a$ in state $s$ that has the highest average return based on historical results, $\arg\max_a Q(s, a)$. As we gather more data and these average returns become more accurate, the actual policy $\pi$ may change. We may start out with a policy of "hit until total is 16 or more then stay" but this policy may change as we gather more data. Our implemented $\pi(s)$ function, however, is programmed by us and does not change.
- $G_t$, return. The expected cumulative reward from starting in a given state until the end of an episode (i.e. game play), for example. In our case we only give a reward at the end of the game, there are no rewards at each time step or move.
- Episode: The full sequence of steps leading to a terminal state and receiving a return. E.g. from the beginning of a blackjack game until the terminal state (someone winning) constitutes an episode of play.
- $v_\pi$, a function that determines the value of a state given a policy $\pi$. We do not really concern ourselves with state values here, we focus on action values.
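To make the $\pi(s)$ description above concrete, here is a minimal $\epsilon$-greedy sketch. The function name, toy Q table, and hard-coded action set (0 = stay, 1 = hit) are illustrative assumptions; we build the real lookup table later:

```python
import random

def pi(state, q_table, epsilon=0.1):
    """Epsilon-greedy policy: usually pick the highest-valued action,
    but take a random action with probability epsilon (exploration)."""
    actions = (0, 1)  # 0 = stay, 1 = hit
    if random.random() < epsilon:
        return random.choice(actions)                            # explore
    return max(actions, key=lambda a: q_table[(state, a)])       # exploit

# Usage with a toy table: with a total of 20, staying has the higher value
q = {((20, False, 10), 0): 0.5, ((20, False, 10), 1): -0.8}
print(pi((20, False, 10), q, epsilon=0.0))  # 0 (stay) when purely greedy
```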
Monte Carlo & Tabular Methods¶
Monte Carlo is going to feel very familiar if you remember how we solved the n-armed bandit problem in part 1. We will store the history of our state-action pairs associated with their values in a table, and then refer to this table during learning to calculate our expected rewards, $Q_k$.
From Wikipedia, Monte Carlo methods "rely on repeated random sampling to obtain numerical results." We'll use random sampling of states and state-action pairs, observe rewards, and then iteratively revise our policy, which will hopefully converge on the optimal policy as we explore every possible state-action pair.
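One practical detail of averaging returns over many samples: instead of storing every observed return and recomputing the mean each time, we can keep a running average with the incremental update $Q_{n+1} = Q_n + \frac{1}{n}\left(G_n - Q_n\right)$. It produces the same answer, and it is exactly the update our updateQtable function will use below:

```python
# Incremental average vs. full average over the same sequence of rewards
rewards = [1, -1, 1, 1, 0, -1, 1]

q, count = 0.0, 0
for g in rewards:
    count += 1
    q = q + (1.0 / count) * (g - q)  # incremental mean update

full_mean = sum(rewards) / len(rewards)
print(abs(q - full_mean) < 1e-9)  # True: the two averages agree
```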
Here are some important points:
- We will assign a reward of +1 to winning a round of blackjack, -1 for losing, and 0 for a draw.
- We will establish a table (python dictionary) where each key corresponds to a particular state-action pair and each value is the value of that pair. i.e. the average reward received for that action in that state.
- The state consists of the player's card total, whether or not the player has a useable ace, and the dealer's one face-up card.
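Since the game implementation below encodes its status as 2 (player wins), 3 (draw), or 4 (player loses), the +1/0/-1 reward falls out of a single subtraction, 3 − status, a small trick the code below relies on:

```python
def calcReward(status):
    # status: 2 = player wins, 3 = draw, 4 = player loses
    return 3 - status

print(calcReward(2), calcReward(3), calcReward(4))  # 1 0 -1
```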
Blackjack Game Implementation¶
Below I've implemented a blackjack game. I think I've commented it well enough to be understood but it's not critical that you understand the game implementation since we're just concerned with how to learn to play the game with machine learning.
This implementation is completely functional and stateless. I mean that this implementation is just a group of functions that accept data, transform that data and return new data. I intentionally avoided using OOP classes because I think it complicates things and I think functional-style programming is useful in machine learning (see my post about computational graphs to learn more). It is particularly useful in our case because it demonstrates how blackjack is an MDP. The game does not store any information, it is stateless. It merely accepts states and returns new states. The player is responsible for saving states if they want.
The state is just a nested Python tuple. The first element is the player's hand: a card total paired with a boolean of whether or not the player has a useable ace. The second element is the dealer's hand in the same (total, useable ace) format. The last element is a single integer that represents the status of the state (whether the game is in progress, the player has won, the dealer has won, or it was a draw).
We actually could implement this in a more intuitive way where we just store each player's cards and not whether or not they have a useable ace (useable means, can the ace be an 11 without losing the game by going over 21, because aces in blackjack can either be a 1 or an 11). However, as you'll see, storing the player card total and an useable ace boolean is equivalent and yet compresses our state space (without losing any information) so we can have a smaller lookup table.
import random

#each value card has a 1:13 chance of being selected (we don't care about suits for blackjack)
#cards (value): Ace (1), 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack (10), Queen (10), King (10)
def randomCard():
    card = random.randint(1,13)
    if card > 10:
        card = 10
    return card

#A hand is just a tuple e.g. (14, False), a total card value of 14 without a useable ace
#accepts a hand; if the Ace can be an 11 without busting the hand, it's useable
def useable_ace(hand):
    val, ace = hand
    return ((ace) and ((val + 10) <= 21))

def totalValue(hand):
    val, ace = hand
    if (useable_ace(hand)):
        return (val + 10)
    else:
        return val

def add_card(hand, card):
    val, ace = hand
    if (card == 1):
        ace = True
    return (val + card, ace)

#The dealer is first dealt a single card; this method finishes off his hand
def eval_dealer(dealer_hand):
    while (totalValue(dealer_hand) < 17):
        dealer_hand = add_card(dealer_hand, randomCard())
    return dealer_hand

#state: (player total, useable_ace), (dealer total, useable ace), game status; e.g. ((15, True), (9, False), 1)
#stay or hit => dec == 0 or 1
def play(state, dec):
    #evaluate
    player_hand = state[0] #val, useable ace
    dealer_hand = state[1]
    if dec == 0: #action = stay
        #evaluate game; dealer plays
        dealer_hand = eval_dealer(dealer_hand)
        player_tot = totalValue(player_hand)
        dealer_tot = totalValue(dealer_hand)
        status = 1
        if (dealer_tot > 21):
            status = 2 #player wins
        elif (dealer_tot == player_tot):
            status = 3 #draw
        elif (dealer_tot < player_tot):
            status = 2 #player wins
        elif (dealer_tot > player_tot):
            status = 4 #player loses
    elif dec == 1: #action = hit
        #if hit, add new card to player's hand
        player_hand = add_card(player_hand, randomCard())
        d_hand = eval_dealer(dealer_hand)
        player_tot = totalValue(player_hand)
        status = 1
        if (player_tot == 21):
            if (totalValue(d_hand) == 21):
                status = 3 #draw
            else:
                status = 2 #player wins!
        elif (player_tot > 21):
            status = 4 #player loses
        elif (player_tot < 21):
            #game still in progress
            status = 1
    state = (player_hand, dealer_hand, status)
    return state

#start a game of blackjack, returns a random initial state
def initGame():
    status = 1 #1=in progress; 2=player won; 3=draw; 4=dealer won/player loses
    player_hand = add_card((0, False), randomCard())
    player_hand = add_card(player_hand, randomCard())
    dealer_hand = add_card((0, False), randomCard())
    #evaluate if player wins from first hand
    if totalValue(player_hand) == 21:
        if totalValue(dealer_hand) != 21:
            status = 2 #player wins after first deal!
        else:
            status = 3 #draw
    state = (player_hand, dealer_hand, status)
    return state
There you have it. We've implemented a simplified blackjack game (no double downs or splitting) with just a few functions that basically just consist of some if-else conditions. Here's some sample game-play so you know how to use it.
state = initGame()
print(state)
((7, False), (5, False), 1)
state = play(state, 1) #player has a total of 7, let's hit
print(state)
((9, False), (5, False), 1)
state = play(state, 1) #player has a total of 9, let's hit
print(state)
((15, False), (5, False), 1)
state = play(state, 0) #player has a total of 15, let's stay
print(state)
((15, False), (20, False), 4)
Damn, I lost. Oh well, that should demonstrate how to use the blackjack game. As a user, we only have to concern ourselves with the initGame() and play() functions.
initGame() just creates a random state by dealing the player 2 random cards and the dealer one random card and setting the game status to 1 ('in progress').
play() accepts a state and an action (either 0 or 1, for 'stay' and 'hit', respectively). Please keep in mind the distinction between a blackjack game state and the state with respect to our Reinforcement Learning (RL) algorithm. We will compress the states a bit by ignoring the useable ace boolean for the dealer's hand because the dealer only shows a single card and if it's an ace the player has no idea if it's useable or not, so it offers no additional information to us.
Time for Reinforcement Learning¶
Let's start the real fun: building our Monte Carlo-based reinforcement learning algorithm. Here's the algorithm in words/math (adapted from the Sutton & Barto text):
- Choose a random state $S_0 \in \mathcal{S}$ (some state in the set of all possible states); this is what initGame() does
- Take action $A_0 \in \mathcal{A}(S_0)$ (take some action in the set of all possible actions available in state $S_0$)
- Generate a complete episode starting from $S_0, A_0$ following policy $\pi$
- For each pair $s, a$ occurring in the episode:
- $G = \text{returns/rewards following the first occurrence of } s, a$
- If this is the first experience of $s, a$ in any episode, simply store $G$ in our $Q(s, a)$ table. If it's not the first time, then recalculate the average returns and store in $Q(s, a)$.
- For each state $s$ in the episode: We use an $\epsilon$-greedy action selection process such that $\pi(s) = \arg\max_a Q(s, a)$ most of the time, but with probability $\epsilon$, $\pi(s)$ returns a random action $a \in \mathcal{A}(s)$ (basically the same as our n-armed bandit policy function). Recall that we use an epsilon-greedy policy function to ensure we have a good balance of exploration versus exploitation.
In essence, with Monte Carlo we are playing randomly initialized games, sampling the state-action pair space and recording returns. In doing so, we can iteratively update our policy $\pi$.
Let's get to coding.
import numpy as np

#Create a list of all the possible states
def initStateSpace():
    states = []
    for card in range(1,11):
        for val in range(11,22):
            states.append((val, False, card))
            states.append((val, True, card))
    return states

#Create a dictionary (key-value pairs) of all possible state-actions and their values
#This creates our Q-value look up table
def initStateActions(states):
    av = {}
    for state in states:
        av[(state, 0)] = 0.0
        av[(state, 1)] = 0.0
    return av

#Setup a dictionary of state-actions to record how many times we've experienced
#a given state-action pair. We need this to re-calculate reward averages
def initSAcount(stateActions):
    counts = {}
    for sa in stateActions:
        counts[sa] = 0
    return counts

#This calculates the reward of the game, either +1 for winning, 0 for draw, or -1 for losing
#We can determine this by simply subtracting the game status value from 3
def calcReward(outcome):
    return 3 - outcome

#This recalculates the average rewards for our Q-value look-up table
def updateQtable(av_table, av_count, returns):
    for key in returns:
        av_table[key] = av_table[key] + (1.0 / av_count[key]) * (returns[key] - av_table[key])
    return av_table

#Returns Q-values/avg rewards for each action given a state
def qsv(state, av_table):
    stay = av_table[(state, 0)]
    hit = av_table[(state, 1)]
    return np.array([stay, hit])

#Converts a game state of the form ((player total, ace), (dealer total, ace), status)
#to a condensed state we'll use for our RL algorithm: (player total, useable ace, dealer card)
def getRLstate(state):
    player_hand, dealer_hand, status = state
    player_val, player_ace = player_hand
    return (player_val, player_ace, dealer_hand[0])
Above we've defined basically all the functions we need to run our Monte Carlo algorithm. We initialize our state and state-action space, define methods to calculate rewards and update our state-action table (Q-value table). Below is where we'll actually run 5,000,000 Monte Carlo simulations of blackjack and fill out our Q-value table.
import random

epochs = 5000000 #takes just a minute or two on my Macbook Air
epsilon = 0.1
state_space = initStateSpace()
av_table = initStateActions(state_space)
av_count = initSAcount(av_table)
for i in range(epochs):
    #initialize new game; observe current state
    state = initGame()
    player_hand, dealer_hand, status = state
    #if player's total is less than 11, increase total by adding another card
    #we do this because whenever the player's total is less than 11, you always hit no matter what
    #so we don't want to waste compute cycles on that subset of the state space
    while player_hand[0] < 11:
        player_hand = add_card(player_hand, randomCard())
        state = (player_hand, dealer_hand, status)
    rl_state = getRLstate(state) #convert to compressed version of state
    #setup dictionary to temporarily hold the current episode's state-actions
    returns = {} #state, action, return
    while(state[2] == 1): #while in current episode
        #epsilon greedy action selection
        act_probs = qsv(rl_state, av_table)
        if (random.random() < epsilon):
            action = random.randint(0,1)
        else:
            action = np.argmax(act_probs) #select an action
        sa = ((rl_state, action))
        returns[sa] = 0 #add a-v pair to returns list, default value to 0
        av_count[sa] += 1 #increment counter for avg calc
        state = play(state, action) #make a play, observe new state
        rl_state = getRLstate(state)
    #after an episode is complete, assign rewards to all the state-actions that took place in the episode
    for key in returns:
        returns[key] = calcReward(state[2])
    av_table = updateQtable(av_table, av_count, returns)
print("Done")
Done
Okay, so we just ran a Monte Carlo simulation of blackjack 5,000,000 times and built up an action-value (Q-value) table that we can use to determine what the optimal action is when we're in a particular state.
How do we know if it worked? Well, below I've written some code that will show us a 3D plot of the dealer's card, the player's total, and the Q-value for that state (limited to when the player does not have a usable ace). You can compare it to a very similar plot shown in the Sutton & Barto text on page 117.
#3d plot of state-value space where no useable Aces are present
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
%matplotlib inline

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('Dealer card')
ax.set_ylabel('Player sum')
ax.set_zlabel('State-Value')

x,y,z = [],[],[]
for key in state_space:
    if (not key[1] and key[0] > 11 and key[2] < 21):
        y.append(key[0])
        x.append(key[2])
        state_value = max([av_table[(key, 0)], av_table[(key, 1)]])
        z.append(state_value)
ax.azim = 230
ax.plot_trisurf(x,y,z, linewidth=.02, cmap=cm.jet)
Looks pretty good to me. This isn't a major point, but notice that I plotted the State-Value on the z-axis, not an action value. I calculated the state value by simply taking the largest action value for a state from our state-action lookup table. Thus, the value of a state is equivalent to the average rewards following the best action.
Below I've used our action-value lookup table to build a crappy looking table that displays the optimal action one should take in a game of blackjack given you're in a particular state. The left column lists the possible player totals (given no usable ace) and the top row lists the possible dealer cards. So you can look up the best action to take if, say, you have a total of 16 and the dealer is showing a 7 (the answer is "hit"). You can compare it to Wikipedia's article on blackjack, which has a similar table. As you can tell, ours is pretty accurate.
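For reference, building such a table can be sketched like this (a hypothetical snippet of mine, reusing the `av_table` key format from the earlier listings; the all-zeros stub dictionary exists only so the snippet runs standalone — real values come from the simulation above):

```python
import numpy as np

# Derive the greedy action ("H" = hit, "S" = stay) for every
# (player total, dealer card) state with no usable ace, as rows of text.
def policy_table_lines(av_table):
    lines = ["    " + " ".join("%2d" % card for card in range(1, 11))]  # dealer cards
    for total in range(11, 21):  # player totals
        row = []
        for card in range(1, 11):
            state = (total, False, card)  # False = no usable ace
            best = np.argmax([av_table[(state, 0)], av_table[(state, 1)]])
            row.append(" S" if best == 0 else " H")
        lines.append("%2d " % total + " ".join(row))
    return lines

# Stub table (all zeros) so this snippet runs standalone; with all-equal
# values np.argmax picks action 0 ("stay") everywhere.
stub = {((t, ace, c), a): 0.0 for t in range(11, 22)
        for ace in (False, True) for c in range(1, 11) for a in (0, 1)}
for line in policy_table_lines(stub):
    print(line)
```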
Conclusion & What's Next
Here we've covered Monte Carlo reinforcement learning methods, which depend on stochastically sampling the environment and iteratively improving a policy $\pi$ after each episode. One disadvantage of Monte Carlo methods is that we must wait until the end of an episode to update our policy. For some types of problems (like blackjack) this is okay, but in a lot of cases it makes more sense to be able to learn at each time step (immediately after each action is taken).
The whole point of the Monte Carlo simulations was to build an action-value table. The action-value table basically is our $Q(s,a)$ function: you give it a state and an action and it just goes and looks up the value in the table. The most important thing to learn from all of this is that in essentially any RL method, our goal is to find an optimal $Q$ function. Most of the differences between RL algorithms revolve around differences in determining Q-values. The policy function is straightforward: just pick the best action using $Q(s,a)$. We might throw in a softmax or something to add in some randomness, but there's not a lot more to $\pi(s)$.
In the next part, I will abandon tabular learning methods and cover Q-learning (a type of temporal difference (TD) algorithm) using a neural network as our $Q$ function (what we've all been waiting for).
This was a pretty meaty post so please email me (outlacedev@gmail.com) if you spot any errors or have any questions or comments.
Download this IPython Notebook
References:
- "Reinforcement Learning: An Introduction" Sutton & Barto
- (Adapted some code from here)
Apache2::AuthCookieDBI - An AuthCookie module backed by a DBI database.
This is version 2.03
Starting with version 2.03 the module is in the Apache2::* namespace, i.e. Apache2::AuthCookieDBI. Prior versions were named Apache::AuthCookieDBI.
Apache::* versions:
# In httpd.conf or .htaccess
PerlModule Apache2::AuthCookieDBI
</Files>
This module is an authentication handler that uses the basic mechanism provided by Apache2::AuthCookie, with a DBI database supplying the directives required for any kind of Apache2 authentication.
Copyright (C) 2002 SF Interactive. Copyright (C) 2003-2004 Jacob Davies. Copyright (C) 2004-2005 Matisse Enzer.
Latest version:
Apache2::AuthCookie(1) Apache2::Session(1) | http://search.cpan.org/~matisse/Apache2-AuthCookieDBI-2.03/AuthCookieDBI.pm | CC-MAIN-2018-17 | refinedweb | 104 | 53.17 |
Hello everybody,
i’m new to quasar and now I need to implement a simple state. Vuex is an option but it is too much for my needs.
I don’t no where to import my store.js. Currently this is my store.js:
//This is store.js
export default {
  store: {
    state: {
      message: 'Hello!'
    },
    duplicateMessage: function() {
      this.state.message += this.state.message;
    },
    halfMessage: function() {
      this.state.message = this.state.message.substr(0, this.state.message.length/2);
    }
  }
}
So I have two questions and I hope anyone can help me with this simple question.
- Where should I import my store.js?
- How can I use it inside my components? (How can I emit the events for changing the state)
- benoitranque
Simplest store:
export default {
  message: ''
}
use in component/anywhere
import store from './store' // make sure this points to the correct relative path

export default {
  methods: {
    met () {
      store.message = 'hello world'
    }
  }
}
another component:
import store from './store'

export default {
  methods: {
    met () {
      console.log(store.message)
    }
  }
}
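A quick plain-JS sketch (names made up, no Vue involved) of why this pattern works: a module is evaluated once, so every file that imports it gets a reference to the same object.

```javascript
// Simulate two "components" sharing one module-level store object.
const store = { message: '' }

// Component A writes to the store...
function componentA () {
  store.message = 'hello world'
}

// ...and component B reads the very same object.
function componentB () {
  return store.message
}

componentA()
console.log(componentB()) // 'hello world'
```

One caveat: mutating a plain object this way updates the data, but a component template only re-renders from it if the object is also made reactive, e.g. by returning it from the component's `data ()`.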
@benoitranque Thank you very much. You helped me a lot! | https://forum.quasar-framework.org/topic/1648/how-to-implement-simple-state | CC-MAIN-2021-10 | refinedweb | 176 | 70.9 |
Microsoft.UI.Xaml.Controls.Primitives Namespace
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Defines the components that comprise WinUI controls, or otherwise support the control composition model.
Note
This namespace requires the Microsoft.UI.Xaml.Controls NuGet package, a part of the Microsoft Windows UI Library.
This documentation applies to WinUI 2 for UWP (for WinUI in the Windows App SDK, see the Windows App SDK namespaces).
Classes
Interfaces
Enums
Examples
See the XAML Controls Gallery sample app for examples of WinUI features and controls.
If you have the XAML Controls Gallery app installed, open the app to see the controls in action.
If you don't have the XAML Controls Gallery app installed, get the WinUI 2.x version from the Microsoft Store.
You can also view, clone, and build the XAML Controls Gallery source code from GitHub (switch to the WinUI 3 Preview branch for WinUI 3 Preview controls and features).
Bugtraq mailing list archives
----- Forwarded message from Michael Widenius <monty () monty pp sci fi> -----
From: Michael Widenius <monty () monty pp sci fi>
Message-ID: <14497.29884.464639.784337 () monty pp sci fi>
Date: Wed, 9 Feb 2000 16:07:56 +0200 (EET)
To: Elias Levy <aleph1 () securityfocus com>
Subject: Remote access vulnerability in all MySQL server versions
X-Mailer: VM 6.72 under 21.1 (patch 7) "Biscayne" XEmacs Lucid
Reply-To: monty () tcx se
Hi!
"Elias" == Elias Levy <aleph1 () securityfocus com> writes:
Elias> Hi,
Elias> Below you find a security advisory i wrote concerning a vulnerability found in
Elias> all (known to me) mysql server versions, including the latest one.
Elias> As mysql is a widely used sql platform, i strongly advise everyone using it
Elias> to read it, and fix where appropriate.
Elias> This email has been bcc'd to the mysql bug list, and other appropriate parties.
Elias> Greets,
Elias> Robert van der Meulen/Emphyrio
Elias> .Introduction.
Elias> There exists a vulnerability in the password checking routines in the latest
Elias> versions of the MySQL server, that allows any user on a host that is allowed
Elias> to connect to the server, to skip password authentication, and access databases.
Elias> For the exploit to work, a valid username for the mysql server is needed, and
Elias> this username must have access to the database server, when connecting from
Elias> the attacking host.
<cut>
Thanks to for finding this!
The official patch to fix this follows:
*** /my/monty/master/mysql-3.23.10-alpha/sql/sql_parse.cc Sun Jan 30 10:42:42 2000
--- ./sql_parse.cc Wed Feb 9 16:05:49 2000
***************
*** 17,22 ****
--- 17,24 ----
#include <m_ctype.h>
#include <thr_alarm.h>
+ #define SCRAMBLE_LENGTH 8
+
extern int yyparse(void);
extern "C" pthread_mutex_t THR_LOCK_keycache;
***************
*** 188,195 ****
end=strmov(buff,server_version)+1;
int4store((uchar*) end,thd->thread_id);
end+=4;
! memcpy(end,thd->scramble,9);
! end+=9;
#ifdef HAVE_COMPRESS
client_flags |= CLIENT_COMPRESS;
#endif /* HAVE_COMPRESS */
--- 190,197 ----
end=strmov(buff,server_version)+1;
int4store((uchar*) end,thd->thread_id);
end+=4;
! memcpy(end,thd->scramble,SCRAMBLE_LENGTH+1);
! end+=SCRAMBLE_LENGTH +1;
#ifdef HAVE_COMPRESS
client_flags |= CLIENT_COMPRESS;
#endif /* HAVE_COMPRESS */
***************
*** 268,273 ****
--- 270,277 ----
char *user= (char*) net->read_pos+5;
char *passwd= strend(user)+1;
char *db=0;
+ if (passwd[0] && strlen(passwd) != SCRAMBLE_LENGTH)
+ return ER_HANDSHAKE_ERROR;
if (thd->client_capabilities & CLIENT_CONNECT_WITH_DB)
db=strend(passwd)+1;
if (thd->client_capabilities & CLIENT_INTERACTIVE)
I will make a new MySQL release with this fix during this week!
Elias> .Commentary.
Elias> I think this exploit should not be a very scary thing to people that know
Elias> how to secure their servers.
Elias> In practice, there's almost never a need to allow the whole world to connect
Elias> to your SQL server, so that part of the deal should be taken care of.
Elias> As long as your MySQL ACL is secure, this problem doesn't really occur (unless
Elias> your database server doubles as a shell server).
Elias> We have also located several other security bugs in mysql server/client. These
Elias> bugs can only be exploited by users who have a valid username and password.
Elias> We will send these to the mysql maintainers, and hope they'll come
Elias> with a fix soon.
Yes, please send them to me or mysql_all () mysql com (our internal
developers list).
Regards,
Monty
----- End forwarded message -----
--
Elias Levy
SecurityFocus.com
Sounds like your trying to inject a piece of code somewhere. I don't think the forum encourages hacking practices.
OK, here's what I want to do: I want to put a file in a console application (during compilation/coding, not while running) and when it runs it saves the file in the same place it was run from. Like moving a movie as an exe to hide it while it is in
Add the file to the project as a resource. Of course, if you don't have a legitimate reason to do so, then by all means don't for the good of everyone.
Sorry, got cut off; I'm in the computer lab typing this and lunch ended. Well, it's not hacking... more like hiding (I have no reason to do so, it just sounds interesting), like 7zip's self-extracting archive (but with only one file).
OK, and also: if I could duplicate the code many times I would make a fast, easily replicatable archive. The downside would be that folders are tough to do, aren't they?
I haven't really done this myself but my hypothesis is you should be able to store the directory hierarchy information in your application, and use it to make new folders and then place the actual data files in them.
Oh I see in your sig you're using Code::Blocks, I don't know how to add resources to the project using that compiler/IDE. Hopefully you can find that yourself.
In visual studio you right click in the solution explorer and select "Add->Resource"
and then you click "Import" and select the path to your resource, after which you must name your new resource type.
Afterward something like this is in order:
#include "stdafx.h" int _tmain(int argc, _TCHAR* argv[]) { // HRSRC hRes = FindResourceA(NULL,MAKEINTRESOURCEA(IDR_EXECUTABLE1),"Executable"); assert(hRes != NULL); // HGLOBAL res = LoadResource(NULL, hRes); assert(res != NULL); //Obtain a pointer to the first byte of the resource. //MSDN Quote: /* Note LockResource does not actually lock memory; it is just used to obtain a pointer to the memory containing the resource data. The name of the function comes from versions prior to Windows XP, when it was used to lock a global memory block allocated by LoadResource. */ // LPVOID pRes = LockResource(res); assert(pRes != NULL); // DWORD resourceSize = SizeofResource(NULL, hRes); assert(resourceSize != 0); //Write file to disk. std::cout << "The resource file size is " << resourceSize << " bytes." << std::endl; std::cout << "Writing file to current working directory:\n \""; char pDir[MAX_PATH]; GetCurrentDirectoryA(MAX_PATH,pDir); std::cout << pDir << "\""<< std::endl; std::ofstream outFile("Executable.exe",std::ios::binary); outFile.write((const char *)pRes, resourceSize); outFile.close(); std::cin.get(); return 0; }
I don't think this is the best solution to the problem, but then again without installing third-party software it may be..
Usually .exe is more restricted than .obj though.
Solution 1. Rename the .obj file.
If that doesn't work you may need to encrypt it, which you can write an application to do (it just needs to obfuscate it somehow and then transform it back to the original).
Normally this is done via renaming the file, then archiving it with a password (using third party software like winrar or peazip). This will prevent determining the type of file from it's data and file name.
Solution 2. What you have proposed, basically the example uses Windows API functions necessary to interact with the resource stored inside the executable.
So it seems to me you can either,
1) Encrypt the file's bytes using XOR or some other method, and then decrypt it (using ifstream/ofstream).
2) Take the time to read the MSDN page links I have included, and include it in an executable as a resource.
OK, like I said, that's not my scenario. The point I want to focus on is how to do it via ifstream and ofstream, to make something like 7zip's self-extracting archive (without encryption or compression).
How would I make a program to do that XOR encryption? It has always sounded cool but I couldn't find any good documentation on it. I want the encryption program to be a console app (just like the file-moving thing this thread was made for).
PS: Sorry, not trying to sound rude. I know you're trying your best and still have to fight through all of my iPod's typos, so thanks. (That wasn't a "thanks, you can leave now"; I still want you to answer the question.)

Edit: Ya know, I'll start another thread for the XOR thing; you just focus your efforts on the question here.
I don't know how 7zip makes those.
Visual Studio does provide a type of installer project though.
XOR encryption:
Oh, I know what XOR encryption is; what I don't know is how to apply it in a program (not "what to use this for" but how to actually do it). This is awkward. Man, that Wikipedia article was confusing. I started a thread on XOR encryption, so back to the subject. If you can't find anything (I couldn't; Google, Gigablast and AllTheInternet just gave me articles on text files), I know you tried your hardest, and I don't need this for a school project or anything; I just thought it sounded cool... Um, are there any other file IOs than ofstream and ifstream?
Edit: I know about iexpress.exe (start,run,iexpress.exe in xp) I just wanted to make my own for console apps.
Of course there are other ways to do file IO, namely the underlying Operating System API (which is the Windows API in this case).
If you do intend to work with it, please try to develop good coding strategies soon or it will become very convoluted as the API is pure C.
I would wager the OS API is more powerful than the C++ std lib, but I may be wrong and they are | https://www.daniweb.com/programming/software-development/threads/384591/c-file | CC-MAIN-2016-50 | refinedweb | 1,045 | 68.6 |
In Java, the String class encapsulates an array of char. Put simply, String is an array of characters used to compose words, sentences, or any other data you want.
Encapsulation is one of the most powerful concepts in object-oriented programming. Because of encapsulation, you don’t need to know how the String class works; you just need to know what methods to use on its interface.
When you look at the String class in Java, you can see how the array of char is encapsulated:
public String(char value[]) {
    this(value, 0, value.length, null);
}
To understand encapsulation better, consider a physical object: a car. Do you need to know how the car works under the hood in order to drive it? Of course not, but you do need to know what the interfaces of the car do: things like the accelerator, brakes, and steering wheel. Each of these interfaces supports certain actions: accelerate, brake, turn left, turn right. It’s the same in object-oriented programming.
My first blog in the Java Challengers series introduced method overloading, which is a technique the String class uses extensively. Overloading can make your classes really flexible, including String:
public String(String original) {}
public String(char value[], int offset, int count) {}
public String(int[] codePoints, int offset, int count) {}
public String(byte bytes[], int offset, int length, String charsetName) {}
// And so on...
Rather than trying to understand how the String class works, this Java Challenger will help you understand what it does and how to use it in your code.
What is a String pool?
String is possibly the most-used class in Java. If a new object was created in the memory heap every time we used a String, we would waste a lot of memory. The String pool solves this problem by storing just one object for each String value, as shown below.
Figure 1. Strings in the String pool
Although we created a String variable for the Duke and Juggy Strings, only two objects are created and stored in the memory heap. For proof, look at the following code sample. (Recall that the "==" operator in Java is used to compare two objects and determine whether they are the same.)
String juggy = "Juggy"; String anotherJuggy = "Juggy"; System.out.println(juggy == anotherJuggy);
This code will return true because the two Strings point to the same object in the String pool. Their values are the same.
An exception: The ‘new’ operator
Now look at this code; it looks similar to the previous sample, but there is a difference.
String duke = new String("duke"); String anotherDuke = new String("duke"); System.out.println(duke == anotherDuke);
Based on the previous example, you might think this code would return true, but it's actually false. Adding the new operator forces the creation of a new String in the memory heap. Thus, the JVM will create two different objects.
String pools and the intern() method
To store a String in the String pool, we use a technique called String interning. Here's what Javadoc tells us about the intern() method:
/**
 * Returns a canonical representation for the string object.
 *
 * A pool of strings, initially empty, is maintained privately by the
 * class {@code String}.
 *
 * When the intern method is invoked, if the pool already contains a
 * string equal to this {@code String} object as determined by the
 * {@code equals(Object)} method, then the string from the pool is
 * returned. Otherwise, this {@code String} object is added to the
 * pool and a reference to this {@code String} object is returned.
 *
 * It follows that for any two strings {@code s} and {@code t},
 * {@code s.intern() == t.intern()} is {@code true}
 * if and only if {@code s.equals(t)} is {@code true}.
 *
 * All literal strings and string-valued constant expressions are
 * interned. String literals are defined in section 3.10.5 of the
 * The Java™ Language Specification.
 *
 * @return a string that has the same contents as this string, but is
 *         guaranteed to be from a pool of unique strings.
 * @jls 3.10.5 String Literals
 */
public native String intern();
The intern() method is used to store Strings in a String pool. First, it verifies if the String you've created already exists in the pool. If not, it creates a new String in the pool. Behind the scenes, the logic of String pooling is based on the Flyweight pattern.
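A tiny sketch of that behavior (my own example, not from the article): interning a heap-allocated String makes it reference-equal to the pool literal with the same value.

```java
public class InternDemo {
    public static void main(String[] args) {
        String heap = new String("duke"); // new forces a separate heap object

        System.out.println(heap == "duke");          // false: heap vs. pool
        System.out.println(heap.intern() == "duke"); // true: both from the pool
    }
}
```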
Now, notice what happens when we use the new keyword to force the creation of two Strings:
String duke = new String("duke"); String duke2 = new String("duke"); System.out.println(duke == duke2); // The result will be false here System.out.println(duke.intern() == duke2.intern()); // The result will be true here
Unlike the previous example with the new keyword, in this case the comparison turns out to be true. That's because using the intern() method ensures the Strings will be stored in the pool.
Equals method with the String class
The equals() method is used to verify if the state of two Java objects is the same. Because equals() is from the Object class, every Java class inherits it. But the equals() method has to be overridden to make it work properly. Of course, String overrides equals().

Take a look:
public boolean equals(Object anObject) {
    if (this == anObject) {
        return true;
    }
    if (anObject instanceof String) {
        String aString = (String)anObject;
        if (coder() == aString.coder()) {
            return isLatin1() ? StringLatin1.equals(value, aString.value)
                              : StringUTF16.equals(value, aString.value);
        }
    }
    return false;
}
As you can see, it is the state of the String value that has to be equal, not the object reference. It doesn't matter if the object reference is different; the state of the String will be compared.
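To make the distinction concrete, here is a small example of my own (not from the article) contrasting reference comparison with state comparison:

```java
public class EqualsVersusReference {
    public static void main(String[] args) {
        String a = new String("duke");
        String b = new String("duke");

        System.out.println(a == b);      // false: two distinct objects
        System.out.println(a.equals(b)); // true: identical character state
    }
}
```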
Most common String methods
There’s just one last thing you need to know before taking the
String comparison challenge. Consider these common methods of the
String class:
// Removes spaces from the borders
trim()

// Gets a substring by indexes
substring(int beginIndex, int endIndex)

// Returns the character length of the String
length()

// Replaces String, regex can be used
replaceAll(String regex, String replacement)

// Verifies if there is a specified CharSequence in the String
contains(CharSequence s)
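A short usage sketch of those methods (the example values are my own):

```java
public class CommonMethodsDemo {
    public static void main(String[] args) {
        String s = "  Java Challengers  ";

        System.out.println(s.trim());                 // "Java Challengers"
        System.out.println(s.trim().length());        // 16
        System.out.println(s.trim().substring(0, 4)); // "Java"
        System.out.println(s.replaceAll("l", "L"));   // "  Java ChaLLengers  "
        System.out.println(s.contains("Challenge"));  // true
    }
}
```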
Take the String comparison challenge!
Let’s try out what you’ve learned about the
String class in a quick challenge.
For this challenge, you’ll compare a number of
Strings using the concepts we’ve explored. Looking at the code below, can you determine the final value of each results variable?
public class ComparisonStringChallenge {
    public static void main(String... doYourBest) {
        String result = "";

        result += " powerfulCode ".trim() == "powerfulCode" ? "0" : "1";
        result += "flexibleCode" == "flexibleCode" ? "2" : "3";
        result += new String("doYourBest") == new String("doYourBest") ? "4" : "5";
        result += new String("noBugsProject").equals("noBugsProject") ? "6" : "7";
        result += new String("breakYourLimits").intern() ==
                  new String("breakYourLimits").intern() ? "8" : "9";

        System.out.println(result);
    }
}
Which output represents the final value of the result variable?
A: 02468
B: 12469
C: 12579
D: 12568
What just happened?
In the first line of the code, we see:
result += " powerfulCode ".trim() == "powerfulCode" ? "0" : "1";
Although the String will be the same after the trim() method is invoked, the String " powerfulCode " was different in the beginning. In this case the comparison is false, because when the trim() method removes spaces from the borders it forces the creation of a new String with the new operator.
Next, we see:
result += "flexibleCode" == "flexibleCode" ? "2" : "3";
No mystery here, the Strings are the same in the String pool. This comparison returns true.
Next, we have:
result += new String("doYourBest") == new String("doYourBest") ? "4" : "5";
Using the new reserved keyword forces the creation of two new Strings, whether they are equal or not. In this case the comparison will be false even if the String values are the same.
Next is:
result += new String("noBugsProject") .equals("noBugsProject") ? "6" : "7";
Because we’ve used the
equals() method, the value of the
String will be compared and not the object instance. In that case, it doesn’t matter if the objects are different because the value is being compared. This comparison returns
true.
Finally, we have:
result += new String("breakYourLimits").intern() == new String("breakYourLimits").intern() ? "8" : "9";
As you’ve seen before, the
intern() method puts the
String in the
String pool. Both
Strings point to the same object, so in this case the comparison is
true.
Video challenge! Debugging String comparisons
Debugging is one of the easiest ways to fully absorb programming concepts while also improving your code. In this video you can follow along while I debug and explain the Java Strings challenge:
Common mistakes with Strings
It can be difficult to know if two Strings are pointing to the same object, especially when the Strings contain the same value. It helps to remember that using the reserved keyword new always results in a new object being created in memory, even if the values are the same.
Using String methods to compare Object references can also be tricky. The key is, if the method changes something in the String, the object references will be different.
A few examples to help clarify:
System.out.println("duke".trim() == "duke".trim());;
This comparison will be true because the trim() method does not generate a new String.
System.out.println(" duke".trim() == "duke".trim());
In this case, the first trim() method will generate a new String because the method will execute its action, so the references will be different.
Finally, when trim() executes its action, it creates a new String:
// Implementation of the trim method in the String class
new String(Arrays.copyOfRange(val, index, index + len), LATIN1);
What to remember about Strings
Strings are immutable, so a String's state can't be changed.

- To conserve memory, the JVM keeps Strings in a String pool. When a new String is created, the JVM checks its value and points it to an existing object. If there is no String with that value in the pool, then the JVM creates a new one.
- Using the == operator compares the object reference. Using the equals() method compares the value of the String. The same rule applies to all objects.
- When using the new operator, a new String is created in the memory heap even if a String with the same value already exists in the pool.
Learn more about Java
- Get quick code tips: Read all of Rafael's articles in the InfoWorld Java Challengers series.
- Check out all of the videos in Rafael's Java Challengers video playlist.
- Find even more Java Challengers on Rafael's Java Challengers blog and in his book, with more than 70 code challenges.
This story, "String comparisons in Java" was originally published by JavaWorld. | https://www.infoworld.com/article/3276354/string-comparisons-in-java.html | CC-MAIN-2022-27 | refinedweb | 1,702 | 64.91 |
I'm having some trouble with the while loop in my code. If I enter 20 and then enter 14 for the coupons I want to redeem, it goes into a constant loop and doesn't stop. I did stop it from doing this by putting in a break;. While this may work, it also causes some issues: sometimes it runs the whole program properly, where the loop runs until it sees that there aren't enough coupons and tells you that, and sometimes it stops and never gets to that point.

How do I fix this? I want the loop to run through and then stop like it should, no matter what number I enter. Maybe this is very simple and I'm just not seeing it, but it's 12am here in NJ and I'm beat; it's the one and only thing that's giving me trouble. If that small quirk gets solved I can turn it in tomorrow :)
// candybar.cpp : Defines the entry point for the console application.
// David Tarantula

#include "stdafx.h"
#include <iostream>
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    int dollars = 0;
    int coupons = 0;
    int redeem = 0;
    int bars = 0;
    int freebars = 0;
    int endcoupon = 0;
    int endingcoupon = 0;
    int endingbar = 0;
    int totalbars = 0;
    int newcoupon;
    int totalcoupons = 0;

    cout << "Please enter dollar bills." << endl;
    cin >> dollars;
    cout << endl;

    bars = dollars;
    coupons = bars;
    cout << "You have " << bars << " chocolate bars and " << coupons << " coupons" << endl;
    cout << endl;
    cout << "7 Coupons = 1 Free Bar. How many would you like to redeem?: " << endl;
    cin >> redeem;
    cout << endl;

    // Runs and calculates if Coupons are redeemable
    if (redeem >= 7)
    {
        cout << "You got " << bars << " Candy Bars! " << endl << endl;
        cout << "You originally had " << coupons << " coupons!" << endl << endl;
        newcoupon = coupons - redeem;
        cout << "You now have " << newcoupon << " more coupons!" << endl << endl;

        // More coupons are redeemable the following runs.
        do {
            totalcoupons = redeem / 7;
            freebars = totalcoupons;
            cout << "You get " << freebars << " free bars." << endl << endl;
            endcoupon = newcoupon + totalcoupons;
            cout << "You have " << endcoupon << " coupons!" << endl << endl;
            endingcoupon = endcoupon - 7;
            endingbar = endingcoupon;
            cout << "You get " << endingbar << " additional bar(s)." << endl << endl;
            totalbars = bars + freebars + endingbar;
            cout << "Total amount of candy bars you have is " << totalbars << endl << endl;
            break;
        } while (endingcoupon < 7);

        while (endingcoupon < 7) {
            cout << "You do not have enough coupons to redeem anymore." << endl;
            break;
            cout << endl;
        }
    }
    // Displays if there are no coupons redeemable from the start
    else if (redeem < 7)
    {
        cout << "Error: You do not have enough coupons for a free Candy Bar!" << endl << endl;
        cout << "You only get " << bars << " candy bars." << endl;
        cout << endl;
    }

    system("pause");
    return 0;
}
Check Armstrong number
About Armstrong Number
Definition of an Armstrong number:
An Armstrong number is a number that equals the sum of its own digits, each raised to the nth power, where n is the number of digits in the number.
153 = 1*1*1 + 5*5*5 + 3*3*3 — 153 is an Armstrong number
371 = 3*3*3 + 7*7*7 + 1*1*1 — 371 is an Armstrong number
Logic to implemented in program:
- Get input from the keyboard
- Count the total number of digits in the given input number
- Save the input in a temp variable for comparison purposes
- Find the sum of the nth power of each digit, where n is the number of digits in the given input number
- Display the output
1. Check Armstrong number without the Scanner class

public class ArmstrongNo {
    public static void main(String[] args) {
        int inputvalue = 371; // Given number
        int temp = inputvalue; // Save the input for the final comparison
        // Count the number of digits
        int digits = 0;
        for (int n = inputvalue; n > 0; n /= 10) {
            digits++;
        }
        // Sum each digit raised to the power of the digit count
        int sum = 0;
        for (int n = inputvalue; n > 0; n /= 10) {
            sum += (int) Math.pow(n % 10, digits);
        }
        if (sum == temp) {
            System.out.println("Number *" + temp + "* is Armstrong");
        } else {
            System.out.println("Number *" + temp + "* is Not Armstrong");
        }
    }
}
Output: Number *371* is Armstrong
2. Check Armstrong number with the Scanner class

import java.util.Scanner;

public class ArmstrongNumber {
    public static void main(String[] args) {
        // Create a Scanner object to read the input number from the keyboard
        Scanner scn = new Scanner(System.in);
        System.out.print("Enter number : ");
        int inputvalue = scn.nextInt();
        int temp = inputvalue; // Save the input for the final comparison
        // Count the number of digits
        int digits = 0;
        for (int n = inputvalue; n > 0; n /= 10) {
            digits++;
        }
        // Sum each digit raised to the power of the digit count
        int sum = 0;
        for (int n = inputvalue; n > 0; n /= 10) {
            sum += (int) Math.pow(n % 10, digits);
        }
        if (sum == temp) {
            System.out.println("Number *" + temp + "* is Armstrong");
        } else {
            System.out.println("Number *" + temp + "* is Not Armstrong");
        }
    }
}
Output 1:
Enter number : 123
Number *123* is Not Armstrong

Output 2:
Enter number : 371
Number *371* is Armstrong
Recommended article:
- Introduction to java
- How to install java
- Add two matrix multidimensional array in java
- Check given number Prime or Not
- Sort an array in ascending, descending order in java
Hey, now it is your time! Drop a comment if more details are needed or if any update is required. Your comments are valuable and help us improve our site for others.
Runtime evaluation
You are encouraged to solve this task according to the task description, using any language you may know.
- Task
Demonstrate a language's ability for programs to execute code written in the language provided at runtime.
Show what kind of program fragments are permitted (e.g. expressions vs. statements), and how to get values in and out (e.g. environments, arguments, return values), if applicable what lexical/static environment the program is evaluated in, and what facilities for restricting (e.g. sandboxes, resource limits) or customizing (e.g. debugging facilities) the execution.
You may not invoke a separate evaluator program, or invoke a compiler and then its output, unless the interface of that program, and the syntax and means of executing it, are considered part of your language/library/platform.
For a more constrained task giving a specific program fragment to evaluate, see Eval in environment.
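Before the per-language sections, here is a brief orientation sketch in Python (one of the languages this task covers), illustrating the distinctions the task asks about: expression versus statement evaluation, and passing an explicit environment in and getting values out. The variable names are illustrative only.

```python
# eval handles expressions, exec handles statements; both accept an
# explicit environment mapping. Stripping __builtins__ is a crude
# (not security-grade) sandbox.
env = {"__builtins__": {}, "x": 17}

value = eval("x + 4 * 10", env)   # expression -> returns a value
exec("y = x * 2", env)            # statement  -> mutates the environment

print(value)      # 57
print(env["y"])   # 34
```

The evaluated code sees only what the environment dictionary provides, so lexical variables of the caller are not visible unless explicitly passed in.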
ALGOL 68
Variable names are generally not visible at run time with classic compilers. However, ALGOL 68G is an interpreter and retains this ability.
print(evaluate("4.0*arctan(1.0)"))
- Output:
+3.14159265358979e +0
This example demonstrates the use of variables and that the Algol 68G evaluate uses the normal Algol 68 scoping rules:
# procedure to call the Algol 68G evaluate procedure #
# the environment of the evaluation will be the caller's environment #
# with "code", "x" and "y" defined as the procedure parameters #
PROC ev = ( STRING code, INT x, INT y )STRING: evaluate( code );
BEGIN
INT i := 1;
INT j := 2;
REAL x := 4.2;
REAL y := 0.7164;
# evaluates "i + j" in the current environment #
print( ( evaluate( "i + j" ), newline ) );
# evaluates "x + y" in the environment of the procedure body of ev #
print( ( ev( "x + y", i, j ), newline ) );
# evaluates "x + y" in the current environment, so shows a different #
# result to the previous call #
print( ( evaluate( "x + y" ), newline ) );
# prints "code" because code is defined in the environment of the #
# call to evaluate (in ev) although it is not defined in this #
# environment #
print( ( ev( "code", 1, 2 ), newline ) );
# prints "code + codecode + code" - see above #
print( ( ev( "code + code", 1, 2 ), newline ) )
END
# if this next call was executed, a runtime error would occur as x and y #
# do not exist anymore #
# ;print( ( evaluate( "x + y" ), newline ) ) #
- Output:
+3 +3 +4.91640000000000e +0 code code + codecode + code
AutoHotkey
The addScript function can be used to dynamically add lines of code to a running script.
; requires AutoHotkey_H or AutoHotkey.dll
msgbox % eval("3 + 4")
msgbox % eval("4 + 4")
return
eval(expression)
{
global script
script =
(
expression(){
return %expression%
}
)
renameFunction("expression", "") ; remove any previous expressions
gosub load ; cannot use addScript inside a function yet
exp := "expression"
return %exp%()
}
load:
DllCall(A_AhkPath "\addScript","Str",script,"Uchar",0,"Cdecl UInt")
return
renameFunction(funcName, newname){
static
x%newname% := newname ; store newname in a static variable so its memory is not freed
strput(newname, &x%newname%, strlen(newname) + 1)
if fnp := FindFunc(funcName)
numput(&x%newname%, fnp+0, 0, "uint")
}
BASIC
Evaluating expressions
The VAL() function converts a string into a numeric value. On many Basic implementations, VAL only accepts simple numeric values. However, Sinclair Basic and its derivatives, such as Beta Basic and SAM Basic, accept any expression that evaluates to a numeric value.
The following example shows a function that plots the graph of any function f(x). The function is passed in the string parameter f$.
100 DEF PROC graph f$ 110 LOCAL x,y 120 PLOT 0,90 130 FOR x = -2 TO 2 STEP 0.02 140 LET y = VAL(f$) 150 DRAW TO x*50+100, y*50+90 160 NEXT x 170 END PROC
Usage example:
500 graph "SIN(x) + SIN(x*3)/3"
Executing code
The KEYIN statement available in Beta Basic and SAM Basic executes a string as if it had been entered from the keyboard in command mode. It can execute commands directly, or add (or replace) lines in the program while the program is executing. This allows creating self-modifying programs.
The function do_with_x in the following example loops the variable x from 1 to 10 and, within the loop, executes any code passed to the function in the parameter p$.
100 DEF PROC do_with_x p$ 110 LOCAL x 130 FOR x = 1 TO 10 140 KEYIN p$ 160 NEXT x 170 END PROC
The usage example below creates a multiplication table by executing inner loop for y:
500 LET y$ = "FOR y=1 TO 10: PRINT AT y, x*3; x*y: NEXT y" 510 do_with_x y$
VAL and KEYIN execute code in the environment they are called from. In the above examples, VAL and KEYIN both see the local variable x. There is no sandbox functionality in Beta BASIC or SAM BASIC.
ZX Spectrum Basic
In ZX Spectrum Basic, loading a new program will replace the existing program. The new program will automatically run if it was saved to do so by using SAVE together with LINE:
10 REM load the next program
20 LOAD "PROG2"
You can also include code in a text string as follows:
10 LET f$=CHR$ 187+"(x)+"+CHR$ 178+"(x*3)/2": REM LET f$="SQR (x)+SIN (x*3)/2"
20 FOR x=0 TO 2 STEP 0.2
30 LET y=VAL f$
40 PRINT y
50 NEXT x
CHR$ 187 is the token of the function SQR, and CHR$ 178 is the token of the function SIN.
In 48 k mode, you can also write this:
10 LET f= SQR (x)+SIN (x*3)/2
Then the type of the variable is changed and the formula is enclosed in quotation marks:
10 LET f$=" SQR (x)+SIN (x*3)/2"
BBC BASIC
Expressions
Expressions can be evaluated using the EVAL function:
expr$ = "PI^2 + 1"
PRINT EVAL(expr$)
- Output:
10.8696044
Statements
Statements can be executed by being tokenised and then written to a temporary file:
exec$ = "PRINT ""Hello world!"""
bbc$ = FNtokenise(exec$)
tmpfile$ = @tmp$+"temp.bbc"
tmpfile% = OPENOUT(tmpfile$)
BPUT#tmpfile%, bbc$+CHR$0
CLOSE #tmpfile%
CALL tmpfile$
END
DEF FNtokenise(A$)
LOCAL A%
A% = EVAL("0:"+A$)
A$ = $(!332+2)
= CHR$(LENA$+4) + CHR$0 + CHR$0 + A$ + CHR$13
- Output:
Hello world!
Burlesque
In Burlesque "Code" is actually just a list of identifiers. It is therefore possible to create and manipulate code at runtime. Evaluating a block:
blsq ) {5 5 .+}e!
10
Creating code at runtime (changing map (+5) to map (+6) at runtime):
blsq ) 1 10r@{5.+}m[
{6 7 8 9 10 11 12 13 14 15}
blsq ) 1 10r@{5.+}6 0sam[
{7 8 9 10 11 12 13 14 15 16}
Code from string at runtime:
blsq ) 1 10r@"5.+"psm[
{6 7 8 9 10 11 12 13 14 15}
Evaluating strings at runtime (reverse is just for demonstration):
blsq ) "[[email protected] 1"<-pe
{6 7 8 9 10 11 12 13 14 15}
Injecting other functions into code:
blsq ) {3 2}(?*)[+e!
6
Identifiers not contained in a block must be quoted to push them to the stack. Note the difference:
blsq ) ?+
ERROR: Burlesque: (.+) Invalid arguments!
blsq ) ?+to
"Error"
blsq ) (?+)
?+
blsq ) (?+)to
"Ident"
(also note the fallback to .+ from ?+).
Caché ObjectScript
The 'XECUTE' command performs substantially the same operation as the '$XECUTE' function, except the latter must specify a return value.
- Examples:
USER>Set cmd="Write ""Hello, World!""" USER>Xecute cmd Hello, World! USER>Set fnc="(num1, num2) Set res=num1+num2 Quit res" USER>Write $Xecute(fnc, 2, 3) 5
Common Lisp
Brief eval tutorial
The eval function evaluates Lisp code at runtime.
(eval '(+ 4 5)) ; returns 9
In Common Lisp, programs are represented as trees (s-expressions). Therefore, it is easily possible to construct a program which includes externally specified values, particularly using backquote template syntax:
(defun add-four-complicated (a-number)
(eval `(+ 4 ',a-number)))
Or you can construct a function and then call it. (If the function is used more than once, it would be good to use compile instead of eval, which compiles the code before returning the function. eval is permitted to compile as well, but compile requires it.)
(defun add-four-by-function (a-number)
(funcall (eval '(lambda (n) (+ 4 n)))) a-number)
If your program came from a file or user input, then you have it as a string, and read or read-from-string will convert it to s-expression form:
(eval (read-from-string "(+ 4 5)"))
Common Lisp has lexical scope, but eval always evaluates “in the null lexical environment”. In particular, eval does not inherit the lexical variables from the enclosing code. (Note that eval is an ordinary function and as such does not have access to that environment anyway.)
(let ((x 11) (y 22))
;; This is an error! Inside the eval, x and y are unbound!
(format t "~%x + y = ~a" (eval '(+ x y))))
One way to fix the error is to (declare (special x y)) for dynamic variables; but the easier and shorter way is to insert the values of x and y with the backquote template syntax.
(let ((x 11) (y 22))
(format t "~%x + y = ~a" (eval `(+ ,x ,y))))
Sandboxing Discussion
Sandboxing in Common Lisp can be approached in a variety of ways, none of which are standardized.
One approach is to define a sublanguage and validate expressions before passing them to compile or eval. Of course, a whole different language entirely can be defined, and translated to Lisp. This is essentially the classic "trusted compiler generating safe code in an untrusted target language" approach.

Another approach is to create a dedicated package, say sandbox. The validator then simply has to make sure that all symbols used in the expression are restricted to those which are visible inside the sandbox package. Inside sandbox, we include only those functions, operators, variables and other symbols from system packages that are safe: materials which don't allow sandboxed code to do anything harmful from within the sandbox, or to escape from the sandbox. For instance, suppose that some package system has a function called run-shell-command. We do not import run-shell-command into the sandbox package, and our validator will reject code which has references such as (system:run-shell-command ...). Therefore, the sandboxed code has no direct way to run that function. To gain access to it, it must exploit some flaw in the sandbox. One flaw in the sandbox would be the inclusion of certain package-related functions like find-symbol. The expression (find-symbol "BAR" "FOO") will retrieve the symbol foo::bar if it exists. The validator will not find this code because it has no embedded symbolic reference to the package foo; the names are disguised as character strings. A cautious approach to the sandbox should be taken: include less rather than more, and consider each expansion of the sandbox with meticulous care.
Debugging Notes
There are no standardized debugging facilities specific to the eval operation itself, but code evaluated may be affected by the current global declarations, particularly the optimize declaration's debug and safety qualities.
Déjà Vu
The compiler, module system and interactive interpreter are all implemented in Déjà Vu itself, and the first two are part of the standard library.
Each compiled fragment is considered to be a single "file", and cannot access any local variables from outside of itself.
!run-blob !compile-string "(fake filename)" "!print \qHello world\q"
- Output:
Hello world
E
In E, eval is a method of expression ASTs (EExprs). (Other types of program fragment ASTs such as methods and patterns may not be directly evaluated, and must be inserted into an expression.)
The lexical environment is provided as a parameter and cannot be omitted. The evaluated program has no access to anything but the provided environment.
? e`1 + 1`.eval(safeScope)
# value: 2
eval returns the value of the expression.
evalToPair also returns the modified environment for use with further evaluation, e.g. for implementing a REPL.
? def [value, env] := e`def x := 1 + 1`.evalToPair(safeScope)
# value: [2, ...]
? e`x`.eval(env)
# value: 2
Eval from a string may be done by invoking the parser.
? def prog := <elang:syntax.makeEParser>.run("1 + 1")
# value: e`1.add(1)`
? prog.eval(safeScope)
# value: 2
EchoLisp
eval: the evaluation of the eval argument must give a symbolic expression, which is in turn evaluated. Alternatively, read-from-string produces an s-expression (any kind of program) from a string.
(eval (list * 6 7))
→ 42
(eval '(* 6 7)) ;; quoted argument
→ 42
(eval (read-from-string "(* 6 7)"))
→ 42
Elixir
iex(1)> Code.eval_string("x + 4 * Enum.sum([1,2,3,4])", [x: 17])
{57, [x: 17]}
iex(2)> Code.eval_string("c = a + b", [a: 1, b: 2])
{3, [a: 1, b: 2, c: 3]}
iex(3)> Code.eval_string("a = a + b", [a: 1, b: 2])
{3, [a: 3, b: 2]}
Erlang
Erlang eval is a bit complex/verbose and requires the interaction of three modules: erl_scan (tokenizes), erl_parse (returns an abstract form) and erl_eval (binds variables, evaluates abstract forms, etc.).
1> {ok, Tokens, _} = erl_scan:string("X + 4 * lists:sum([1,2,3,4]).").
...
2> {ok, [Form]} = erl_parse:parse_exprs(Tokens).
...
3> Bindings = erl_eval:add_binding('X', 17, erl_eval:new_bindings()).
[{'X',17}]
4> {value, Value, _} = erl_eval:expr(Form, Bindings).
{value,57,[{'X',17}]}
5> Value.
57
Factor
Arbitrary strings can be eval'd, but you must provide their stack effect.
IN: scratchpad "\"Hello, World!\" print" ( -- ) eval Hello, World! IN: scratchpad 4 5 "+" ( a b -- c ) eval 9
You can use the infer word to infer a quotation's stack effect. You can combine infer with parse-string to eval an arbitrary string without writing the stack effect yourself.
( scratchpad ) "USE: math 8 9 +" dup parse-string "USE: math 8 9 +" [ 8 9 + ] ( scratchpad ) infer "USE: math 8 9 +" ( x x -- x ) ( scratchpad ) eval 17
Forth
EVALUATE invokes the interpreter on a string of Forth code, using and modifying the current dictionary and stack state.
s" variable foo 1e fatan 4e f*" evaluate
f. \ 3.14159...
1 foo !
Sandboxing can be achieved in general by using MARKER, which defines a checkpoint for the dictionary state which can later be restored.
unused . \ show how much dictionary space is available
marker restore
create foo 30 allot
: my-def 30 0 do cr i . ." test" loop ;
unused . \ lower than before
restore
unused . \ same as first unused; restore, foo, and my-def no longer defined
Frink
The
eval[] function can be used to evaluate aribitrary Frink code in the current environment, or in a new context.
eval["length = 1234 feet + 2 inches"] (that is, create a new context) before evaluation.
Frink has an extensive security manager which allows the eval statement to prevent insecure operations such as reading or writing a file or URL, creating new functions or classes, altering systemwide flags, evaluating arbitrary Java code, and so on. If code needs to evaluate insecure statements, you can use the intentionally frighteningly-named
unsafeEval[str] (which may itself be disallowed in secure contexts).
Go
As a compiled, strongly typed language, eval() is not the strong suit of Go. Nevertheless, an eval package exists that does that. Just don't expect it to be as easy or efficient as in interpreted languages. The eval package was originally part of the Go standard library but is now hosted and maintained externally.
package main
import (
"fmt"
"bitbucket.org/binet/go-eval/pkg/eval"
"go/token"
)
func main() {
w := eval.NewWorld();
fset := token.NewFileSet();
code, err := w.Compile(fset, "1 + 2")
if err != nil {
fmt.Println("Compile error");
return
}
val, err := code.Run();
if err != nil {
fmt.Println("Run time error");
return;
}
fmt.Println("Return value:", val) //prints, well, 3
}
Groovy
Each of these solutions evaluates a Groovy script based on some variation of the solution to the "Yuletide Holiday" task.
Each variation has been verified to give the same output:
[2011, 2016, 2022, 2033, 2039, 2044, 2050, 2061, 2067, 2072, 2078, 2089, 2095, 2101, 2107, 2112, 2118]
Simple evaluation
The GroovyShell class allows the evaluation of a string or of the text contents of a File or InputStream as a Groovy script. A script is either a set of statements to be executed in order, or a Groovy class with a main() method, or a Groovy Thread subclass or Runnable implementation. The return value is the value of the last statement executed, or the value of an explicit return statement (if any).
def years1 = new GroovyShell().evaluate('''
(2008..2121).findAll {
Date.parse("yyyy-MM-dd", "${it}-12-25").format("EEE") == "Sun"
}
''')
println years1
The last expression evaluated in the script, a list of years found, is the return value of the evaluate() method.
Evaluation with variables
There are several approaches to evaluating a script with variables:
- GString embedded values
- Binding variables
- Eval shortcut
GString embedded values
Setting up the script as a GString with embedded value parsing is a "natural" ad hoc solution for Groovy programmers, but there are possible pitfalls if the script itself contains GStrings.
def startYear = 2008
def endYear = 2121
def years2 = new GroovyShell().evaluate("""
(${startYear}..${endYear}).findAll {
Date.parse("yyyy-MM-dd", "\${it}-12-25").format("EEE") == "Sun"
}
""")
println years2
The variables "startYear" and "endYear" are dynamically pulled into the script GString as embedded values before the script itself ever executes.
Notice that in the script the embedded value "${it}" must be quoted with backslash (\) to prevent parsing as a part of the script GString. However, it is still correctly parsed within the internal GString when the script is run.
Binding variables
GroovyShell uses a Binding object to pass variable values to a script. This is the only way to pass variables if the script comes from a File or InputStream, but even if the script is a string Binding avoids the nested quoting issue caused by the ad hoc use of GString.
def context = new Binding()
context.startYear = 2008
context.endYear = 2121
def years3 = new GroovyShell(context).evaluate('''
(startYear..endYear).findAll {
Date.parse("yyyy-MM-dd", "${it}-12-25").format("EEE") == "Sun"
}
''')
We may instantiate Binding with the variables as named parameters, allowing a more terse syntax:
def years4 = new GroovyShell( new Binding(startYear: 2008, endYear: 2121) ).evaluate('''
(startYear..endYear).findAll {
Date.parse("yyyy-MM-dd", "${it}-12-25").format("EEE") == "Sun"
}
''')
println years4
We may also access the Binding object after script evaluation to extract values of any global variables set during the evaluation:
def binding = new Binding(startYear: 2008, endYear: 2121)
new GroovyShell( binding ).evaluate('''
yearList = (startYear..endYear).findAll {
Date.parse("yyyy-MM-dd", "${it}-12-25").format("EEE") == "Sun"
}
''')
println binding.yearList
Eval shortcut
For simple evaluation of string-based scripts with only a few variables (like this one), the Eval class has static shortcut methods that do the Binding setup and GroovyShell evaluation under the surface. Eval.me(script) evaluates a script with no variables. Eval.x(x,script), Eval.xy(x,y,script), or Eval.xyz(x,y,z,script) each evaluates a script with 1, 2, or 3 variables, respectively. Here is an example with start and end years as script variables x and y.
def years5 = Eval.xy(2008, 2121, '''
(x..y).findAll {
Date.parse("yyyy-MM-dd", "${it}-12-25").format("EEE") == "Sun"
}
''')
println years5
GW-BASIC
10 LINE INPUT "Type an expression: ",A$
20 OPEN "CHAIN.TMP" FOR OUTPUT AS #1
30 PRINT #1, "70 LET Y=("+A$+")"
40 CLOSE #1
50 CHAIN MERGE "CHAIN.TMP",60,ALL
60 FOR X=0 TO 5
70 REM
80 PRINT X,Y
90 NEXT X
100 GOTO 10
Harbour
Procedure Main()
local bAdd := {|Label,n1,n2| Qout( Label ), QQout( n1 + n2 )}
Eval( bAdd, "5+5 = ", 5, 5 )
Eval( bAdd, "5-5 = ", 5, -5 )
return
Upon execution you see:
5+5 = 10
5-5 = 0
HicEst
XEQ invokes the interpreter on a string of HicEst code, but keeps the current dictionary and stack state. Blocks of expressions are not possible.
value = XEQ( " temp = 1 + 2 + 3 ") ! value is assigned 6
! temp is undefined outside XEQ, if it was not defined before.
XEQ(" WRITE(Messagebox) 'Hello World !' ")
OPEN(FIle="my_file.txt")
READ(FIle="my_file.txt", Row=6) string
XEQ( string ) ! executes row 6 of my_file.txt
J
Use monadic ". (Do) to execute a string.
". 'a =: +/ 1 2 3' NB. execute a string to sum 1, 2 and 3 and assign to noun a
Only J expressions are allowed in strings used as arguments for ". (control words and blocks of expressions are not allowed).
Alternatively, you can use the conjunction : (Explicit Definition) to create various kinds of functions and evaluate them. Arguments have names, such as "y", which are specified by the language definition. For example:
monad :'+/y' 1 2 3
Rules of scope for such functions match those described on the Scope modifiers page. Also, control words (like if. or for. or while.) and blocks of expressions are allowed in strings which are evaluated in this fashion.
The context for these evaluations will always be the current locale (which might typically be the current object [or class]). If only expressions are allowed, then local variables will be local to the current explicit definition. Otherwise a new local context will be created for the evaluation (and this will be discarded when evaluation has completed). Local contexts are lexical while locales may also be manipulated programmatically.
Debugging facilities [currently] require that the operation be given a name.
J relies on the OS for sandboxing and does not offer any additional resource constraints.
Lasso
"Sourcefile" when executed has access to all variables and other data that would be available in scope to an included file.
This means thread vars ($) and types/methods already defined will be accessible.
Types, methods, traits and thread vars created or modified will maintain state subsequently: if a type is defined in code executed in a sourcefile context, then that type will be available after execution. If a thread var is modified in the sourcefile-executed code, then the var will maintain that value after execution.
Local variables (#) maintain scope behaviour as normal.
Output is governed by the "autocollect" boolean, the third parameter in the sourcefile invocation.
//code, fragment name, autocollect, inplaintext
local(mycode = "'Hello world, it is '+date")
sourcefile('['+#mycode+']','arbritraty_name', true, true)->invoke
'\r'
var(x = 100)
local(mycode = "Outside Lasso\r['Hello world, var x is '+var(x)]")
// autocollect (3rd param): return any output generated
// inplaintext (4th param): if true, assumes this is mixed Lasso and plain text,
// requires Lasso code to be in square brackets or other supported code block demarcation.
sourcefile(#mycode,'arbritraty_name', true, true)->invoke
'\r'
var(y = 2)
local(mycode = "'Hello world, is there output?\r'
var(x) *= var(y)")
// autocollect (3rd param): as false, no output returned
// inplaintext (4th param): as false, assumes this is Lasso code, no mixed-mode Lasso and text.
sourcefile(#mycode,'arbritraty_name', false, false)->invoke
'var x is now: '+$x
'\r'
var(z = 3)
local(mycode = "var(x) *= var(z)")
sourcefile(#mycode,'arbritraty_name', false, false)->invoke
'var x is now: '+$x
- Output:
Hello world, it is 2013-11-10 15:54:19
Outside Lasso
Hello world, var x is 100
var x is now: 200
var x is now: 600
Liberty BASIC
Liberty BASIC has the ability to evaluate arrays using a string for the array name and a variable for the element.
'Dimension a numerical and string array
Dim myArray(5)
Dim myStringArray$(5)
'Fill both arrays with the appropriate data
For i = 0 To 5
myArray(i) = i
myStringArray$(i) = "String - " + str$(i)
Next i
'Set two variables with the names of each array
numArrayName$ = "myArray"
strArrayName$ = "myStringArray"
'Retrieve the array data by evaluating a string
'that correlates to the array
For i = 0 To 5
Print Eval$(numArrayName$ + "(" + str$(i) + ")")
Print Eval$(strArrayName$ + "$(" + str$(i) + ")")
Next i
An example using a struct and a pointer.
Struct myStruct, value As long, _
string As ptr
myStruct.value.struct = 10
myStruct.string.struct = "Hello World!"
structName$ = "myStruct"
numElement$ = "value"
strElement$ = "string"
Print Eval$(structName$ + "." + numElement$ + "." + "struct")
'Pay close attention that this is EVAL() because we are
'retrieving the PTR to the string which is essentially a ulong
Print Winstring(Eval(structName$ + "." + strElement$ + "." + "struct"))
Lua
f = loadstring(s) -- load a string as a function. Returns a function.
one = loadstring"return 1" -- one() returns 1
two = loadstring"return ..." -- two() returns the arguments passed to it
In Lua 5.2 the loadstring function is superseded by the more general load function, which can be used in a compatible way. Nevertheless, loadstring is still available.
f = load("return 42")
f() --> returns 42
JavaScript
The eval method handles statements and expressions well:
var foo = eval('{value: 42}');
eval('var bar = "Hello, world!";');
typeof foo; // 'object'
typeof bar; // 'string'
Mathematica
Mathematica's ToExpression evaluates an expression string as if it were placed directly in the code. Statements are just CompoundExpressions, so they also work. Any evaluation can be limited with TimeConstrained and MemoryConstrained.
Print[ToExpression["1 + 1"]];
Print[ToExpression["Print[\"Hello, world!\"]; 10!"]];
x = 5;
Print[ToExpression["x!"]];
Print[ToExpression["Module[{x = 8}, x!]"]];
Print[MemoryConstrained[ToExpression["Range[5]"], 10000, {}]];
Print[MemoryConstrained[ToExpression["Range[10^5]"], 10000, {}]];
Print[TimeConstrained[ToExpression["Pause[1]; True"], 2, False]];
Print[TimeConstrained[ToExpression["Pause[60]; True"], 2, False]];
- Output:
2
Hello, world!
3628800
120
40320
{1, 2, 3, 4, 5}
{}
True
False
MATLAB
The eval and evalin functions handle any kind of code. They can handle multi-line code, although the lines need to be separated by the newline character. They even allow you to program at runtime, as illustrated in the last example in the code and output below. Errors can occur when mixing eval statements with regular code, especially "compile-time" errors if the code appears to be missing key elements (ending brackets or end statements, etc). Some of these are also demonstrated.
function testEval
fprintf('Expressions:\n')
x = eval('5+10^2')
eval('y = (x-100).*[1 2 3]')
eval('z = strcat(''my'', '' string'')')
try
w eval(' = 45')
catch
fprintf('Runtime error: interpretation of w is a function\n\n')
end
% eval('v') = 5
% Invalid at compile-time as MATLAB interprets as using eval as a variable
fprintf('Other Statements:\n')
nl = sprintf('\n');
eval(['for k = 1:20' nl ...
'fprintf(''%.3f\n'', k)' nl ...
'if k == 3' nl ...
'break' nl ...
'end' nl ...
'end'])
true == eval('1')
try
true eval(' == 1')
catch
fprintf('Runtime error: interpretation of == 1 is of input to true\n\n')
end
fprintf('Programming on the fly:\n')
userIn = true;
codeBlock = '';
while userIn
userIn = input('Enter next line of code: ', 's');
codeBlock = [codeBlock nl userIn];
end
eval(codeBlock)
end
- Output:
Expressions:
x = 105
y = 5 10 15
z = my string
Runtime error: interpretation of w is a function

Other Statements:
1.000
2.000
3.000
ans = 1
Runtime error: interpretation of == 1 is of input to true

Programming on the fly:
Enter next line of code: fprintf('Goodbye, World!\n')
Enter next line of code: str = 'Ice and Fire';
Enter next line of code: words = textscan(str, '%s');
Enter next line of code: fprintf('%s ', words{1}{end:-1:1})
Enter next line of code:
Goodbye, World!
Fire and Ice
Maxima
/* Here is how to create a function and return a value at runtime. In the first example,
the function is made global, i.e. it still exists after the statement is run. In the second example, the function
is declared local. The evaluated string may read or write any variable defined before eval_string is run. */
kill(f)$
eval_string("block(f(x) := x^2 + 1, f(2))");
5
fundef(f);
/* f(x) := x^2 + 1 */
eval_string("block([f], local(f), f(x) := x^3 + 1, f(2))");
9
fundef(f);
/* f(x) := x^2 + 1 */
Oforth
Oforth can evaluate strings at runtime.
In order to restrict evaluation, perform is used on strings. With perform, only objects can be evaluated. If a function or a method is included in the string, an exception is raised and the string is not evaluated.
"[ [ $a, 12], [$b, 1.2], [ $c, [ $aaa, $bbb, $ccc ] ], [ $torun, #first ] ]" perform .s
[1] (List) [[a, 12], [b, 1.2], [c, [aaa, bbb, ccc]], [torun, #first]]
"12 13 +" perform
[1:interpreter] ExCompiler : Can't evaluate <+>
In order to evaluate any Oforth code, eval can be used. This method should not be used on unsafe strings.
"12 13 + println" eval
25
": newFunction(a) a + ; 12 10 newFunction println" eval
22
ooRexx
The ooRexx INTERPRET instruction allows execution of dynamically constructed code. Almost any well-formed code can be executed dynamically, including multiple instructions at a time. The instructions are executed in the local context where the interpret instruction executes, so full access to the current variable context is available. For example:
a = .array~of(1, 2, 3)
ins = "loop num over a; say num; end"
interpret ins
Executes the LOOP instruction, displaying the contents of the array pointed to by variable A.
OxygenBasic
Runtime (secondary) compiling is possible, with some restrictions. For instance, static variables may not be created by the compiled code, but parental variables are visible to it. This demo produces tables of Y values, given a formula, and a range of X values to step through.
function ExecSeries(string s,double b,e,i) as string
'===================================================
'
sys a,p
string v,u,tab,cr,er
'
'PREPARE OUTPUT BUFFER
'
p=1
cr=chr(13) chr(10)
tab=chr(9)
v=nuls 4096
mid v,p,s+cr+cr
p+=4+len s
'
double x,y,z 'shared variables
'
'COMPILE
'
a=compile s
er=error
if er then
print "runtime error: " er : exit function
end if
'
'EXECUTE
'
for x=b to e step i
if p+128>=len v then
v+=nuls len(v) 'extend buffer
end if
call a
u=str(x) tab str(y) cr
mid v,p,u : p+=len u
next
'
freememory a 'release compiled code
'
return left v,p-1 'results
'
end function
'=====
'TESTS
'=====
'Expression, StartVal, EndVal, Increment
print ExecSeries "y=x*x*x", 1, 10, 1
print ExecSeries "y=sqrt x",1, 9 , 1
Oz
declare
%% simplest case: just evaluate expressions without bindings
R1 = {Compiler.virtualStringToValue "{Abs ~42}"}
{Show R1}
%% eval expressions with additional bindings and
%% the possibility to kill the evaluation by calling KillProc
KillProc
R2 = {Compiler.evalExpression "{Abs A}" unit('A':~42) ?KillProc}
{Show R2}
%% full control: add and remove bindings, eval expressions or
%% statements, set compiler switches etc.
Engine = {New Compiler.engine init}
{Engine enqueue(setSwitch(expression false))} %% statements instead of expr.
{Engine enqueue(mergeEnv(env('A':42 'System':System)))}
{Engine enqueue(feedVirtualString("{System.show A}"))}
By restricting the environment it is possible to restrict what kind of programs can be run.
PARI/GP
Since GP is usually run from the REPL gp, it is trivial to evaluate programs at runtime (most are run this way). Slightly less trivial is passing code around as a first-class object:
runme(f)={
f()
};
runme( ()->print("Hello world!") )
One facility designed for restricting such embedded programs is default(secure,1), which denies scripts the ability to run system and extern. This cannot be turned off except interactively.
Perl
The eval function accepts a block or a string as its argument. The difference is that a block is parsed at compile-time, whereas a string is parsed at runtime. The block or string may represent any valid Perl program, including a single expression. The subprogram executes in the same lexical and dynamic scope as the surrounding code. The return value of a call to eval depends on how the subprogram terminates:
- If control reaches the end of the subprogram, eval returns the value of the last expression evaluated.
- If the subprogram uses an explicit return, eval returns the given value.
- If the subprogram throws an exception, eval returns undef. The text of the exception is assigned to $@. (When the subprogram terminates without an exception, $@ is set to the null string instead.)
my ($a, $b) = (-5, 7);
$ans = eval 'abs($a * $b)'; # => 35
Perl 6
Any syntactically valid sequence of statements may be run, and the snippet to be run can see its outer lexical scope at the point of the eval:
use MONKEY-SEE-NO-EVAL;
my ($a, $b) = (-5, 7);
my $ans = EVAL 'abs($a * $b)'; # => 35
Unlike in Perl 5, eval in Perl 6 only compiles and executes the string, but does not trap exceptions. You must say try eval to get that behavior (or supply a CATCH block within the text to be evaluated).
PHP
The eval construct allows string evaluation as PHP code. Opening and closing tags are not required. A return statement immediately terminates evaluation. eval returns NULL, unless return is called in the evaluated code.
<?php
$code = 'echo "hello world"';
eval($code);
$code = 'return "hello world"';
print eval($code);
PicoLisp
In PicoLisp there is a formal equivalence of code and data. Almost any piece of data is potentially executable. PicoLisp has three internal data types: Numbers, symbols and lists. Though in certain contexts (e.g. GUI objects) also atomic data (numbers and symbols) are evaluated as code entities, a typical executable item is a list.
The PicoLisp reference distinguishes between two terms: An 'exe' (expression) is an executable list, with a function as the first element, followed by arguments. A 'prg' (program) is a list of 'exe's, to be executed sequentially.
'exe's and 'prg's are implicit in the whole runtime system. For example, the body of a function is a 'prg', the "true" branch of an 'if' call is an 'exe', while the "false" branch again is a 'prg'.
For explicit execution, an 'exe' can be evaluated by passing it to the function 'eval', while a 'prg' can be handled by 'run'.
As PicoLisp uses exclusively dynamic binding, any 'exe' or 'prg' can be executed in arbitrary contexts. The environment can be controlled in any conceivable way, through implicit function parameter bindings, or explicitly with the aid of functions like 'bind', 'let' or 'job'.
Pike
Pike provides compile_string() and compile_file() which can compile code into a class that can be instantiated:
program demo = compile_string(#"
string name=\"demo\";
string hello()
{
return(\"hello, i am \"+name);
}");
demo()->hello();
Result: "hello, i am demo"
an actual application of this is shown in Simple database.
PowerShell
Evaluate an expression:
$test2plus2 = '2 + 2 -eq 4'
Invoke-Expression $test2plus2
- Output:
True
Evaluate a [scriptblock] (a statement or group of statements) with code surrounded by curly braces, using the & (call) operator:
$say = {"Hello, world!"}
& $say
- Output:
Hello, world!
Scriptblocks behave just as functions so they may have parameters:
$say = {param ([string]$Subject) "Hello, $Subject!"}
& $say -Subject "my friend"
- Output:
Hello, my friend!
A slightly more complex example:
$say = {param ([string]$Exclamation, [string]$Subject) "$Exclamation, $Subject!"}
& $say -Exclamation "Goodbye" -Subject "cruel world"
- Output:
Goodbye, cruel world!
To reverse the normal behaviour of a [scriptblock], use the GetNewClosure method. This makes the scriptblock self-contained, or closed; i.e., the variable will only be read when the scriptblock is initialised:
$title = "Dong Work For Yuda"
$scriptblock = {$title}
$closedScriptblock = $scriptblock.GetNewClosure()
& $scriptblock
& $closedScriptblock
- Output:
Dong Work For Yuda
Dong Work For Yuda
Change the variable and execute the scriptblock, the closed version will not reflect the change:
$title = "I'm Too Sexy"
& $scriptblock
& $closedScriptblock
- Output:
I'm Too Sexy
Dong Work For Yuda
Since the [scriptblock] type is an anonymous function, the Begin {}, Process {} and End {} blocks may be added to a scriptblock, just like any function.
Python
The exec statement allows the optional passing in of global and local names via mappings (See the link for full syntax). The example below shows exec being used to parse and execute a string containing two statements:
>>> exec '''
x = sum([1,2,3,4])
print x
'''
10
Note that in Python 3.x exec is a function:
>>> exec('''
x = sum([1,2,3,4])
print(x)
''')
10
R
In R, expressions may be manipulated directly as abstract syntax trees, and evaluated within environments.
quote() captures the abstract syntax tree of an expression. parse() does the same starting from a string. call() constructs an evaluable parse tree. Thus all three of these are equivalent.
expr1 <- quote(a+b*c)
expr2 <- parse(text="a+b*c")[[1]]
expr3 <- call("+", quote(`a`), call("*", quote(`b`), quote(`c`)))
eval() evaluates a quoted expression. evalq() is a version of eval() which quotes its first argument.
> a <- 1; b <- 2; c <- 3
> eval(expr1)
[1] 7
eval() takes an optional second argument, the lexical environment to evaluate in.
> env <- as.environment(list(a=1, b=3, c=2))
> evalq(a, env)
[1] 1
> eval(expr1, env) #this fails; env has only emptyenv() as a parent so can't find "+"
Error in eval(expr, envir, enclos) : could not find function "+"
> parent.env(env) <- sys.frame()
> eval(expr1, env) # eval in env, enclosed in the current context
[1] 7
> assign("b", 5, env) # assign() can assign into environments
> eval(expr1, env)
[1] 11
Racket
Racket has the usual eval that is demonstrated here, and in addition, it has a sandbox environment that provides a safe evaluator that is restricted from accessing files, network, etc.
#lang racket
(require racket/sandbox)
(define e (make-evaluator 'racket))
(e '(define + *))
(e '(+ 10 20))
(+ 10 20)
;; (e '(delete-file "/etc/passwd"))
;; --> delete-file: `delete' access denied for /etc/passwd
And, of course, both of these methods can use Racket's multilingual capabilities and evaluate the code in a language with different semantics.
REBOL
The do function evaluates a script file or a series of expressions and returns a result.
It performs the fundamental interpretive action of the Rebol language and is used internally within many other functions such as if, case, while, loop, repeat, foreach, and others.
a: -5
b: 7
answer: do [abs a * b] ; => 35
REXX
This REXX program does a:
- run-time evaluation of an internal expression, and
- run-time evaluation of a user-prompted expression.
/*REXX program illustrates the ability to execute code entered at runtime (from C.L.)*/
numeric digits 10000000 /*ten million digits should do it. */
bee=51
stuff= 'bee=min(-2,44); say 13*2 "[from inside the box.]"; abc=abs(bee)'
interpret stuff
say 'bee=' bee
say 'abc=' abc
say
/* [↓] now, we hear from the user. */
say 'enter an expression:'
pull expression
say
say 'expression entered is:' expression
say
interpret '?='expression
say 'length of result='length(?)
say ' left 50 bytes of result='left(?,50)"···"
say 'right 50 bytes of result=···'right(?, 50) /*stick a fork in it, we're all done. */
output when using the input: 2**44497 - 1
which happens to be the 27th Mersenne prime.
26 [from inside the box.]
bee= -2
abc= 2

enter an expression:
2**44497 - 1

expression entered is: 2**44497 - 1

length of result=13395
 left 50 bytes of result=85450982430363380319330070531840303650990159130402···
right 50 bytes of result=···22977396345497637789562340536844867686961011228671
Ring
Eval("nOutput = 5+2*5 " )
See "5+2*5 = " + nOutput + nl
Eval("for x = 1 to 10 see x + nl next")
Eval("func test see 'message from test!' ")
test()
Output :
5+2*5 = 15
1
2
3
4
5
6
7
8
9
10
message from test!
We can create a simple interactive programming environment using the following program
while true
see nl + "code:> "
give cCode
try
eval(cCode)
catch
see cCatchError
done
end
Output
code:> see "hello world"
hello world
code:> for x = 1 to 10 see x + nl next
1
2
3
4
5
6
7
8
9
10
code:> func test see "Hello from test" + nl
code:> test()
Hello from test
code:> bye
Ruby
The eval method evaluates a string as code and returns the resulting object. With one argument, it evaluates in the current context:
a, b = 5, -7
ans = eval "(a * b).abs" # => 35
With two arguments, eval runs in the given Binding or Proc context:
def first(main_var, main_binding)
foo = 42
second [[main_var, main_binding], ["foo", binding]]
end
def second(args)
sqr = lambda {|x| x**2}
deref(args << ["sqr", binding])
end
def deref(stuff)
stuff.each do |varname, context|
puts "value of #{varname} is #{eval varname, context}"
end
end
hello = "world"
first "hello", binding
value of hello is world
value of foo is 42
value of sqr is #<Proc:[email protected]:7>
Scheme
In Scheme, the expression passed to eval is evaluated in the current interaction environment, unless otherwise specified. The result is read back as a Scheme value.
> (define x 37)
> (eval '(+ x 5))
42
> (eval '(+ x 5) (interaction-environment))
42
> (eval '(+ x 5) (scheme-report-environment 5)) ;; provides R5RS definitions
Error: identifier not visible x.
Type (debug) to enter the debugger.
> (display (eval (read)))
(+ 4 5) ;; this is input from the user.
9
Sidef
The eval method evaluates a string as code and returns the resulting object.
var (a, b) = (-5, 7);
say eval '(a * b).abs'; # => 35
say (a * b -> abs); # => 35
Slate
In Slate, programs are represented as Syntax Node trees, with methods defined on the various syntactic types. The backtick syntax provides a convenient quoting mechanism, and as objects, they have convenient methods defined for evaluation or evaluation within a specific environment:
`(4 + 5) evaluate.
`(4 + 5) evaluateIn: prototypes.
You can also explicitly invoke the Parser on a String, to convert it into syntactic objects:
(Syntax Parser newOn: '4 + 5') upToEnd do: [| :each | print: each evaluate]
You can construct a program using externally-specified values using `unquote within a quoted expression:
define: #x -> 4.
`(x `unquote + 5) evaluate.
Or you can obviously construct a string:
define: #x -> 4.
(Syntax Parser newOn: x printString ; ' + 5')
The evaluate method also takes into consideration the current lexical scope, unless another environment is specified. The following returns 10, no matter what binding x has in the local namespace:
define: #x -> 4.
[| x | x: 5. `(x `unquote + 5) evaluate] do.
Slate can sandbox via constructing a fresh namespace and evaluating within it, but this mechanism is not strongly secure yet.
Smalltalk
[ 4 + 5 ] value.
Evaluating an expression without bindings:
e := ' 123 degreesToRadians sin '.
Transcript show: (Compiler evaluate: e) .
To get local bindings (x, y), evaluate an expression which yields a block given as a string, then call the resulting block:
e := '[ :x :y | (x*x + (y*y)) sqrt ]'.
Transcript show: ((Compiler evaluate: e) value: 3 value: 4).
This could be wrapped into a utility which expects the names to bind as arguments, if required.
SNOBOL4
The built-in function eval() evaluates SNOBOL4 expressions and returns the value. The expression is evaluated in the current environment and has access to the then-current variables.
expression = "' page ' (i + 1)"
i = 7
output = eval(expression)
end
- Output:
page 8
The built-in function code() compiles complete SNOBOL4 source statements, or even complete programs. The compiled program is returned (as a value of type CODE), and when executed, it runs in the then-current environment and has access to the then-current variables. Labels in the compiled program are added to the current program. Programs of type CODE are executed by a variant of the goto clause:
compiled = code(' output = "Hello, world."') :s<compiled>
end
When passing programs to code(), semicolons are used to separate lines.
The calling (already-compiled) program can call, for example, functions that are defined in the code compiled at runtime, and can include gotos to labels only defined in the code compiled at runtime. Likewise, the code compiled at runtime has access to not just variables, but also files, functions, etc., that are in the already-compiled program.
Sparkling
In Sparkling, the standard library provides functions to compile expressions and statements into functions. Each such function is considered a different top-level program, running in the execution context of its "parent" program (i. e. the piece of code from within which it was created). Consequently, functions compiled at runtime share their environment (e. g. all globals) with their parent program.
Compiled expressions and statements can take arbitrary arguments and return values to the caller. As with any function, the expression or statement being compiled can refer to its arguments using the # prefix operator.
An expression always "returns" a value (i. e. evaluates to one) to the caller. Basically, compiling an expression is semantically (and syntactically) equivalent with creating a function with no declared arguments of which the body consists of a single return statement, returning the expression.
Evaluating expressions
Simple
let fn = exprtofn("13 + 37");
fn() // -> 50
With arguments
let fn = exprtofn("#0 * #1");
fn(3, 4) // -> 12
Evaluating statements
let fn = compile("for (var i = 0; i < 10; i++) { print(i); }");
fn(); // result: 0 1 2 3 4 5 6 7 8 9
Tcl
Simple Evaluation
Evaluation in the current interpreter:
set four 4
set result1 [eval "expr {$four + 5}"] ;# string input
set result2 [eval [list expr [list $four + 5]]] ;# list input
Evaluation in a restricted context
Tcl handles sandboxing by creating new interpreters. Each interpreter is strongly isolated from all other interpreters except in that the interpreter that creates a sub-interpreter retains management control over that “slave” interpreter. The exact capabilities exposed in the slave are controlled by what commands exist in it; commands in the slave may be aliases for other commands in the master interpreter, which allows for trapping into a more highly authorized context (which can be considered analogous to a system call to an OS kernel).
# Create an interpreter with a default set of restrictions
interp create -safe restrictedContext
# Our secret variable
set v "secret"
# Allow some guarded access to the secret from the restricted context.
interp alias restrictedContext doubleSecret {} example
proc example {} {
global v
lappend v $v
return [llength $v]
}
# Evaluate a script in the restricted context
puts [restrictedContext eval {
append v " has been leaked"
catch {file delete yourCriticalFile.txt} ;# Will be denied!
return "there are [doubleSecret] words in the secret: the magic number is [expr {4 + 5}]"
}]; # --> there are 2 words in the secret: the magic number is 9
puts $v; # --> secret secret
As can be seen, the result of the overall evaluation is the same as the result of the evaluation in the slave.
Note that with providing values to the restricted context, it is normal to do this by providing an alias/trap command in the restricted context to allow the script to pick up the value when it wants it. Although the value could also have been provided by setting a variable in the restricted context, this is fairly unusual in practice. The example above shows how this might be done with the result of the
doubleSecret command.
Evaluation within limits[edit]
Even stronger protection of the master interpreter is available from Tcl 8.5 onwards through the setting of resource limits on the slaves. These allow the master to prevent the evaluated script from going berserk:
set i [interp create]
interp limit $i commands -value [expr [$i eval info cmdcount]+20] -granularity 1
interp eval $i {
set x 0
while {1} { # Infinite loop! Bwahahahaha!
puts "Counting up... [incr x]"
}
}
- Output:
Counting up... 1 Counting up... 2 Counting up... 3 Counting up... 4 Counting up... 5 Counting up... 6 Counting up... 7 Counting up... 8 Counting up... 9 Counting up... 10 command count limit exceeded
TI-89 BASIC[edit]
The function
expr(string) evaluates the string as an expression. It is evaluated in the environment of the calling program (it can see local variables).
The
Exec string, args... statement executes arbitrary 68k machine code (and is thus entirely unsafe).
TODO: Is there a way to execute statements as well as evaluate expressions?
UNIX Shell[edit]
eval is the command to use:
$ a=42
$ b=a
$ eval "echo \$$b"
42
Ursa[edit]
The eval statement in Ursa takes a string and evaluates it as a command, redirecting the console to the specified I/O device.
# writes hello world to the console
eval "out \"hello world\" endl console" console
zkl[edit]In zkl, the compiler is part of the language and compiling a chunk of code returns an executable (which how the REPL works), so
Compiler.Compiler.compileText(
"fcn f(text){text.len()}").f("foobar")
//-->6
All language constructs are allowed, the only sand boxing is the new code can only touch global resources or items explicitly passed in ("foobar" in the example).
- Programming Tasks
- Solutions by Programming Task
- ALGOL 68
- AutoHotkey
- BASIC
- BBC BASIC
- Burlesque
- Caché ObjectScript
- Common Lisp
- Déjà Vu
- E
- EchoLisp
- Elixir
- Erlang
- Factor
- Forth
- Frink
- Go
- Groovy
- GW-BASIC
- Harbour
- HicEst
- J
- Lasso
- Liberty BASIC
- Lua
- JavaScript
- Mathematica
- MATLAB
- Maxima
- Oforth
- OoRexx
- OxygenBasic
- Oz
- PARI/GP
- Perl
- Perl 6
- PHP
- PicoLisp
- Pike
- PowerShell
- Python
- R
- Racket
- REBOL
- REXX
- Ring
- Ruby
- Scheme
- Sidef
- Slate
- Smalltalk
- SNOBOL4
- Sparkling
- Tcl
- TI-89 BASIC
- TI-89 BASIC examples needing attention
- UNIX Shell
- Ursa
- Zkl
- Ada/Omit
- Axe/Omit
- C/Omit
- C++/Omit
- D/Omit
- Haskell/Omit
- Java/Omit
- Lily/Omit
- Pascal/Omit
- PureBasic/Omit
- Swift/Omit | http://rosettacode.org/wiki/Runtime_evaluation | CC-MAIN-2017-13 | refinedweb | 8,260 | 53 |
Tutorials > C++ > Relational Operators
It is almost impossible to make a program worthwhile without allowing it to make
certain decisions. To make a decision, your program needs to be able to
calculate whether a statement is true or false.
This is done using logical and relational operators.
A special data type known as a boolean is a 1 bit
data type. It can consist of either a 0 or a 1.
The data type is declared in C / C++ by the bool data type.
2 special keywords, true and false.
false is equal to 0 and true is equal to 1.
We use these keywords usually instead of 0 and 1 as it is
easier to read and understand.
Contents of main.cpp :
#include <stdlib.h>
#include <iostream>
using namespace std;
int main()
{
To shorten the code, I have created 2 variables, t and f
which have the values of true and false respectively.
bool t = true;
bool f = false;
You can see that if you output t, you get a
1 and NOT the string "true".
cout << t << endl;
Our first logical operator that we will discuss is the ! operator.
This is known as the NOT operator. It is placed
before a variable or expression. It can only be used on a variable or expression
that can be represented as true or false.
It has the effect of giving the opposite ie.
NOT true is false
NOT false is true
cout << "!true : " << !t << endl;
cout << "!false : " << !f << endl;
Another operator is the AND (&&)
operator. Both expressions need to be true for an
&& operator to return true.
If any of the 2 operands are false, the final answer is false.
cout << "true && true : " << (t && t) << endl;
cout << "true && false : " << (t && f) << endl;
cout << "false && false : " << (f && f) << endl;
Another operator is the OR (||)
operator. Either expression needs to be true for a
|| operator to return true.
If any of the 2 operands are true, the final answer is true.
cout << "true || true : " << (t || t) << endl;
cout << "true || false : " << (t || f) << endl;
cout << "false || false : " << (f || f) << endl;
Now that you understand how logical operators work, we can mix them with relational
operators. Relational operators compare two operands with each other. Some of the relational operators
are shown below :
Please note that the equal to operator is NOT = but.
It must be a double equals (==).
If you said (x == 3), this would return true if x was equal to 3. If you said
(x = 3), it would assign 3 to x.
As you may have picked up from the previous statement, the result of any relational
comparison is a boolean value.
You may have not seen it before, but you can define more than one variable on the
same line if they are the same type as shown below.
int x = 3, z = 3;
int y = 6;
The statements below give examples of relational expressions. 3 is not greater
than 6, therefore (3 > 6) returns false.
cout << "3 > 6 : " << (x > y) << endl;
cout << "3 < 6 : " << (x < y) << endl;
cout << "3 == 6 : " << (x == y) << endl;
cout << "3 == 3 : " << (x == z) << endl;
cout << "3 >= 1 : " << (3 >= 1) << endl;
cout << "3 != 1 : " << (3 != 1) << endl;
Relational expressions returning boolean values
allow us to mix them with logical expressions.
The first statement in the next code segment will evaluate as follows :
cout << "3 < 9 OR 7 > 2 : " <<
((3 < 9) || (7 > 2)) << endl;
Similarly, the next statement will evaluate as follows :
cout << "4 != 6 AND 10 > 12 : " <<
((4 != 6) && (10 > 12)) << endl;
system("pause");
return 0;
}
Congratulations. You should now know how to create logical and relational expressions
using special operators. The operators will become second nature to you as they are the basis
of every program. You may be wondering how these expressions can help you? You will be able
to see this more clearly in the next tutorial.
Please let me know of any comments you may have : Contact Me
Back to Top
Read the Disclaimer | http://www.zeuscmd.com/tutorials/cplusplus/12-RelationalOperators.php | CC-MAIN-2019-04 | refinedweb | 662 | 73.88 |
tqdm
tqdm (read taqadum, تقدّم) means "progress" in arabic.
Instantly make your loops show a smart progress meter – just wrap any iterable with "tqdm(iterable)", and you’re done!
from tqdm import tqdm for i in tqdm(range(9)): ...
Here’s what the output looks like:
76%|████████████████████ | 7641/10000 [00:34<00:10, 222.22 it/s]
trange(N) can be also used as a convenient shortcut for
tqdm(xrange(N)) .
Overhead is low — about 60ns per iteration (80ns with
gui=True ), and is unit tested against performance regression. By comparison, the well establishedProgress, Solaris/SunOS), in any console or in a GUI, and is also friendly with IPython/Jupyter notebooks.
tqdm does not require any library (not even curses!) to run, just a vanilla Python interpreter will do and an environment supporting
carriage return /r and
line feed /n control characters.
Table of contents
Latest pypi stable release
pip install tqdm
Latest development release on github
Pull and install in the current directory:
pip install -e git+
The list of all changes is available either onGithub’s Releases or on crawlers such as allmychanges.com .
tqdm is very versatile and can be used in a number of ways. The two main ones are given below.
Wrap
tqdm() around any iterable:
text = "" for char in tqdm(["a", "b", "c", "d"]): text = text + char
trange(i) is a special optimised instance of
tqdm(range(i)) :
for i in trange(100): pass
Instantiation outside of the loop allows for manual control over
tqdm() :
pbar = tqdm(["a", "b", "c", "d"]) for char in pbar: pbar.set_description("Processing %s" % char)
Manual control on
tqdm() updates by using a
with statement:
with tqdm(total=100) as pbar: for i in range(10): pbar.update(10)
If the optional variable
total (or an iterable with
len() ) is provided, predictive stats are displayed.
with is also optional (you can just assign
tqdm() to a variable, but in this case don’t forget to
del or
close() at the end:
pbar = tqdm(total=100) for i in range(10): pbar.update(10) pbar.close()
class tqdm(object): """ Decorate an iterable object, returning an iterator which acts exactly like the original iterable, but prints a dynamically updating progressbar every time a value is requested. """ def __init__(self, iterable=None, desc=None, total=None, leave=True, file=sys.stderr, ncols=None, mininterval=0.1, maxinterval=10.0, miniters=None, ascii=None, disable=False, unit='it', unit_scale=False, dynamic_ncols=False, smoothing=0.3, bar_format=None, initial=0, position=None):
- iterable : iterable, optional
Iterable to decorate with a progressbar. Leave blank [default: None] to manually manage the updates.
- desc : str, optional
Prefix for the progressbar [default: None].
- total : int, optional
The number of expected iterations. If not given, len(iterable) is used if possible. As a last resort, only basic progress statistics are displayed (no ETA, no progressbar). If gui is True and this parameter needs subsequent updating, specify an initial arbitrary large positive integer, e.g. int(9e9).
- leave : bool, optional
If [default: True], removes all traces of the progressbar upon termination of iteration.
- file : io.TextIOWrapper or io.StringIO, optional
Specifies where to output the progress messages [default: sys.stderr]. Uses file.write(str) and file.flush() methods.
- ncols : int, optional
The width of the entire output message. If specified, dynamically resizes the progressbar to stay within this bound. If [default: None], attempts to use environment width. The fallback is a meter width of 10 and no limit for the counter and statistics. If 0, will not print any meter (only stats).
- mininterval : float, optional
Minimum progress update interval, in seconds [default: 0.1].
- maxinterval : float, optional
Maximum progress update interval, in seconds [default: 10.0].
- miniters : int, optional
Minimum progress update interval, in iterations [default: None]. If specified, will set mininterval to 0.
- ascii : bool, optional
If [default: None] or false, use unicode (smooth blocks) to fill the meter. The fallback is to use ASCII characters 1-9 #.
- disable : bool
Whether to disable the entire progressbar wrapper [default: False].
- unit : str, optional
String that will be used to define the unit of each iteration [default: ‘it’].
- unit_scale : bool, optional
If set, the number of iterations will be reduced/scaled automatically and a metric prefix following the International System of Units standard will be added (kilo, mega, etc.) [default: False].
- dynamic_ncols : bool, optional
If set, constantly alters ncols to the environment (allowing for window resizes) [default: False].
- smoothing : float
Exponential moving average smoothing factor for speed estimates (ignored in GUI mode). Ranges from 0 (average speed) to 1 (current/instantaneous speed) [default: 0.3].
- bar_format : str, optional
Specify a custom bar string formatting. May impact performance. [default: ‘{l_bar}{bar}{r_bar}’], where l_bar is ‘{desc}{percentage:3.0f}%|’ and r_bar is ‘| {n_fmt}/{total_fmt} [{elapsed_str}<{remaining_str}, {rate_fmt}]’. Possible vars: bar, n, n_fmt, total, total_fmt, percentage, rate, rate_fmt, elapsed, remaining, l_bar, r_bar, desc.
- initial : int, optional
The initial counter value. Useful when restarting a progress bar [default: 0].
- position : int, optional
Specify the line offset to print this bar. Useful to manage multiple bars at once (eg, from threads).
- out : decorated iterator.
def update(self, n=1): """ Manually update the progress bar, useful for streams such as reading files. E.g.: >>> t = tqdm(total=filesize) # Initialise >>> for current_buffer in stream: ... ... ... t.update(len(current_buffer)) >>> t.close() The last line is highly recommended, but possibly not necessary if `t.update()` will be called in such a way that `filesize` will be exactly reached and printed. Parameters ---------- n : int Increment to add to the internal counter of iterations [default: 1]. """ def close(self): """ Cleanup and (if leave=False) close the progressbar. """ def trange(*args, **kwargs): """ A shortcut for tqdm(xrange(*args), **kwargs). On Python3+ range is used instead of xrange. """ class tqdm_gui(tqdm): """ Experimental GUI version of tqdm! """ def tgrange(*args, **kwargs): """ Experimental GUI version of trange! """
Examples and Advanced Usage
See theexamples folder or import the module and run
tqdm can easily support callbacks/hooks and manual updates. Here’s an example with
urllib :
urllib.urlretrieve documentation
[…]
If present, the hook function will be called once
on establishment of the network connection and once after each block read
thereafter. The hook will be passed three arguments; a count of blocks
transferred so far, a block size in bytes, and the total size of the file.
[…]
import urllib from tqdm import tqdm def my_hook(t): """ Wraps tqdm instance. Don't forget to close() or __exit__() the tqdm instance once you're done with it (easiest using `with` syntax). Example ------- >>> with tqdm(...) as t: ... reporthook = my_hook(t) ... urllib.urlretrieve(..., reporthook=reporthook) """ last_b = [0] def inner(b=1, bsize=1, tsize=None): """ b : int, optional Number of blocks just transferred [default: 1]. bsize : int, optional Size of each block (in tqdm units) [default: 1]. tsize : int, optional Total size (in tqdm units). If [default: None] remains unchanged. """ if tsize is not None: t.total = tsize t.update((b - last_b[0]) * bsize) last_b[0] = b return inner eg_link = '' with tqdm(unit='B', unit_scale=True, leave=True, miniters=1, desc=eg_link.split('/')[-1]) as t: # all optional kwargs urllib.urlretrieve(eg_link, filename='/dev/null', reporthook=my_hook(t), data=None)
It is recommend to use
miniters=1 whenever there is potentially large differences in iteration speed (e.g. downloading a file over a patchy connection).
Due to popular demand we’ve added support for
pandas — here’s an example for
DataFrameGroupBy.progress_apply :
import pandas as pd import numpy as np from tqdm import tqdm, tqdm_pandas df = pd.DataFrame(np.random.randint(0, 100, (100000, 6))) # Create and register a new `tqdm` instance with `pandas` # (can use tqdm_gui, optional kwargs, etc.) tqdm_pandas(tqdm()) # Now you can use `progress_apply` instead of `apply` df.groupby(0).progress_apply(lambda x: x**2)
In case you’re interested in how this works (and how to modify it for your own callbacks), see theexamples folder or import the module and run
tqdm supports nested progress bars. Here’s an example:
from tqdm import trange from time import sleep for i in trange(10, desc='1st loop'): for j in trange(5, desc='2nd loop', leave=False): for k in trange(100, desc='3nd loop'): sleep(0.01)
On Windowscolorama will be used if available to produce a beautiful nested display.
For manual control over positioning (e.g. for multi-threaded use), you may specify position=n where n=0 for the outermost bar, n=1 for the next, and so on.
How to make a good progress bar
A good progress bar is a useful progress bar. To be useful,
tqdm displays statistics and uses smart algorithms to predict and automagically adapt to a variety of use cases with no or minimal configuration.
However, there is one thing that
tqdm cannot do: choose a pertinent progress indicator. To display a useful progress bar, it is very important that
tqdm:
import os from tqdm import tqdm, trange from time import sleep def dosomething(buf): """Do something with the content of a file""" sleep(0.01) pass def walkdir(folder): """Walk through each files in a directory""" for dirpath, dirs, files in os.walk(folder): for filename in files: yield os.path.abspath(os.path.join(dirpath, filename)) def process_content_no_progress(inputpath, blocksize=1024): for filepath in walkdir(inputpath): with open(filepath, 'rb') as fh: buf = 1 while (buf): buf = fh.read(blocksize) dosomething(buf)
process_content_no_progress() does the job, but does not show any information about the current progress, nor how long it will take.
To quickly fix that using
tqdm , we can use this naive approach:
def process_content_with_progress1(inputpath, blocksize=1024): for filepath in tqdm(walkdir(inputpath), leave=True): with open(filepath, 'rb') as fh: buf = 1 while (buf): buf = fh.read(blocksize) dosomething(buf)
process_content_with_progress1() will load
tqdm() , but since the iterator does not provide any length (
os.walkdir() does not have a
__len__() method for the total files count), there is only an indication of the current and past program state, no prediction:
4it [00:03, 2.79it/s]
The way to get predictive information is to know the total amount of work to be done. Since
os.walkdir() cannot give us this information, we need to precompute this by ourselves:
def process_content_with_progress2(inputpath, blocksize=1024): # Preprocess the total files count filecounter = 0 for dirpath, dirs, files in tqdm(os.walk(inputpath)): for filename in files: filecounter += 1 for filepath in tqdm(walkdir(inputpath), total=filecounter, leave=True): with open(filepath, 'rb') as fh: buf = 1 while (buf): buf = fh.read(blocksize) dosomething(buf)
process_content_with_progress2() is better than the naive approach because now we have predictive information:
50%|██████████████████████ | 2/4 [00:00<00:00, 4.06it/s]
However, the progress is not smooth: it increments in steps, 1 step being 1 file processed. The problem is that we do not just walk through files tree, but we process the files contents. Thus, if we stumble on one very large file which takes a great deal more time to process than other smaller files, the progress bar will still considers that file is of equal processing weight.
To fix this, we should use another indicator than the files count: the total sum of all files sizes. This would be more pertinent since the data we process is the files’ content, so there is a direct relation between size and content.
Below we implement this approach using a manually updated
tqdm bar, where
tqdm will work on size, while the
for loop works on files paths:
def process_content_with_progress3(inputpath, blocksize=1024): # Preprocess the total files sizes sizecounter = 0 for dirpath, dirs, files in tqdm(os.walk(inputpath)): for filename in files: fullpath = os.path.abspath(os.path.join(dirpath, filename)) sizecounter += os.stat(fullpath).st_size # Load tqdm with size counter instead of files counter with tqdm(total=sizecounter, leave=True, unit='B', unit_scale=True) as pbar: for dirpath, dirs, files in os.walk(inputpath): for filename in files: fullpath = os.path.abspath(os.path.join(dirpath, filename)) with open(fullpath, 'rb') as fh: buf = 1 while (buf): buf = fh.read(blocksize) dosomething(buf) if buf: pbar.update(len(buf))
And here is the result: a much smoother progress bar with meaningful predicted time and statistics:
47%|██████████████████▍ | 152K/321K [00:03<00:03, 46.2KB/s]
To run the testing suite please make sure tox ( ) is installed, then type
tox from the command line.
Where
tox is unavailable, a Makefile-like setup is provided with the following command:
$ python setup.py make alltests
To see all options, run:
$ python setup.py make
See the CONTRIBUTE file for more information.
Multiple licences, mostly MPLv2.0, MIT licences .
- Casper da Costa-Luis (casperdcl)
- Stephen Larroque (lrq3000)
- Hadrien Mary (hadim)
- Noam Yorav-Raphael (noamraph)*
- Ivan Ivanov (obiwanus)
- Mikhail Korobov (kmike)
* Original » Tqdm – a fast, extensible progress bar for Python
评论 抢沙发 | http://www.shellsec.com/news/6388.html | CC-MAIN-2016-50 | refinedweb | 2,130 | 56.86 |
The whole area of Face Recognition is something I love reading about. Implementing one yourself makes you sound like you are Tony Stark and you can use them for a variety of different projects such as an automatic lock on your door, or building a surveillance system for your office.
In this tutorial, we are going to be building our own, really simple face recognition based system in Go using a few existing libraries. We’ll start by doing simple face recognition on still images and seeing how that works and we’ll then be expanding upon this to look into real-time face recognition on video feeds in part 2 of this mini-series.
Video Tutorial
This tutorial is also available in video format! If you want to support me and my channel then please like and subscribe :)
The Kagami/go-face package
For the basis of this tutorial, we’ll be using the kagami/go-face package which wraps around the dlib machine learning toolkit!
Kagami actually wrote about how he went about writing this package. It’s definitely an interesting read and you can find it here:
The dlib toolkit
The Dlib toolkit is built in C++ and is incredible at both face and object recognition/detection. According to its documentation, it scores around 99.4% accuracy on detecting labeled faces in the Wild benchmark which is incredible and it’s the reason why so many other third-party libraries utilize it as their base.
I’ve covered the Dlib toolkit’s Python library — face_recognition in a previous tutorial. If you want to check out the python equivalent of this tutorial, here it is: An introduction to Face Recognition in Python
Setup
I’m not going to lie, getting this up and running is slightly more painful than your standard Go package. You’ll need to install both
pkg-config and
dlib on your machine. If you are running on MacOS then this is the command:
$ brew install pkg-config dlib
$ sed -i '' 's/^Libs: .*/& -lblas -llapack/' /usr/local/lib/pkgconfig/dlib-1.pc
Getting Started
We’ll first of all need to download the `kagami/go-face` package which can be done with the following `go get` command:
$ go get -u github.com/Kagami/go-face
Create a new directory called
go-face-recognition in your GOPATH directory. Within this directory create a new file called
main.go, this is where all of our source code is going to reside.
Once you’ve done this, you will need to grab the files from the
image/ directory in the TutorialEdge/go-face-recognition-tutorial repo. The easiest way to do this is to clone the repo into another directory and just copy the image directory into your current working directory:
$ git clone
Once that has been successfully cloned, we have both the
.dat files that we need in order to kick off our face recognition program. You should also see a list of other
.jpg files which contain the faces of some of the Marvel Avengers.
package main
import (
"fmt"
"github.com/Kagami/go-face"
)
const dataDir = "testdata"
func main() {
fmt.Println("Facial Recognition System v0.01")
rec, err := face.NewRecognizer(dataDir)
if err != nil {
fmt.Println("Cannot initialize recognizer")
}
defer rec.Close()
fmt.Println("Recognizer Initialized")
}
Ok, so if we try and run our program at this point, we should see both
Facial Recognition System v0.01 and
Recognizer Initialized in our program’s output. We’ve successfully set everything we need up in order to do some cool advanced facial recognition!
Counting Faces in a Picture
Our first real test of this package will be to test to see whether we can accurately count the number of faces in a photograph. For the purpose of this tutorial, I will be using this photo:
As you can see, nothing fancy, just the solitary face of Tony Stark.
So, we now need to extend our existing program to be able to analyze this image and then count the number of faces within said image:
package main
import (
"fmt"
"log"
"path/filepath"
"github.com/Kagami/go-face"
)
const dataDir = "testdata"
func main() {
fmt.Println("Facial Recognition System v0.01")
rec, err := face.NewRecognizer(dataDir)
if err != nil {
fmt.Println("Cannot initialize recognizer")
}
defer rec.Close()
fmt.Println("Recognizer Initialized")
// we create the path to our image with filepath.Join
avengersImage := filepath.Join(dataDir, "tony-stark.jpg")
// we then call RecognizeFile passing in the path
// to our file to retrieve the number of faces and any
// potential errors
faces, err := rec.RecognizeFile(avengersImage)
if err != nil {
log.Fatalf("Can't recognize: %v", err)
}
// we print out the number of faces in our image
fmt.Println("Number of Faces in Image: ", len(faces))
}
When we run this, we should see the following output:
$ go run main.go
Facial Recognition System v0.01
Recognizer Initialized
Number of Faces in Image: 1
Awesome, we’ve been able to analyze an image and determine that the image contains the face of one person. Let’s try a more complex image with more of the Avengers in it:
When we update line 24:
avengersImage := filepath.Join(dataDir, "avengers-01.jpg")
And re-run our program, you should see that our program is able to determine that 2 people are in this new image.
Recognizing Faces:
Sweet, so we’re able to calculate the number of faces in an image, now what about actually determining who those people are?
To do this, we’ll need a number of reference photos. For example, if we wanted to be able to recognize Tony Stark out of a photo, we would need example photos tagged with his name. The recognition software would then be able to analyze photos for faces with his likeness and match them together.
So, let’s take our
avengers-02.jpeg as our reference image for Tony Stark and then see if we can identify him from the image we previously used for him.
avengersImage := filepath.Join(dataDir, "avengers-02.jpeg")
faces, err := rec.RecognizeFile(avengersImage)
if err != nil {
log.Fatalf("Can't recognize: %v", err)
}
fmt.Println("Number of Faces in Image: ", len(faces))
var samples []face.Descriptor
var avengers []int32
for i, f := range faces {
samples = append(samples, f.Descriptor)
// Each face is unique on that image so goes to its own category.
avengers = append(avengers, int32(i))
}
labels := []string{
"Dr Strange",
"Tony Stark",
"Bruce Banner",
"Wong",
}
// Pass samples to the recognizer.
rec.SetSamples(samples, avengers)
So, in the above code, we’ve gone through all of the faces in order from left to right and labeled them with their appropriate names. Our recognition system can then use these reference samples to try and perform it’s own facial recognition on subsequent files.
Let’s try testing out our recognition system with our existing image of Tony Stark and seeing if it’s able to recognize this based of the face descriptor it generated from the
avengers-02.jpeg file:
testTonyStark := filepath.Join(dataDir, "tony-stark.jpg")
tonyStark, err := rec.RecognizeSingleFile(testTonyStark)
if err != nil {
log.Fatalf("Can't recognize: %v", err)
}
if tonyStark == nil {
log.Fatalf("Not a single face on the image")
}
avengerID := rec.Classify(tonyStark.Descriptor)
if avengerID < 0 {
log.Fatalf("Can't classify")
}
fmt.Println(avengerID)
fmt.Println(labels[avengerID])
Let’s now try to validate that this wasn’t a fluke and try to see if our image recognition system works with an image of Dr Strange.
testDrStrange := filepath.Join(dataDir, "dr-strange.jpg")
drStrange, err := rec.RecognizeSingleFile(testDrStrange)
if err != nil {
log.Fatalf("Can't recognize: %v", err)
}
if drStrange == nil {
log.Fatalf("Not a single face on the image")
}
avengerID = rec.Classify(drStrange.Descriptor)
if avengerID < 0 {
log.Fatalf("Can't classify")
}
And finally, let’s try this out using Wong’s image:
testWong := filepath.Join(dataDir, "wong.jpg")
wong, err := rec.RecognizeSingleFile(testWong)
if err != nil {
log.Fatalf("Can't recognize: %v", err)
}
if wong == nil {
log.Fatalf("Not a single face on the image")
}
avengerID = rec.Classify(wong.Descriptor)
if avengerID < 0 {
log.Fatalf("Can't classify")
}
fmt.Println(avengerID)
fmt.Println(labels[avengerID])
When you run this all together, you should see the following output:
$ go run main.go
Facial Recognition System v0.01
Recognizer Initialized
Number of Faces in Image: 4
1 Tony Stark
0 Dr Strange
3 Wong
Awesome, we’ve managed to build up a really simple face recognition system that allows us to identify the various different Avengers.
Challenge: Build up a number of reference files on all of the Avengers and try to extract out the face recognition code snippets into a reusable function
Complete Source Code:
The complete source code for this tutorial can be found in Github: Tutorialedge/go-face-recognition-tutorial
Conclusion
In this tutorial, we successfully managed to build a really simple face recognition system that works on still images. This will hopefully form the basis of the next part of this tutorial series, in which we look at how to do this in a real-time context on a video stream.
Hopefully you enjoyed this tutorial, if you did then please let me know in the comments section down below!
Originally published at tutorialedge.net. | https://hackernoon.com/go-face-recognition-tutorial-part-1-373357230baa | CC-MAIN-2019-47 | refinedweb | 1,543 | 56.96 |
.us Domains Coming in 2002 261
marnanel writes "Perhaps it had to happen eventually: the .us top-level domain has been transferred to a private company, NeuStar. One of the most interesting effects of this is that second-level domains, such as foo.us, will be available for the first time, instead of the existing hierarchical county.state.us system." But not until mid 2002.
Finally (Score:5, Funny)
Re:Finally (Score:2, Funny)
Re:Finally (Score:2, Insightful)
Re:Finally (Score:3, Funny)
kinda like if you remember when NSI wouldn't let people register domains with swears in them... (like the "f" word), so someone registered
.off.com.....
I'm just wondering what lucky porn site is gonna get fuck.us
Re:Finally (Score:2)
Unfortunately, it won't be put to any good use. Some waste of sperm will toss up a porno clusterfuck page, with unending pop-up windows when you try to close it. I'd rather see it used for something amusing.
BFD. (Score:4, Insightful)
On the upside for NeuStar, they are sure to make a fortune from all the companies sick of getting into lawsuits over this sort of thing and buy thier
How about an email address at ... (Score:1, Funny)
or
sue.us
us domain jokes (Score:1)
.org.us (Score:1, Interesting)
.edu and .gov (Score:4, Insightful)
Since they are only used by the US government and US schools, I think they should be moved to
Just my thoughts..
-J
Re:.edu and .gov (Score:1)
also, in the name of simplicity I think we should keep things as they are: established and shorter than the proposed change.
Re:.edu and .gov (Score:2)
Re:.edu and .gov (Score:1)
Falklands worth invading???? (Score:2)
hawk
Re:.edu and .gov (Score:1)
Re:.edu and .gov (Score:1)
Since they are only used by the US governement and US schools
There is at least one non-US
.edu domain: mm.edu. .gov is quite US-only, though.
But it's strange that this happens now, when most other national top-domains have lost their "national" feeling, with USians controlling domains like
.nu where a lot of Swedish companies (and even branches of the government) have sites (since "nu" means "now" in Swedish).
Oh, well, perhaps we'll see non-US domains under
.us? That would be the perfect retribution...
Re:.edu and .gov (Score:1, Flamebait)
Plus, non-US organizations are free to use the non-country specific TLDs. Check out london.edu, nokia.com, or un.org.
Re:.edu and .gov (Score:1)
Re:.edu and .gov (Score:2, Insightful)
I think that the good people at CERN would disagree with that statement.
You're thinking of Tim's development of the Web. The Interent and the Web are not synonymous.
1Alpha7
Re:.edu and .gov (Score:1)
Re:.edu and .gov (Score:1)
While the US DID invent
Europe DID invent latin and greek letters.
Without us you would have to write your domains in babylonian, hebrew or chinese letters.
BTW, we hold all copyrights for the latin and greek alphabet, so start rolling the money over or stop using our IP !
Ignorance check! (Score:2)
> world series loving Americans, more then one country.
uh, check on what the "World Series" is . . .
It does *not* mean "world champion," and never did.
"World" was the name of the now defunct newspaper that schemed to get the champions of the two major professional baseball leagues to play each other. They slapped their name on it. The event and its name have outlasted the paper by decades . . .
hawk
US TLD linked at the hip to BIZ TLD (Score:2, Interesting)
In other words, if you don't accept the ICANN version of
Re:US TLD linked at the hip to BIZ TLD (Score:1, Interesting)
Just what we need (Score:1)
All these domains do is make things more complicated for those of us who have to remember all these web addresses and more expensive for companies trying to protect their trademarks in cyberspace. Maybe we should REDUCE the number of domains... From now on let's just
Re:Just what we need (Score:1)
I got "r" and "are" (Score:1)
Squatters.r.us
Re:I got "r" and "are" (Score:1)
Re:I got "r" and "are" (Score:2)
Since you also mentioned two-letter domain names in your post as being verboten, what about Hewlett-Packard [hp.com] or Texas Instruments [ti.com]? (You need the "www." in front of them, though, to access their websites. General Motors [gm.com], OTOH, works without the "www.")
I had "z" (Score:2, Interesting)
I was the first to register Z.COM. IANA once gave a directive that said, "all one-letter names shall be reserved to enable name-hashing at a later time". Working for a company that registered domain names on a daily basis, I thought, "If X.ORG can have a domain name, why can't I register Z.COM?" To my surprise, it worked! The following month, IANA gobbled up all the rest of the one-letter names.
A few years later, I started having people knock on my door monthly saying they'd buy or trade my domain. They didn't see much of a value to it, and neither did I. While I was a bit altruistic, I did have a price in mind where I'd do away with my domain. One day this guy offered me 50% more than that price, so I took it. It went toward a down payment on a house that later made me some real money.
The guy tried to make a simple Z.COM web portal out of it. Their gimmick was that all one had to do was hit "z" on their web browser address, and poof, there you were at Z.COM. The portal never gained momentum.
Other people bought it from him and tried again to make a portal out of it, but their gimmick was to give "lifetime" e-mail accounts if they visited the portal regularly. Again, another Z.COM portal failed, and those "lifetime" addresses disappeared with it.
The next purchaser was apparently IDEAlab. They never did anything with it and with their financial demise probably thought they should sell/dump it for whatever they could.
Enter Nissan. My guess is that they might release or re-release a "Z" car in the future.
I mildly regret selling the name away. I thought the purchaser would have done something better with it. I could give Nissan a web redirect as good as anyone else.
--
Eric Z iegast
eric@z.com
uunet!z!eric
Re:I got "r" and "are" (Score:2)
Q is Qwest's ticker symbol. I'd assume that TI and GM are their symbols as well, though HP's is HWP, so...
OK, so there wasn't *really* a point.
:)
Other domain suggestions: (Score:1)
or
theshortb.us for archival of all score 1 posts.
and
RIAA.us, MPAA.us and FUCK.us, because after all they are 4-letter words meaning the same thing, right?
(I suppose that TLD in this case would mean Top Level Dicks...but I digress....)
Re:Other domain suggestions:..oops (Score:1)
dang, should have been (less than symbol) 1 posts.....
Looks like I'll be the owner of the shortb.us domain...
Whatever.us? (Score:1)
If not who will get screw.us? A p0rn site or another "name your price" e-tailer site??
Some ideas... (Score:2)
long.live.the.us
you.missed.the.b.us
time.to.disc.us/s
come.to.us
visit.us
screw.us
i.hate.the.us
Okay, that's enough...
so.. (Score:2, Offtopic)
Re:so.. (Score:2)
So? Spam them back as!
Re:so.. (Score:1)
<sigh>
--
Every normal man must be tempted, at times, to spit on his hands, hoist the black flag, and begin slitting throats.
-- H. L. Mencken
"Funny" domains coming up... (Score:2)
Any other funny URL predictions?
.asm (Score:3, Interesting)
These domain names were just brain farts, i do not support acts of terrorism.
Re:.asm (Score:1)
Get quick. It's still available.
Re:.asm [a bit OT, watch out] (Score:1)
From their registration rules:
B.0.1 Identification
The domain name that is requested for the registration of an entity must neither be misleading nor obscure. San Marino RA can inform the applicant about possible ambiguities and ask for a changed application.
NOTE: The domain name that is chosen for the registration must be similar to the applicant entity name or it must be similar to one of its services, products, trade-marks and so on in order to assure an easy identification of the name itself.
So, unless your last name is Orga, or own a company that's called that way, you're not gonna get it from them.
Dunno about sex sites though: "So what do you sell?", "Well, we provide people with orga's, don't know what they are?"
Competition for AskJeeves? (Score:2, Redundant)
.us domain: (Score:3, Funny)
finally (Score:4, Insightful)
Now if only
/Erik
Re:finally (Score:1)
Yeah, and reserve
.gov for the future world governement, and .mil for the global military that will defend us from evil Borgs...
Re:finally (Score:1)
Who let the optimist in here? (Score:3, Insightful)
First, we've had a top-level domain like all the other countries have had -- each with their own rules and rulers -- it's just that ours were outstandingly misguided.
However, I have little confidence that the new ones will be any better. In any event, there is no chance that <big-american-corp> is going to give up <big-american-corp>.com -- they'll just have <big-american-corp>.com.us too -- wheeee, won't that be special!
Re:finally (Score:1)
/Erik
Weird (Score:1)
Hmmm, obviously I know less about the history of the DNS than I thought I did
Everytime this comes up on nanog, I tend to glaze over. I should pay more attention, I know...
*.co.us isn't what you think it is... (Score:2)
Re:Weird (Score:1)
please.stop.bombing.us
(will it be Funny or Flamebait?)
Knunov
Re:Open to Afghans? (Score:1)
Re:Open to Afghans? (Score:1)
Re:Open to Afghans? (Score:1)
Re:Open to Afghans? (Score:1)
Please try a different name, or press Here for help.
Is this really that important? (Score:1)
Re:Is this really that important? (Score:1)
Unfortunately,
.com and friends is much too polluted to be able to make that distinction. But then again, so are most the national domains as well, so I don't really see why this would be any better.
Perhaps we should just scrap the current DNS system and create a new domain structure from scratch? That would be something...
Wow. (Score:2, Interesting)
Then again, because certain municipalities were delegated to various ISP's it wasn't necessarily free... in Richmond, VA i2020.net wanted $200 per year for mydomain.richmond.va.us. This was only after 6 hours on the phone, trying to convince various people there that they had it delegated to them...
Maybe I take these things too seriously, but it makes me sick just thinking about it.
Fair. (Score:1, Flamebait)
I always thought it's fair that US companies register themselves as
.com without a .country suffix. It's fair because the internet was born in the US.
But OTOH every country has its own suffix and the US has none. Now each country has its own suffix and can also register a "country-suffix-less" domain. It's much more fair to everybody.
Well, it's good to have
.us domains; to me it sounds like the www is becoming much more "world" than "web".
Re:Fair. (Score:2)
I always thought it's fair that US companies register themselves as
.com without a .country suffix. It's fair because the internet was born in the US.
That's not how it's set up.
.com is for companies. No mention of them being US, just commercial. If you want a regional domain for the US, use .us.
Re:Fair. (Score:2)
Here in brazil governmental sites uses
.gov.br what about in US? It's .gov and not .gov.us.
That's what I meant. Not only to commercial sites, but also for government, educational, etc
Re:Fair. (Score:1)
I've always thought of
.com/net/org as an international/US domain. Not just US. I think it would be good to have a .us. .coms are being used as an international suffix. Even though a site may not be based in the US, they might have a .com to show that they expect an international audience.
It's as if the
Sometimes it's even sillier, like when people get somethingNZ.com, and don't even bother to get something.co.nz or somethingNZ.co.nz. When the site is clearly supposed to be local.
So, don't think that MS or Canon, IBM, even slashdot (any big company, or location-irrelevant sites) will be getting a
.com.us for their main domain anytime soon. But maybe walmart.com.us might appear (or any other US only places).
(Sigh) (Score:4, Insightful)
You could also have
Still, one positive feature of the new setup is that there won't be artificial scarcity created underneath the
(Possible new business for Sealand: lawyer-proof
Re:(Sigh) (Score:3, Informative)
Umm.
Re:(Sigh) (Score:2)
I think the big secret about the TLD system is that it isn't perfect, but it works as well as any other system would. Your system would not stop the lawsuits due to trademark confusion just because someone registered it as mcdonalds.fcfs.com instead of mcdonalds.tm.com. The only real problem with the DNS system is that it is hard to get the exact name you want. But that is going to be the case with any number of TLDs I believe, because people will buy them up either way.
Making people prove they deserve a domain is even worse....it takes time, and would be an anchor on the internet. I don't want to have to wait 2 weeks to get some pinhead to approve my registration application.
Unfortunately, big corporations have an advantage over an individual. Such is life.
Re:(Sigh) (Score:2)
I agree, but some people will try to inflict PITA bureaucracy whether you ask for it or not. Witness the arbitration system for
.com (guinessbeersucks.com, gateway.com and so on).
My suggestion was aimed at keeping these people in a contained space. They can have their legally regulated
.tm and .com domains - where the rules are clear and explicit, and not as arbitrary as the current system - and everyone else can use .fcfs for first come, first served.
(Sigh) Ignorance must be bliss (Score:2)
It's so easy to be sure of oneself when ignorant.
Exactly which "Olympia" gets that olympia.us? Olympia Pizza down the street from me? One of the 1,000 other unrelated Olympia Pizza's across the US? Olympia Cruise Lines? Olympia Finance Corporation? Matt Olympia?
What about trademarks? NT goes to Microsoft or to Nortel (nee Northern Telecom)? What about the dozens of other trademarked NT's in various fields? NT adhesive or NT car parts?
Sigh - Ignorance must be bliss.
Re:(Sigh) Ignorance must be bliss (Score:2)
Re:(Sigh) Ignorance must be bliss (Score:2)
For
Re:(Sigh) Ignorance must be bliss (Score:2)
Thus you need geographic parts (which is what exists now) and most likely "type" parts of the name. Certainly you need those for trademarks.
What about trademarks? NT goes to Microsoft or to Nortel (nee Northern Telecom)?
Most likely domains for companies should be restricted to legal or trading names. Rather than those of products. Also Nortel is a Canadian company IIRC.
and where do personal sites belong? (Score:2)
Where are the little guys supposed to go? they're not
Maybe this will get your collective minds going (Score:2)
Re:Maybe this will get your collective minds going (Score:2)
grep 'us$' /usr/share/dict/words
You may wish to pipe the output into less, or redirect it into a file.
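For a quick sanity check that doesn't depend on any particular system's word list, the same pattern can be demonstrated on an inline sample (the four words here are made up for illustration):

```shell
# Words ending in "us" -- the $ anchors the match to the end of the line.
# On most systems you'd pipe /usr/share/dict/words in instead.
printf 'status\nbonus\napple\ncactus\n' | grep 'us$'
```

This prints status, bonus and cactus; apple is filtered out.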
Re:Maybe this will get your collective minds going (Score:2)
And what this means is: (Score:3, Flamebait)
The last bit of organization associated with United States centric domain name organization is gone.
It sucked when
.net, .org, and .com were relegated to equals rather than their intended purpose.
Now
.us will just be the same.
The evolution of things? It's like this:
1. In the beginning, commercial companies who were not network infrastructure providers could only register
.com, thus leaving .org and .net free for nonprofit orgs and network providers.
2. Bill Clinton came along and gave the internet to the corporations, and suddenly U.S. companies registered their names in
.com, .net, and .org. Thus, using even more namespace.
3.
.biz comes along, and those same companies will now have FOUR names in the namespace.
4. Now
.us will be exactly the same. Now those companies will just have mytrademark.com, mytrademark.net, mytrademark.org, mytrademark.biz, and NOW mytrademark.us.
So, can anyone tell me what good this move is, rather than making them register under county.state.us?
Anyone else remember when domain names were free and you never got spam on usenet or e-mail? It was the giving up of
.org and .net brought about by Clinton-Gore that got us where we are today.
When Gore "invented" the internet, what he and Clinton invented was the destruction of its beauty as a free exchange that wasn't dominated by giant corporations wielding laws like the DMCA.
Re:And what this means is: (Score:1)
The move was afoot long before Bill and Al came in.
Re:And what this means is: (Score:2)
Not just their names but also names of their products, even advertising slogans and misspellings. Also quite a few things ended up as
Yes, I remeber the net of the 70's (Score:2)
I have to assume, however, that you do not.
If you did you would be aware that the changes in the net that the original poster was bemoaning have nothing, I will repeat, emphatically, NOTHING, to do with technology.
They are all strictly factors organizational, political and legal. They are *human* changes, and thus behavioural.

Thus, the proper tools for change and improvement are the tools of human interaction. Debate and dissent being chief among these.
If I might paraphrase Linus Torvalds, if you wish to actually say something of value to the issue, show me the argument.
KFG
But will they be used? (Score:2)
I can imagine that some large companies will get the domain, simply to "collect the whole set", but do you seriously imagine that you will start to see ads for on the billboards? I just simply don't see it happening.
Although i would like to see who ends up with trust.us
;-)
Re:But will they be used? (Score:2)
To the Great Unwashed Masses, the only domain worth knowing about is ".com". I was trying to set up a "Reply-To" line for my SprintPCS mail. When I called their tech support, I was told that my email address <xxxxx@xxxxx.chi.il.us> was invalid 'cause it didn't end in ".com"! *sigh*
If it doesn't start with "www." and end with ".com", the muggles just can't cope with it.
NEUSTAR CAN KISS MY (Score:2, Interesting)
The
The United States is a LARGE, well-connected country. It is NOT practical to give 2nd-level domains (joeblow.us) out to the public. The system of org.locality.state.us is much more fair as there will be fewer disputes. Granted, companies and organizations that span more than one locality or state should be allowed to have lower-level (3rd or maybe 2nd) domains.
I emailed Neustar (that is the stupidest name of a company I have ever heard) last week about some of these issues I am concerned about, and never received a response.
As a
Overly confusing? (Score:3, Informative)
This also leads to another problem. Smaller sites don't want to have to manage two extensions (for the sake of costs and fragmentation). A few politically-correct people will start typing in
Here's a scenario:
Small US based business with a website, does no international business. Clearly, Company X shouldn't have to buy a
Once the site has been up and popularized, a potential customer hears about the site; oxygenrx. He types oxygenrx.com into his browser... 404:not found. The potential customer releases a string of obscenities, then proceeds to a competitor's site. The opposite of this is true as well.
The obvious solution to this problem would be to buy a
Another way to put this into perspective is with the naming of a company.
For example, there is a company: Brooklyn Cheese House inc. From the name, you can tell it is strictly a small local business. One day, the management changes its name to Cheese House International. But, it's not an international business: it's still a small retail store in Brooklyn. Surely this will confuse customers (probably those who choose to patronize a local business over a large one). Same concept with the domains: a proper name prevents confusion and improves business.
Of course, this can't all be credited to the lateness in the availability of
Re:Overly confusing? (Score:2)
Utterly wrong, it was invented by a British man at CERN (an international organisation based in Europe.)
Take a great system... (Score:1)
BEWARE (Score:2)
Should've been from the start! (Score:5, Interesting)
It should have been this way right from the start. Every country should have its country code as its top level domain, and that should be subdivided as best convenient for that country. In the U.S., each state would be assigned a 2-letter name under
.us, and that state would be responsible for subdividing further. A big state like California might subdivide further by counties.
It should never have been simply "something.com"--this may have actually helped lead to the
.com mess of the past several years, which has screwed up the tech sector so badly. ("Hey! Here's a business idea! Better register that domain name NOW before someone gets it, write up some press releases, and we're millionaires!" It's all psychology. Make the system more organized and its users will have to be too.) From the very start, people would have gotten used to the fact that some company's domain name is something.county.state.us or something.city.state.us or whatever. (Subdividing by city actually makes more sense (to me) than by county, as your snailmail address includes your street address, city and state, not your county.)
Furthermore,
.net, .com and .org should only have existed for international entities; .net being for network providers; .com for multinational commercial entities and .org for multinational nonprofit organizations. ONLY! These domains, and only these domains, would be regulated by some international mess of a bureaucracy. Their rules would include a minimum number of countries you have to do business in before getting a domain like that. For example, you must do so many millions worth of business in, say, 10 countries in order to get a .com.
When limited to the U.S., these entities would have to get a
.com.state.us address, and the name must be the name of the business (or entity). Registered trademarks would get a .tm.us. Federal government sites would get a .gov.us. State governments would get a .gov.state.us. County and city governments would be further organized in a hierarchy.
In short, by using rules that make sense to KNOWLEDGEABLE computer folks, a very large mess wouldn't exist now. Huge technical problems would be reduced to nothing. Legal problems would nearly go away too--we wouldn't have people fighting over domain names and stupid stuff like that. (If there was a fight, it could only happen between people in the same city (or state in the worst case) and there would be no authority to handle it--all names are first-come-first-serve. (The protection is already in place, since you have to own the appropriate trademark or have the appropriately named business in order to have that domain name.) And if all else fails, one party could buy the name off the other, as was done in the past.)
The way the system is today causes another big HUGE chunk of bureaucracy that is totally unnecessary and costs a lot of money and headaches. OH WELL.
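The scheme proposed above can be sketched as a small shell helper (the function name and the lowercase/hyphen naming rules are my own assumptions, purely for illustration):

```shell
# Build a hypothetical name.city.state.us domain under the proposed hierarchy.
# Lowercases everything and turns spaces into hyphens.
make_us_domain() {
  name=$(echo "$1" | tr 'A-Z ' 'a-z-')
  city=$(echo "$2" | tr 'A-Z ' 'a-z-')
  state=$(echo "$3" | tr 'A-Z' 'a-z')
  echo "${name}.${city}.${state}.us"
}

make_us_domain "Brooklyn Cheese House" "Brooklyn" "NY"
# -> brooklyn-cheese-house.brooklyn.ny.us
```

Under rules like these, name collisions could only happen between two businesses in the same city, which is the point of the hierarchy.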
Yes yes, it already was, but got ignored (Score:2)
Web-era veterans might remember Netherlands BBS, originally at netherlands.ypsi.mi.us ('ypsi.mi' because it was located in Ypsilanti, Michigan). Eventually it was changed to nether.net. This of course worked in NetherNet's favor, because they then had a shorter hostname, and users did less typing, and there was much rejoicing.
Regardless, the current system is hardly bureaucratic -- its the opposite, uncontrolled and manipulated for profit over most beneficial function. And the solution of throwing more TLDs at the problem will only end up spanning the problem across TLDs. Sure, ICANN tells the TLD applicants who were lucky to win their lottery disguised as a review process that they have to limit who can get domains under their TLDs, but if ICANN's pattern of bending to commercial pressure continues, I expect that rule to hold for two years max.
Without a high-level directory, it was inevitable. (Score:2)
The problem is that the directory technology never matured fast enough, and was never adopted globally enough for it to serve that purpose. We're just now getting LDAP to start serving in the capacity of an e-mail directory, but short of the numerous incompatible search engines and proprietary "keyword" services out there, nothing has been able to do the same thing to sit above the DNS layer in a sane fashion.
If things had turned out the way they should have, DNS space wouldn't be traded like such a huge commodity, and we wouldn't have everybody and their freakin' pets with their own domain space sitting right off the top-level domains like we do today. It would end up as a nice hierarchy, but nobody but the techies would even care because it's not something generally exposed to the public. It's just ridiculous the way things are today.
Another poster mentioned that having his identity associated with a geographical domain name would suck since he'd have to rename everything when he moved. If things had been done right, this wouldn't really be an issue. The only naming that would need to change would be the naming of the Internet hosts that would move with him. If he was using a geographically-identified ISP and moved, he'd probably need to get a new ISP anyway, so his e-mail address would have changed. If he was hosting his own e-mail on his geographically-identified hosts, his hosts would have to move with him, so not only would he have to renumber, but yah, he'd have to rename as well. This really wouldn't have been as bad as it seems, since a higher-level directory would be what's linking his name and identity with his e-mail address, so after changing the address, a quick trip to the directory's update function would still allow him to receive his e-mail.
I really don't see a huge problem with the top-level generic domains like
But who knows, this may force a globally-recognized directory of proper names to services. As the number of "equivalent" top-level domain names increase, so does the ambiguity. Users are going to start using search engines more to locate an organization, which I see is a good thing, and the overall value of DNS space will begin to diminish.
Things will get there eventually, but many of us will be banging our heads on our desks in the mean time...
Corruption of US DoC (Score:2)
The United Nations World Intellectual Property Organization and the United States Department of Commerce are hiding the simple solution to trademark and domain name problem.
Virtually every word is trademarked - Alpha to Zeta or Aardvark to Zulu, most many times over (even in same country).
Trademarks are for the good of the people, as well as business. Most trademarks share the same words with many others in a different business and/or country. For example, 'cat' is used in 1746 trademarks in the USA alone. The authorities are allowing certain trademarks to be abused by their owners, giving them dominance over others. This is against unfair competition law.
The US DoC do this purposefully, also knowing they abridge people's right to use these words - even the common words you learnt with your A B C's - like apple, ball and cat. This violates the First Amendment.
I have been in contact with various Government bodies (US and UK) and attorneys for quite some time now - they understand arguments perfectly. Nobody has denied the assertions made, not even UN WIPO.
Like I say, most trademarks share their name or initials with many others. When authorities could put trademark identity beyond shadow of doubt, they are either devoid of intelligence or corrupt. I have come to the logical conclusion that they are corrupt.
Please visit WIPO.org.uk [wipo.org.uk] to see the simple solution.
Why is this good? (Score:2)
Suppose someone registers ibm.com.ru who isn't IBM. Suppose that country doesn't care about that person infringing on IBM's trademark? Now suppose someone in that country assumes ibm.com.ru is the country-specific site for IBM. What if IBM's country-specific site is ru.ibm.com (which is how I think it SHOULD be)? I can definitely see a problem here.
Does IBM register EVERY IBM.com.TLD as well as IBM.com? Should they have to do so? Seems ridiculous to me. TLDs should say something about the type/business of a company (which they no longer do) instead of stressing location. Furthermore, things will get muddier as the managing bodies decide to force stupid things later on for "more organization" like "company.businesstype.city.state.country". Type rather than location isn't perfect, but it's BETTER than what is being proposed.
What we need are BETTER TLDs.
.media - for TV, Radio, Newspapers, and the like
.isp - for ISPs, since the
.retail - for retail businesses like Amazon, Sears, B&N, etc.
.pr0n - you get the idea
.linux - of course!
and, of course, some sort of governing body which FORCES the general business of the company to be related to the domain, or else forfeit their domain name (after a reasonable appeals process, of course). The existing
Re:Why is this good? (Score:2)
Timely (Score:2)
Now it's just a press release. Press Releases for Nerds, Stuff That Mattered Last Month.
Re:/.us (Score:1)
Re:Ack. (Score:2)
I'm not quite sure what the point of your post was, but assuming you think that online voting is a bad thing because of the above... well sir/ma'am, I guess we should outlaw telephones, too. They've been used countless times to defraud people who should know better into giving up personal information.
Then again, these days ignorance IS a legitimate excuse for stupidity. Sigh...
Re:I am pervert! (Score:2)
//rdj
Punishment. (Score:2)
Hey, what did I do?
(Thanks for the idea, though. Hmmm.)
--saint
Re:Seriously (Score:2)
ICANN doesn't enforce any use of country domains. Only a few of them, like
Re:Why not localize the browsers instead? (Score:2)
You certainly don't dial the US country code to make a US to US call, neither should you do it for the web.
Might I remind you that the US country code is '1' and you dial it every time you make a long-distance call... | https://slashdot.org/story/01/11/25/1441200/us-domains-coming-in-2002 | CC-MAIN-2017-26 | refinedweb | 5,451 | 75.71 |