43 JSON Interview Questions And Answers 2019

Here Coding Compiler shares a list of JSON interview questions and answers for freshers and experienced candidates. This list of JSON questions will help you crack your next JSON job interview. All the best for your future, and happy learning.

JSON Interview Questions
- What does JSON stand for?
- What is JSON?
- What programming languages support JSON?
- Is JSON a language?
- What are the properties of JSON?
- Why do we use JSON?
- What is JSON data?
- What is the difference between XML and JSON?
- Why is the JSON format better than XML?
- Is JSON a markup language?

JSON Interview Questions And Answers

JSON Interview Questions # 1) What does JSON stand for?

Answer) JSON stands for "JavaScript Object Notation".

JSON Interview Questions # 2) What is JSON?

Answer) JSON is a lightweight data-interchange format. It is easy for humans to read and write, and easy for machines to parse and generate. It is based on a subset of the JavaScript programming language (Standard ECMA-262).

JSON Interview Questions # 3) What programming languages support JSON?

Answer) JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others.

JSON Interview Questions # 4) Is JSON a language?

Answer) JSON is a data format. It could be classified as a language, but not a programming language. Its relationship to JavaScript is that it shares its syntax (more or less) with a subset of JavaScript literals.

JSON Interview Questions # 5) What are the properties of JSON?

Answer) JSON is lightweight, text-based, and language independent. It is easy for humans to read and write, and easy for machines to parse and generate.

JSON Interview Questions For Freshers

JSON Interview Questions # 6) Why do we use JSON?

Answer) The JSON format is often used for serializing and transmitting structured data over a network connection.
It is used primarily to transmit data between a server and a web application, serving as an alternative to XML.

JSON Interview Questions # 7) What is JSON data?

Answer) JSON, or JavaScript Object Notation, is a minimal, readable format for structuring data. In JSON, data is simply information. It is used primarily to transmit data between a server and a web application, as an alternative to XML.

JSON Interview Questions # 8) What is the difference between XML and JSON?

Answer) Both are text formats for exchanging data, but JSON is a data-interchange format while XML is a markup language. JSON is less verbose, maps directly onto objects and arrays, and is easier to parse; XML supports attributes, namespaces, and schema validation.

JSON Interview Questions # 9) Why is the JSON format better than XML?

Answer) JSON and XML use different formats. Compared with XML, JSON is easier to write and to use in applications. The structure of an XML document can be specified and validated with an XML DTD or XML Schema (XSD). JSON is a data-exchange format that keeps gaining popularity because it is the natural format for JavaScript applications. It is basically an object-and-array notation, and its syntax is so simple that it can be learned easily.

JSON Interview Questions # 10) Is JSON a markup language?

Answer) JSON is like XML in that it is used to structure data in a text format and is commonly used to exchange data over the Internet, but JSON is not a markup language. JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write.

Top JSON Interview Questions

11) What is JSON text?

Answer) A JSON text is a sequence of tokens formed from Unicode code points that conforms to the JSON value grammar. The set of tokens includes six structural tokens, strings, numbers, and three literal name tokens.

The six structural tokens:
[ U+005B left square bracket
{ U+007B left curly bracket
] U+005D right square bracket
} U+007D right curly bracket
: U+003A colon
, U+002C comma

The three literal name tokens:
true U+0074 U+0072 U+0075 U+0065
false U+0066 U+0061 U+006C U+0073 U+0065
null U+006E U+0075 U+006C U+006C

Insignificant whitespace is allowed before or after any token.
Whitespace is any sequence of one or more of the following code points: character tabulation (U+0009), line feed (U+000A), carriage return (U+000D), and space (U+0020). Whitespace is not allowed within any token, except that space is allowed in strings.

JSON Interview Questions # 12) What is a JSON value?

Answer) A JSON value can be an object, array, number, string, true, false, or null.

13) What is JSON syntax?

Answer) JSON syntax is derived from JavaScript object notation syntax: data is in name/value pairs, data is separated by commas, curly braces hold objects, and square brackets hold arrays.

JSON Interview Questions # 14) What is a JSON value?

Answer) In JSON, a value holds some data. A value can be a string in double quotes, a number, true, false, null, an object, or an array, and these structures can be nested. Values in JSON must be one of the following data types:
- a string
- a number
- an object (JSON object)
- an array
- a boolean
- null

15) What is a JSON array?

Answer) An array structure is a pair of square bracket tokens surrounding zero or more values: an array is an ordered collection of values that begins with [ (left bracket) and ends with ] (right bracket), with the values separated by , (comma). The JSON syntax does not attach any specific meaning to the ordering of the values, but the array structure is often used in situations where the ordering does carry some semantics.

Interview Questions And Answers on JSON

JSON Interview Questions # 16) What is a number in JSON?

Answer) A JSON number is very much like a C or Java number, except that the octal and hexadecimal formats are not used. A number is a sequence of decimal digits with no superfluous leading zero. It may have a preceding minus sign (U+002D), a fractional part prefixed by a decimal point (U+002E), and an exponent prefixed by e (U+0065) or E (U+0045) and optionally + (U+002B) or - (U+002D).
The digits are the code points U+0030 through U+0039. Numeric values that cannot be represented as sequences of digits (such as Infinity and NaN) are not permitted.

17) What is a JSON string?

Answer) A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes; it is very much like a C or Java string. More precisely, a string is a sequence of Unicode code points wrapped with quotation marks (U+0022). All code points may be placed within the quotation marks except the ones that must be escaped: the quotation mark (U+0022), the reverse solidus (U+005C), and the control characters U+0000 to U+001F. Some characters have two-character escape sequences:
\" represents the quotation mark character (U+0022).
\\ represents the reverse solidus character (U+005C).
\/ represents the solidus character (U+002F).
\b represents the backspace character (U+0008).
\f represents the form feed character (U+000C).
\n represents the line feed character (U+000A).
\r represents the carriage return character (U+000D).
\t represents the character tabulation character (U+0009).

JSON Interview Questions # 18) What is a JSON object?

Answer) An object is an unordered set of name/value pairs. An object begins with { (left brace) and ends with } (right brace). Each name is followed by : (colon), and the name/value pairs are separated by , (comma).

19) What is JSON-RPC in Java?

Answer) JSON-RPC is a simple remote procedure call protocol similar to XML-RPC, except that it uses the lightweight JSON format instead of XML (so it is much faster to process).

JSON Interview Questions # 20) What is a JSON parser?

Answer) A JSON parser turns JSON text into a data structure. When receiving data from a web server, the data is always a string; we use JSON.parse() to parse that string, and it becomes a JavaScript object.
The JSON.parse() method parses a JSON string, constructing the JavaScript value or object described by the string. An optional reviver function can be provided to perform a transformation on the resulting object before it is returned.

Advanced JSON Interview Questions

21) Which browsers provide native JSON support?

Answer) All modern browsers support the native JSON object, including Internet Explorer 8+, Firefox 3.5+, Safari 4+, Chrome, and Opera 10.5+.

JSON Interview Questions # 22) What is the difference between JSON.parse and JSON.stringify?

Answer) They are the inverse of each other. JSON.stringify() serializes a JavaScript object or array into a JSON string, whereas JSON.parse() deserializes a JSON string into a JavaScript object.

23) What is the MIME type of JSON?

Answer) The MIME media type for JSON text is application/json. The default encoding is UTF-8.

JSON Interview Questions # 24) What is the use of JSON.stringify?

Answer) The JSON.stringify() method converts a JavaScript value to a JSON string, optionally replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.

25) What does JSON.parse do?

Answer) The JSON.parse() method parses a JSON string, constructing the JavaScript value or object described by the string. An optional reviver function can be provided to perform a transformation on the resulting object before it is returned.

JSON Interview Questions # 26) What is serialization in JavaScript?

Answer) Serialization is the process of converting a JavaScript value into a format that can be stored or transmitted and later reconstructed; in practice this usually means producing a JSON string with JSON.stringify().

27) What is a polyfill?

Answer) The JSON object is not supported in older browsers. A polyfill is a piece of code inserted at the beginning of your scripts that allows the use of the JSON object in implementations which do not natively support it (like Internet Explorer 6).

JSON Interview Questions # 28) What is the toJSON() method in JSON?

Answer) The toJSON() method returns a string representation of the Date object; it is the method JSON.stringify() calls when serializing a Date.

29) What is JSONP?

Answer) JSONP stands for "JSON with Padding".
JSONP is a method for requesting JSON data without worrying about cross-domain issues. JSONP does not use the XMLHttpRequest object; it uses a <script> tag instead.

JSON Interview Questions # 30) What is the difference between JSON and JSONP?

Answer) JSON is a data format. JSONP is a technique for retrieving JSON from another domain: the response wraps the JSON data in a call to a callback function and is loaded via a <script> tag, which is not subject to the same-origin policy.

JSON Interview Questions For Experienced

31) What is serialization and deserialization in JSON?

Answer) Serialization converts an object into a JSON string (for example with JSON.stringify()); deserialization parses a JSON string back into an object (for example with JSON.parse()).

JSON Interview Questions # 32) What is serialization of an object?

Answer) To serialize an object means to convert its state to a byte stream so that the byte stream can be reverted back into a copy of the object.

33) What is the use of JSON in Java?

Answer) The Java API for JSON Processing provides portable APIs to parse, generate, transform, and query JSON. JSON (JavaScript Object Notation) is a lightweight, text-based, language-independent data-exchange format that is easy for humans and machines to read and write.

JSON Interview Questions # 34) Why do we use JSON in PHP?

Answer) A common use of JSON is to read data from a web server and display it in a web page. PHP's json_encode() and json_decode() functions make it easy to exchange JSON data between the client and a PHP server.

35) What is a JSON formatter?

Answer) A JSON formatter helps debugging JSON data by formatting and validating it so that it can easily be read by human beings.

JSON Interview Questions # 36) What is a JSON viewer?

Answer) A JSON viewer converts JSON strings to a friendly, readable format.

37) What is a JSON validator?

Answer) A JSON validator helps debugging JSON data by validating (and usually formatting) it so that errors are easy to spot.

JSON Interview Questions # 38) Why do we use JSON in Android?

Answer) JSON stands for JavaScript Object Notation. It is an independent data-exchange format and a good alternative to XML. Android provides four classes to manipulate JSON data: JSONArray, JSONObject, JSONStringer, and JSONTokener.

39) Why do we use JSON in Python?
Answer) Python is commonly used to encode and decode JSON objects. In the standard library's json module, json.dumps() encodes a Python object into a JSON string representation, and json.loads() decodes a JSON-encoded string into a Python object.

JSON Interview Questions & Answers

JSON Interview Questions # 40) What is JSON in JavaScript?

Answer) Because JSON is derived from the JavaScript programming language, it is a natural choice to use as a data format in JavaScript. JSON is short for JavaScript Object Notation.

41) What is JSON Schema?

Answer) JSON Schema is a specification, itself expressed in JSON, for defining the structure of JSON data.

JSON Interview Questions # 42) What are the advantages of JSON?

Answer) JSON is lightweight and compact, easy for humans to read and write, fast for machines to parse and generate, language independent, and natively supported in JavaScript.

43) Can you write an example code in JSON?

Answer) The following example shows how to use JSON to store information related to books based on their topic and edition:

{
  "book": [
    {
      "id": "01",
      "language": "Java",
      "edition": "third",
      "author": "Herbert Schildt"
    },
    {
      "id": "07",
      "language": "C++",
      "edition": "second",
      "author": "E.Balagurusamy"
    }
  ]
}
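The JSON.parse()/JSON.stringify() round trip discussed in the questions above can be sketched in a few lines of JavaScript. The object shape here is invented for illustration; the reviver shown is optional and merely demonstrates the transformation hook.

```javascript
// Serialize a JavaScript object into a JSON string.
const book = { id: "01", language: "Java", edition: "third" };
const text = JSON.stringify(book);

// Deserialize the string back into an object. The optional reviver
// is called for every key/value pair as the string is parsed.
const parsed = JSON.parse(text, (key, value) =>
  key === "language" ? value.toUpperCase() : value
);

console.log(text);            // {"id":"01","language":"Java","edition":"third"}
console.log(parsed.language); // JAVA
```

JSON.stringify() also accepts a replacer (function or property whitelist) as its second argument, mirroring the reviver on the parse side.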
https://codingcompiler.com/json-interview-questions-answers/
"That it could put files wherever it wanted"

They could have implemented VirtualStore as early as Windows 95 as a stop-gap measure for write-anywhere programs. Sure, it's an approach with its own problems, but sometimes you have to trade something in for security.

"low-level hardware access for sound, graphics, etc"

Trap the hardware interrupts in software, then emulate the low-level I/O routines at the OS level. Possibly with a performance penalty, but again: you have to decide where your priorities are. And yes, I know hindsight is 20/20. Maybe not all these things were obvious back then. I still think that, security-wise, Microsoft spent the whole 90s and a good part of the 00s asleep at the wheel.

"to this date, *nix does not support well the concept of application ownership of a file, which leads to programs requiring their own user account, which is another kludge"

Would you care to explain what is kludgey in using the uid namespace to also provide per-application ownership? Arguably, it is simply a matter of implementation simplicity; you have a single namespace instead of two. That a given uid might not correspond to an actual, physical user seems to be more of a semantic problem than a design one.

"I'm sorry, but getting paid for your work, in a world where money is necessary to survive, is NOT morally wrong."

I'm no hardline Stallmanite, but what does this have to do with free software? Nothing in the definition of free software precludes you from making money through it. At most, it forbids certain ways of making money with it.

"With 'openness' the market will do the job just fine."

The question is whether openness is possible (or rather: assured) in a free market without regulation. Actually, the problem is that "free market", as currently (ab)used, is an overloaded term that stands for two completely different concepts: absence of regulation, and perfect competition with no barriers to entry nor asymmetric information.
That the second is a necessary consequence of the first is a highly contentious point, because, contrary to what libertarians believe, the state is not the only entity that is able to distort a market.

"My hypothesis is this: the more high-level information you can give your compiler, the better it can optimize your programs."

It is worthwhile to point out that GCC actually adds some non-standard facilities to the C language that enable the programmer to do so; witness, for instance, the use of the likely() and unlikely() macros (the actual GCC builtins have different names) in the Linux kernel to nudge the compiler into optimizing for the common case of an if statement.

Even if your argument held water (which I don't think it does in a properly managed network), it seems rather silly to trade global end-to-end connectivity and other IPv6 niceties such as autoconfiguration for the convenience of being able to memorize network addresses or pass them over the phone.

"citizens of Libya technically declared war on the US"

Citizens do not declare war. States do. Are you suggesting that, if a bunch of Americans vandalized a foreign embassy on US soil, that should technically count as a declaration of war on that country by the United States of America?

Computers are useless. They can only give you answers. -- Pablo Picasso
http://slashdot.org/~doshell/tags/notthebest
I know this might not sound intelligent, but I am new to this and I really want to figure it out. I have been coding in Java recently, and I also have some functions implemented in Scala. I came across this article, "Interop Between Java and Scala", which says it is possible to mix Java and Scala. Since I am coding in IntelliJ IDEA, I am wondering if there is any way to bring in Scala classes and use them within my Java code. I have already included scala-library.jar using:

<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.3</version>
</dependency>

- Is there any way to bring in Scala classes and use them within my Java code?
- Is it possible to use Scala classes inside Java, or is there no way at all?

Yes and yes. There are two scenarios:

First, you have an existing Scala library; then you can just use it in your existing Maven build, as you have shown with the standard Scala library.

Second (and I assume that is your actual question?), you have a project which contains both Java and Scala source code. In that case you need to compile both types of source files. In IntelliJ that would require that you somehow add the Scala facet to your project. Unfortunately, I am not a Maven expert, so I cannot tell you how this works with a Maven build. But if you use sbt and an sbt-based IntelliJ project, then IntelliJ will build the entire project with an sbt build server which is capable of compiling both your Java and Scala sources. You would have a directory structure like this:

project/
    build.properties
build.sbt
src/
    main/
        scala/
            foo/
                Foo.scala
        java/
            foo/
                Bar.java

For example, as project/build.properties:

sbt.version=0.13.11

As build.sbt:

scalaVersion := "2.11.8"

As Foo.scala:

package foo

class Foo {
  def test(): Unit = println("Hello from Scala!")
}

And as Bar.java:

package foo;

public class Bar {
  public static void main(String[] args) {
    final Foo f = new Foo();
    f.test();
  }
}
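For the Maven side the answer leaves open: the usual approach (stated here as an assumption, not from the original answer) is the scala-maven-plugin, which teaches a Maven build to compile mixed Java/Scala source trees. A minimal sketch for the pom.xml build section; the version shown is only a plausible release, check Maven Central for a current one:

```xml
<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  <!-- version is an assumption; pick a current release -->
  <version>4.8.1</version>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With the plugin bound to the compile goals, `mvn compile` handles both `src/main/scala` and `src/main/java`, and IntelliJ can simply import the Maven project.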
https://codedump.io/share/nqw6mfgMJOMV/1/how-to-import-scala-class-into-intellij-idea
A test program dies with an error when used with the arguments shown, but cutting back the argument lists by even one element allows it to work OK. So does taking out the hold(False). This is with Mac OS X, Apple Python 2.3, Matplotlib version 0.71, __revision__ '$Revision: 1.30 $'.

--Eliot Smith, Indiana University

#------begin test program--------
from pylab import *

a  = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
at = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]
b  = [0.04, 0.06, 0.05, 0.07, 0.03, 0.07, 0.08, 0.05, 0.02, 0.07, 0.07,
      0.04, 0.06, 0.03, 0.01, 0.03, 0.01, 0.04, 0.03, 0.04, 0.04]
bt = [0.04, 0.06, 0.05, 0.07, 0.03, 0.07, 0.08, 0.05, 0.02, 0.07, 0.07,
      0.04, 0.06, 0.03, 0.01, 0.03, 0.01, 0.04, 0.03, 0.04]

print len(a), len(b)

hold(False)
figure(1)
loglog(a, b, 'bo')
show()

# this fails with
#   File "/platlib/matplotlib/axes.py", line 1169, in draw
#   ValueError: Cannot take log of nonpositive value
# replacing a, b in the loglog call with at, bt (lists just one
# item shorter) it no longer fails
# Mac OS X, Apple Python 2.3
# Matplotlib version 0.71
# __revision__ '$Revision: 1.30 $'
#----------end test program--------------------

I am unable to get matplotlib animations to work properly. Some of the data from the previous frame is getting included. I have included some example code that shows this. When draw() is called in the loop, it draws the data from the previous frame for y<0.15 (ca), and from the current frame for larger y. The final show() does *not* lag even if set_ydata has not been called since the last draw(). This is with version 0.72. Any ideas about what is happening?

-- code follows

# animate increasing terms in fourier expansion of y=x
from pylab import *
from time import sleep

samples = 1000
max_k = 3
x = arange(0.0, 1.0, 1.0/samples)
s = zeros((max_k, samples), Float)
for k in range(1, max_k):
    s[k] = s[k-1] + (-1)**(k+1)*sin(pi*x*k)/k

ion()  # to force window creation
line, = plot(x, x*1.3, linewidth=1.0)
grid(True)
ioff()

for k in range(1, max_k):
    line.set_ydata(2/pi*s[k])
    title('k = ' + str(k))
    draw()
    sleep(1)

show()

-- j
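The series being animated in the second snippet is the Fourier sine expansion of y = x on (0, 1). Its partial sums can be checked in plain NumPy, independently of any plotting; the sample points and term counts below are chosen only for illustration.

```python
import numpy as np

# Partial sums of the Fourier sine expansion of y = x on (0, 1):
#   x = (2/pi) * sum_{k>=1} (-1)**(k+1) * sin(k*pi*x) / k
def fourier_x(x, terms):
    s = np.zeros_like(x)
    for k in range(1, terms + 1):
        s += (-1) ** (k + 1) * np.sin(np.pi * k * x) / k
    return 2 / np.pi * s

# Stay away from x = 1, where the periodic extension jumps (Gibbs effect).
x = np.linspace(0.05, 0.95, 19)
err_small = np.abs(fourier_x(x, 10) - x).max()
err_large = np.abs(fourier_x(x, 200) - x).max()

# More terms should give a better approximation away from the jump.
assert err_large < err_small
# At x = 0.5 the series reduces to the Leibniz series for pi/4.
assert abs(fourier_x(np.array([0.5]), 200)[0] - 0.5) < 0.01
```

This is exactly the quantity `2/pi*s[k]` that the original post feeds to set_ydata on each frame.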
https://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200503&viewday=14
Minchan Kim <minchan@kernel.org> writes:

> On Fri, Apr 21, 2017 at 08:29:30PM +0800, Huang, Ying wrote:
>> "Huang, Ying" <ying.huang@intel.com> writes:
>>
>> > Minchan Kim <minchan@kernel.org> writes:
>> >
>> >> On Wed, Apr 19, 2017 at 04:14:43PM +0800, Huang, Ying wrote:
>> >>> Minchan Kim <minchan@kernel.org> writes:
>> >>>
>> >>> > Hi Huang,
>> >>> >
>> >>> > On Fri, Apr 07, 2017 at 02:49:01PM +0800, Huang, Ying wrote:
>> >>> >> From: Huang Ying <ying.huang@intel.com>
>> >>> >>
>> >>> >>  void swapcache_free_entries(swp_entry_t *entries, int n)
>> >>> >>  {
>> >>> >>  	struct swap_info_struct *p, *prev;
>> >>> >> @@ -1075,6 +1083,10 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
>> >>> >>
>> >>> >>  	prev = NULL;
>> >>> >>  	p = NULL;
>> >>> >> +
>> >>> >> +	/* Sort swap entries by swap device, so each lock is only taken once. */
>> >>> >> +	if (nr_swapfiles > 1)
>> >>> >> +		sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
>> >>> >
>> >>> > Let's think on other cases.
>> >>> >
>> >>> > There are two swaps and they are configured by priority so a swap's usage
>> >>> > would be zero unless other swap used up. In case of that, this sorting
>> >>> > is pointless.
>> >>> >
>> >>> > As well, nr_swapfiles is never decreased so if we enable multiple
>> >>> > swaps and then disable until a swap is remained, this sorting is
>> >>> > pointless, too.
>> >>> >
>> >>> > How about lazy sorting approach? IOW, if we found prev != p and,
>> >>> > then we can sort it.
>> >>>
>> >>> Yes. That should be better. I just don't know whether the added
>> >>> complexity is necessary, given the array is short and sort is fast.
>> >>
>> >> Huh?
>> >>
>> >> 1. swapon /dev/XXX1
>> >> 2. swapon /dev/XXX2
>> >> 3. swapoff /dev/XXX2
>> >> 4. use only one swap
>> >> 5. then, always pointless sort.
>> >
>> > Yes. In this situation we will do unnecessary sorting. What I don't
>> > know is whether the unnecessary sorting will hurt performance in real
>> > life. I can do some measurement.
>>
>> I tested the patch with 1 swap device and 1 process to eat memory
>> (remove the "if (nr_swapfiles > 1)" for test). I think this is the
>> worst case because there is no lock contention. The memory freeing time
>> increased from 1.94s to 2.12s (increase ~9.2%). So there is some
>> overhead for some cases. I change the algorithm to something like
>> below,
>>
>>  void swapcache_free_entries(swp_entry_t *entries, int n)
>>  {
>>  	struct swap_info_struct *p, *prev;
>>  	int i;
>> +	swp_entry_t entry;
>> +	unsigned int prev_swp_type;
>>
>>  	if (n <= 0)
>>  		return;
>>
>> +	prev_swp_type = swp_type(entries[0]);
>> +	for (i = n - 1; i > 0; i--) {
>> +		if (swp_type(entries[i]) != prev_swp_type)
>> +			break;
>> +	}
>
> That's really what I want to avoid. For many swap usecases,
> it adds unnecessary overhead.
>
>> +
>> +	/* Sort swap entries by swap device, so each lock is only taken once. */
>> +	if (i)
>> +		sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
>>  	prev = NULL;
>>  	p = NULL;
>>  	for (i = 0; i < n; ++i) {
>> -		p = swap_info_get_cont(entries[i], prev);
>> +		entry = entries[i];
>> +		p = swap_info_get_cont(entry, prev);
>>  		if (p)
>> -			swap_entry_free(p, entries[i]);
>> +			swap_entry_free(p, entry);
>>  		prev = p;
>>  	}
>>  	if (p)
>>
>> With this patch, the memory freeing time increased from 1.94s to 1.97s.
>> I think this is good enough. Do you think so?
>
> What I mean is as follows (I didn't test it at all):
>
> With this, sort entries if we found multiple entries in current
> entries. It adds some condition checks for non-multiple swap
> usecase but it would be cheaper than the sorting.
> And it adds a [un]lock overhead for multiple swap usecase but
> it should be a compromise for the single-swap usecase which is more
> popular.

How about the following solution?
It can avoid the [un]lock overhead and
the double lock issue for the multiple-swap use case and has good performance
for the one-swap use case too.

Best Regards,
Huang, Ying

From 7bd903c42749c448ef6acbbdee8dcbc1c5b498b9 Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@intel.com>
Date: Thu, 23 Feb 2017 13:05:20 +0800
Subject: [PATCH -v5] mm, swap: Sort swap entries before free

To reduce the lock contention of swap_info_struct->lock when freeing
swap entries. The freed swap entries will be collected in a per-CPU
buffer first, and be really freed later in batch. During the batch
freeing, if the consecutive swap entries in the per-CPU buffer belong
to the same swap device, the swap_info_struct->lock needs to be
acquired/released only once, so that the lock contention could be
reduced greatly. But if there are multiple swap devices, it is
possible that the lock may be unnecessarily released/acquired because
the swap entries belonging to the same swap device are non-consecutive
in the per-CPU buffer.

To solve the issue, the per-CPU buffer is sorted according to the swap
device before freeing the swap entries. Tests show that the time
spent by swapcache_free_entries() could be reduced after the patch.

With the patch, the memory (some swapped out) free time reduced
13.6% (from 2.59s to 2.28s) in the vm-scalability swap-w-rand test
case with 16 processes. The test is done on a Xeon E5 v3 system. The
swap device used is a RAM simulated PMEM (persistent memory) device.
To test swapping, the test case creates 16 processes, which allocate
and write to the anonymous pages until the RAM and part of the swap
device is used up; finally the memory (some swapped out) is freed
before exit.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Tim Chen <tim.c.chen@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>

v5:
- Use a smarter way to determine whether sort is necessary.
v4:
- Avoid unnecessary sort if all entries are from one swap device.
v3:
- Add some comments in code per Rik's suggestion.
v2:
- Avoid sorting swap entries if there is only one swap device.
---
 mm/swapfile.c | 43 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 71890061f653..10e75f9e8ac1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -37,6 +37,7 @@
 #include <linux/swapfile.h>
 #include <linux/export.h>
 #include <linux/swap_slots.h>
+#include <linux/sort.h>
 
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -1065,20 +1066,52 @@ void swapcache_free(swp_entry_t entry)
 	}
 }
 
+static int swp_entry_cmp(const void *ent1, const void *ent2)
+{
+	const swp_entry_t *e1 = ent1, *e2 = ent2;
+
+	return (int)(swp_type(*e1) - swp_type(*e2));
+}
+
 void swapcache_free_entries(swp_entry_t *entries, int n)
 {
 	struct swap_info_struct *p, *prev;
-	int i;
+	int i, m;
+	swp_entry_t entry;
+	unsigned int prev_swp_type;
 
 	if (n <= 0)
 		return;
 
 	prev = NULL;
 	p = NULL;
-	for (i = 0; i < n; ++i) {
-		p = swap_info_get_cont(entries[i], prev);
-		if (p)
-			swap_entry_free(p, entries[i]);
+	m = 0;
+	prev_swp_type = swp_type(entries[0]);
+	for (i = 0; i < n; i++) {
+		entry = entries[i];
+		if (likely(swp_type(entry) == prev_swp_type)) {
+			p = swap_info_get_cont(entry, prev);
+			if (likely(p))
+				swap_entry_free(p, entry);
+			prev = p;
+		} else if (!m)
+			m = i;
+	}
+	if (p)
+		spin_unlock(&p->lock);
+
+	if (likely(!m))
+		return;
+
+	/* Sort swap entries by swap device, so each lock is only taken once. */
+	sort(entries + m, n - m, sizeof(entries[0]), swp_entry_cmp, NULL);
+	prev = NULL;
+	for (i = m; i < n; i++) {
+		entry = entries[i];
+		if (swp_type(entry) == prev_swp_type)
+			continue;
+		p = swap_info_get_cont(entry, prev);
+		if (likely(p))
+			swap_entry_free(p, entry);
 		prev = p;
 	}
 	if (p)
-- 
2.11.0
https://lkml.org/lkml/2017/4/26/316
Java Hello World Program to Learn Java Programming

We will begin developing a program in Java. The first program will print "Hello World". We will learn how to create a Java program in an editor and compile and run it from the command prompt. We will show you a step-by-step process of creating the Java Hello World program. So, let us begin this tutorial.

Simple Java Hello World Program

We can simplify the process of creating and running a Java program into three steps:

1. Create the program by typing it into a text editor (like Notepad), and save it to a file named HelloWorld.java.
2. Compile the file by typing "javac HelloWorld.java" in the command prompt window.
3. To execute or run the program, type "java HelloWorld" in the command prompt window.

Print Hello World in Java

The below program is the simplest Java program, printing "Hello World" to the screen. Let us try to understand every bit of the code step by step.

/* This is a simple Java program.
   FileName : "HelloWorld.java". */
public class HelloWorld {
    // Your program begins by calling main().
    // Prints "Hello, World" to the terminal window.
    public static void main(String args[]) {
        System.out.println("Hello World");
    }
}

Output:

Hello World

The above program consists of three primary components: the class definition, the main() method, and comments. The following section will give you a basic understanding of this code:

1. Class definition: We use the 'class' keyword to declare a new class. We can also use an access specifier like public before the class keyword:

public class HelloWorld

2. HelloWorld is the name of the class, which is an identifier in Java. The class definition contains the members of the class, enclosed within curly braces {}.

3.
Java main() method: Every application in the Java programming language must contain a main() method whose signature is:

public static void main(String[] args)

- public: We declare the main method as public so that the JVM can execute it from anywhere.
- static: We declare the main method as static so that the JVM can call it directly without creating an object of the class. Note: We can write the modifiers public and static in any order.
- void: The main method does not return anything, therefore we declare it as void.
- main(): main() is the name that is already configured in the JVM.
- String[]: The main() method accepts a single argument, which is an array of elements of type String.

The main method is the entry point for any Java application, as in C/C++. The main() method will subsequently invoke all the other methods required by the program.

Below is the next line of code, present inside the main() method:

System.out.println("Hello World");

This line prints the string "Hello World", followed by a new line, on the screen. We get the output on the screen because of the built-in println() method. System is a predefined class in Java that provides access to the system, and out is a variable of an output stream type that is connected to the console.

4. Java comments: Comments in Java can be either multi-line or single-line.

/* This is a simple Java program.
   Call this file "HelloWorld.java". */

This is a multi-line comment. It must begin with /* and end with */. For a single-line comment, we can directly use // as in C or C++.

Important Points

- The name of the class in the program is HelloWorld, which is the same as the name of the file, HelloWorld.java. These matching names are not a coincidence: in Java, all code must reside inside a class, and a source file can contain at most one public class, which here is the one holding the main() method.
- By convention, the name of the class containing the main method should match the name of the file that holds the program.

Compiling a Java Program

1. First, we need to set up the environment. After that, we can open a terminal or command prompt (on Windows or Unix) and go to the directory/folder where we have saved the file HelloWorld.java.
2. Now, to compile the Java HelloWorld program, we need to execute the compiler, javac, specifying the name of the source file on the command line, like:

javac HelloWorld.java

3. The compiler creates a compiled file called HelloWorld.class in the present working directory. This class file contains the bytecode version of the program.

Running a Java Program

To execute a Java program, we need to call the JVM (Java Virtual Machine) using the java command, specifying the name of the class file on the command line, like:

java HelloWorld

This will print "Hello World" on the terminal screen.

Steps to Write the HelloWorld Program in Windows

Step 1: Open a Command Prompt window, go to the desired folder, and type notepad HelloWorld.java.
Step 2: Hit Enter. As soon as you press Enter, you will see a Notepad editor screen.
Step 3: Start typing the program.
Step 4: Go to the File option and click the Save button to save the file. Close the Notepad window and move to the Command Prompt window again.
Step 5: Type javac HelloWorld.java and press Enter. If the program compiles successfully, the cursor will start blinking on the next line; otherwise, there will be an error message.
Step 6: Now type java HelloWorld and press Enter to get the output. You will see "Hello World" printed on the screen.

Conclusion

Finally, you can start coding in Java with an editor. You just need to follow a few easy steps and can develop as many programs as you want; you only need to have Java installed on your system. This is the easiest way of learning programming in Java.
You can also see our article on developing programs in the Eclipse IDE in the next tutorial. We hope you will no longer face difficulties with coding in Java. Do share feedback in the comment section if you liked the article.
https://techvidvan.com/tutorials/java-hello-world-program/
The "Game of Thrones" Roundtable | The Atlantic

We've been here before, of course, in both Seasons 1 and 2: The penultimate episode overturns the Game of Thrones playing board—Ned loses his head, the Lannisters turn the tide on the Blackwater—and the season finale picks up the remaining pieces.

As I've mentioned throughout our roundtables, I continue to be baffled by the show's start-stop pacing. The latter half of this season consisted of three consecutive episodes that significantly pared down the number of storylines and developments (often to agreeable effect), followed by the final two, in which major twists—including the major twist—...

... but the hint that maybe, just maybe, these two relatively decent characters could find a way to be happy as husband and wife gave another layer of resonance to the subsequent revelation that his dad had just engineered the brutal slaughter of her mother and brother. That's probably going to set back the clock a bit in the whole gradual-effort-to-win-her-heart scenario.

... incendiary capture of Astapor—and featured substantially too much crowd surfing.

I give way to no one.

But my dismay at the Ramsay adjustments is less narrative than philosophical. In the novels, Ramsay is both the figure who tempts Theon across a moral boundary from which he can never return (the killing of the peasant boys) and ...

Have I been waiting eight episodes to make that observation? Why, yes I have. How could you tell?

Now back to our regularly scheduled programming: I'd say tonight's episode provided a fitting, if imperfect, end to the season.
It had its ups and downs, its clever infidelities to Martin's text and its moments when it should have left well enough alone.

Overall, I found the season to be a substantial improvement over Season 2 (which was still pretty damn good), even if it failed to rise to the near-perfection of Season 1. Season 3 offered both reasons to be optimistic, and reasons to be less optimistic.

So, what did you guys think?

Kornhaber: ... the opening of mail.

As you point out, Chris, this Thrones finale was in full Thrones ...

"You disapprove?" Tywin asks his son, though he may well have been addressing Thrones fans who have sat happily through three seasons of murder and rape only to balk now at the loss of Young Wolf & co. "I'm all for cheating," Tyrion says, "but to slaughter them at a wedding..."

That could shift, though. Even though relatively little happened in this episode, the finale's many raven-delivered tidings of foreboding did impart a sense that change is coming.

Ross, your thoughts?

Douthat: The strongest critique of Game of Thrones ... "a repetition of intrigue, treachery, and the stab in the back."

The "Game of Thrones" Roundtable: Game of Thrones' Hectic, Morally Complex, Crowdsurfing Season Finale (published 2013-06-10; updated 2019-04-24). Our roundtable on "Mhysa," the 10th episode in the HBO show's third season.

The scene! The scene!

Sorry to get all Tattoo-from-Fantasy-Island on you, but it's been a long season of trying not to give away what was going to happen tonight to non-novel-readers such as you, Spencer.
Did my various attempts at obfuscation—pretending that the Big Reveal would be that Cersei's a dude, etc.—succeed in maintaining your innocence?

The Red Wedding is one of the best scenes—arguably the best ... one particular scene; when it was revealed that the title of Episode Nine (customarily the dramatic climax of each Game of Thrones season) would be "The Rains of Castamere," etc.

... was rocked when poor Ned Stark was parted from his head in Season One. Not having read the books at that point, I was completely stunned in a way that (obviously) wasn't really possible this time around.

More narrowly, we now have an answer for the spoiler-y question I posed a few weeks back ... the Lannister Honeypot thesis, which imagined that Talisa had been in on the betrayal from the very beginning. Oh well.

Apart from its central massacre, this episode continued what seemed like a season-long experiment in alternating between hitting the gas and hitting the brakes. Episode 5 ("Kissed by Fire") put the pedal to the metal, with excellent results, and then Episode 6 ("The Climb") slammed on the brakes, to comparably satisfying effect. Episodes 7 and 8 continued the relatively leisurely pace with mixed success, and now suddenly we're flooring it again—in the very episode one might have expected to devote itself primarily to one crucial, horrific storyline.

(... forget-me-now when you need one?)

What do you two say?

Kornhaber: Chris, I'm sorry to hear that you, as a book reader, felt underwhelmed.
Judging by the knot in my stomach through the episode's final act, the way my roommate and I sat in dumb silence as the credits rolled, and the symphony of "Game of Thrones guhhhh whut?!" Facebook statuses that then thundered across my newsfeed, the popular reaction among series newbies was to be heartbreakingly, satisfyingly overwhelmed.

You and I agree, though, about the wedding-party ambush in "The Rains of Castamere" being an "extremely striking bit of television." Writers/showrunners David Benioff and D. B. Weiss stacked the episode with protagonists just escaping near-death confrontations—Jon Snow's reversal with the Wildlings, Bran's well-timed psychic breakthrough, Daenerys's posse's three-vs.-many infiltration—and then gave us a main-character bloodbath defined by its awful inescapability.

From there on out, Benioff, Weiss, and director David Nutter deserve applause for how slowly and artfully they, uh, twisted the knife. The sequence of transitional shots remains indelible even the morning after: Robb and Talisa's moment of affection gazed upon warmly by the Stark matriarch, who then turns her head to note, curiously, a guard closing the chamber doors. Cue cellos, cue whimpering dire wolf, cue the Hound perplexingly rejected at the gates. By the time we see Roose Bolton's chainmail and sick smile, it's already been made clear—something's up.

After that, savagery. I'm surprised to hear that Talisa's character in the books isn't present for the massacre; it's hard for me to imagine how she wouldn't be. If Walder Frey's mission—presumed Lannister bounty notwithstanding—is to make Robb pay, offing the wife he so publicly loved right before his eyes would seem like killing-spree priority No. 1. But maybe in the books it's Robb's mom who dies first, then?
Here, Michelle Fairley's wits'-end performance made Catelyn's last stand riveting and sad; I barked a "fuck yes!" when she grabbed that hostage from under the table, hoping she'd find a way for her or her son to make it out alive. And it feels wrong to say this, but David Bradley's turn as the leering, kooky, vengeful Lord of the Twins deserves praise as well. His shrugged "I'll find another" demonstrated just how empowering amorality can be.

Was any of this spoiled for me prior to viewing, you ask? Well, kind of. It's tough to watch this show each week, and perhaps impossible to write about it each week, without some foreknowledge inadvertently seeping in. When checking spellings one time, I unwittingly glanced upon a fan page that referred to Robb in the past tense. I'd heard—maybe from commenters?—that this wedding was called "the Red Wedding." And I knew that the ninth episodes of the two previous seasons were calamitous. Those facts led to some speculation in my mind, and one theory I had was that, in fact, Walder would kill Robb, Talisa, and/or Catelyn in revenge.

Still, that didn't prevent me from gasping, leaning forward, and filling with real sadness and dread at the episode's climax. Besides, it's not like it hadn't been foreshadowed: The manner and magnitude of the Stark slaughter shocked, yes, but it also felt like the completion of a puzzle. Ever since Ned's death in Season One, the show has sketched out just how much Robb is his father's son, again and again making hard but "right" choices—and again and again sacrificing esteem from his allies when he does so.
In this ruthless world, could the King in the North really hope to prevail by trying to be a good, honorable man who only breaks vows for love?

His comments to the mother at episode's start, and the convo with Talisa about naming their son Eddard, underlined the symmetry of this killing—symmetry that no doubt ran through Catelyn's mind in those horrid closing seconds when she stood, slack-jawed, eyes vacant, presumably reflecting on the enormity of her family's ruin. Now, we're left with the weirder, wilier Starks to avenge the fallen. Four of those five (Jon, Bran, Arya... Rickon, technically) had uncommonly interesting storylines this episode, which gives hope that the show will carry on entertainingly, even with a chunk of its core cast now gone.

Speaking of entertaining cast members, it wasn't until an hour or so after the finish that I realized we spent zero time in King's Landing in this episode. I look forward to seeing the fallout of the Red Wedding unfold there. With these killings, Tywin's really sewn up the Risk board for himself, no? He's got the now-eldest Stark heir married to one son, the presumed loyalty of Houses Bolton and Frey, and his most effective rival gone. The seemingly less formidable Balon Greyjoy and Stannis Baratheon are all that appear to stand between the Lannisters and total rule. Boring. Daenerys, Mance Rayder, White Walkers: Would you please invade already?

Ross, are you with Chris on this great episode not being as great as the books could have allowed? And should we talk about the other bloodlettings—of the Wildlings, and in Yunkai?

Douthat: I feel bad joining Chris's camp, because it's so predictable for a fan of the book to decide that Benioff and Weiss fell short here.
Of all the challenges of the first three seasons, bringing the Red Wedding to life in a way that would satisfy Martin's fans looms largest, because of all of the books' gamechanger moments this is the most impressively executed, the most wrenching and tragic and brilliantly cruel, the most difficult to re-read without hoping irrationally that this time things might turn out differently. And it occurred to me going into last night that the structure of the show made the challenge even more difficult: In Martin's novels, the Red Wedding happens smack in the middle of a big fat book, Storm of Swords, when the reader isn't necessarily expecting a major shock, which makes it easier to build to it slowly and remorselessly, with what Slate's Dan Kois aptly describes as a feeling of "slowly dawning awfulness." But because that book has been cleaved in two for the show, we've reached the wedding at precisely the penultimate-episode-of-the-season moment when viewers—and HBO viewers, especially—are already primed to expect the tragic and horrific. So the show had to find a way to balance foreshadowing with surprise somewhat differently—giving us Arya staring at the Twins and other moments where dread crept in, but also leaning harder than the book on the deceptively happy moments (like when Catelyn's brother realizes that he's getting hitched to a nubile Frey rather than a hag) in order to keep the twist from becoming obvious to the uninitiated.

I liked how that balance was struck overall, even if it couldn't quite live up to the book's devastating momentum, and I liked a number of small details: the interaction between Catelyn and Roose Bolton (in the book, she finds the chain mail under the sleeve of a random Frey), the false hope—extinguished by crossbow bolts—when Arya hears the barking Grey Wind and you think for a moment that she might let him out, and basically everything about David Bradley's performance as Walder Frey.
I also thought that the closing moment, when Catelyn tries to make Walder's latest wife her hostage, was a good example of Benioff and Weiss's stakes-raising really working: In the novels, she grabs one of his many grandchildren, and a mentally deficient one at that, and so you just know it's never going to work. That plays well with the remorselessness of Martin's approach, but in the context of the show I liked having it seem almost plausible that the hostage-taking would work, and giving Frey a chance to deliver the awesomely awful line about finding "another" before the blades went home.

So why am I, like Chris, a little more let down than satisfied? Two reasons, I think. First, because my appreciation of the climax was diminished by the fact that I didn't really like the structure of the episode as a whole: There was just too much going on (especially a week after the show did a successful weeding of its plotlines) in an hour that should have all been pointing toward a single endpoint. I understand the choice to pair the events in the Twins with Bran and Jon's near-encounter in the tower, since both were cases of the long-separated Starks almost getting back together, and they featured some effective echoes and near-parallels. (The dire wolves saving the day in the north made it easier to hope that Grey Wind might do the same in the south, and Jon ditching Ygritte was an interesting counterpoint to Robb dying with Talisa.) But did we need an interlude with Sam and Gilly this week? And given that the sack of Yunkai mostly happened off-screen anyway, couldn't all the business with Daenerys and her jealous team of rivals have been shunted into another episode as well? (Better to give us a snippet with the Lannisters, since they're actually responsible for the events in the Twins, or to finally reveal the identity of Theon's torturer, since that—spoiler alert—connects to the Red Wedding as well.)
Maybe making this a "bottle episode" would have made it that much more obvious that the wedding would end in blood. But this was just a much, much bigger sequence, for the show and the story, than even the Battle of the Blackwater last season, and I think Benioff and Weiss made a mistake cluttering things up with other scenes and other places that didn't even connect tangentially to what went down under Lord Frey's roof.

And then, secondly, did we really, really need to watch Lady Talisa get stabbed repeatedly in the uterus, until it looked like her entire stomach was falling out? I know I'm a tedious broken record on the subject of the show's penchant for exploitation, and this was a case where the violence was legitimately tragic rather than (as with, say, Theon's torture) just gore for gore's sake. But building up Talisa's character in order to reveal her as a Lannister honeypot would have been such, such a more interesting path for the showrunners to take. Instead, it was all so very predictable: Her character was altered and expanded from the books because Benioff and Weiss decided that the assassination of Robb Stark, his mother and his bannermen just didn't give them enough to work with, and what was needed was to throw in the double murder (if you'll forgive my pro-life premises) of a young mother and her unborn Eddard as well. If there's such a thing as "gore-ing the lily," you can always rely on this show to do it, and the effect in this case was to weaken the primal horror of Catelyn's fate—a mother dying in the (mistaken) belief that all her sons have been killed—by giving us a competing primal horror to focus on as well.

In sum, it was too much to expect the show to quite match the horrific perfection of the book. But in trying to eclipse the awfulness of Martin's scene, they made this adaptation more imperfect than it needed to be.

The "Game of Thrones" Roundtable: Game of Thrones' Crazy, Bloody Showdown ...
Underwhelming? (published 2013-06-03; updated 2019-04-24). Psychic powers, bathroom breaks, and a game-changing wedding: Our roundtable on "The Rains of Castamere," the ninth episode in the HBO show's third season.

Kornhaber: Samwell... Effing... Tarly! You didn't just leave that blade lying in the snow, did you? It kind of looked like you did!

OK, calming down, calming down. White Walker aside, that was a pretty tranquil episode, right? Not boring, though. Just one of the few Game of Thrones installments that stuck to a number of storylines you can count on one hand, where the decapitations happened off screen, and where plots occasionally turned on people being nice to one another. It's funny: Last week, Theon pled for mercy and so did we. This week, showrunners David Benioff and D. B. Weiss kind of granted it.

Yes, no? Thoughts on the episode generally? Most importantly: Did my eyes betray me at the end of the hour, or did Sam really leave behind that blade?

Orr: ... the Basic Instinct treatment from Melisandre, only to have, in his tumescence, a leech attached to his junk. It's like the show's turning into Scared Straight! ... Game of Thrones has fallen off the wagon in a big way.

... nexus in the Spider-Man movies: Because, yeah, I want to drink a bourbon that provokes sociopathic super-villainy.)

So, overall, a solid but unremarkable episode in my view, not as good as the extremely strong run from the third episode through the sixth, but not nearly as disappointing as last week's installment.
Which is a decent accomplishment, considering that we were offered no Varys or Littlefinger (for the second consecutive week!), no Robb, and no Ygritte (or that sullen crow she's dating). On top of that, I can hardly believe that we weren't treated to any gratuitous Theon mutilations. I'd been counting on the removal of at least a couple of his molars, or maybe a swerve or two of intestine...

What about you, Ross? What did you like, and what did you miss? And are you looking forward to next week?

Douthat: Early in the season I talked a bit about Alan Sepinwall's theory that Game of Thrones ...

That's all from here—now I'm off to drop some leeches in the fire ...

The "Game of Thrones" Roundtable: Game of Thrones Finally Takes Some Mercy on Its Viewers (published 2013-05-20; updated 2019-04-24). Leeches, libations, and a dropped dagger: Our roundtable on "Second Sons," the eighth episode of the HBO series' third season.

Orr: Last week, I applauded how nicely showrunners David Benioff and D. B. Weiss were putting their own mark on George R. R. Martin's source material. This week, alas, I'm forced to contemplate the flip side of that coin.

The pace of Game of Thrones, episode to episode, continues to be perplexing. Two weeks ago we received an installment crammed thick with developments and hewing very closely to Martin's books. Last week and this week, by contrast, the show has slowed to a crawl and repeatedly veered in non-canonical directions.
The difference is that last week's episode was good (among the best, in my assessment), and this week's, "The Bear and the Maiden Fair," was probably the worst of the season so far.

... —toooot!—in a twist sure to surprise no one (least of all Theon), Mad Mr. "Have You Guessed My Name Yet?" shows up with a silly horn and a wicked blade, like the infernal offspring of Harpo Marx and Jack the Ripper.

... the "I invoke sumai!" encounter at the gates of Qarth in Season 2.

The scene between Joffrey and Tywin, meanwhile, was not bad per se, but it certainly didn't live up to the expectations set by the latter's imperious "I will" when Cersei dared him to curb her son's monstrous appetites back in Episode 4.

... (Breaking Bad, The Walking Dead, and other series) is taking the helm again next week.

... you guys would keep me from being killed. It's a nice little variation on the old "I don't need to run faster than the bear, I only need to run faster than you" ...

What is perhaps most peculiar of all is that, for all its deviations from Martin's text, this episode was written (like "The Pointy End" in Season 1 and "Blackwater" in Season 2) ... Martin is never going to finish the remaining novels. ... schedule, his other side projects (the possible Dunk & Egg prequel, The Wit and Wisdom of Tyrion Lannister), and his vague assertions regarding a target date for The Winds of Winter, and it seems that even if Martin eventually does manage to complete the final two (or maybe three?)
books of A Song of Ice and Fire, it won't be until long after Game of Thrones has completed its run.

Which means that bringing this epic to its finale will likely be up to Benioff and Weiss—a prospect made slightly more tolerable by the fact that they wrote last week's (excellent) episode and Martin wrote this week's (disappointing) one.

Assuming, that is, that this week's episode was as disappointing as I found it to be. What do you guys think? Was I missing anything?

Kornhaber: Nope, I think you hit everything. That was a boring, frustrating episode, probably the worst of the season.

... aren't. For the first time ever, for example, I felt unriveted by a Tywin Lannister browbeating.

Another thing worth appreciating: After an episode about the virtues of betrayal and an episode about circumstantial couplings, the show spent some time with unions forged less by necessity than by affection—a Westerosi force seemingly even more rare than magic. Orell's "when-it-suits-them" spiel to Jon established the conventional, cynical reading of how Thrones' ... Sex in the Seven Kingdoms gabfests hinted that duty and desire sometimes coincide.

... Thrones would I be unable to speculate on which will be uglier.

Ross, as Chris mentioned, this episode doubled down on most of the things you've disliked about Benioff and Weiss's methods. Are you even grumpier about "The Bear and the Maiden Fair" than we are?

Douthat: Well, since you guys were so tough on this episode, I suppose it falls to me to say something positive about it. So here's what I've got:

1) Charles Dance is a very good actor.

2) Brienne's "escapee from Valhalla" look was, if possible, even more awesome than usual when she was fighting the bear in that dress.

3) I, too, am a big fan of Rose Leslie as Ygritte.

4) Dragons!
Everybody likes dragons.

5) It made last week's episode, which at the time I found moderately disappointing, seem brilliant by comparison.

But let me try to conclude with some optimism. The first two seasons both dragged in places—Season 1 in the first half, Season 2 ...

Except for Theon in the torture chamber. For that, there is no forgiveness.

The "Game of Thrones" Roundtable: Game of Thrones' Worst Scene Yet? (published 2013-05-13; updated 2019-04-24). Dismemberment, disappointment, and dragons: Our roundtable on "The Bear and the Maiden Fair," the seventh episode of the HBO series' third season.

Kornhaber: If last week's duel-, death-, and devirginization-packed episode showed Thrones ...

... "probably" ... deliver: Anyone else feel as vertiginous as Jon Snow during those climbing scenes?

Second: A few episodes back, I puzzled over Benioff and Weiss shoehorning in a few offhanded references to homosexuality. Last night, they further fleshed out their and Thrones' ...

... Thrones, intentionally or not, has been propagating a cliché about there being only one kind of gay person. Here's hoping for more diversity of depiction going forward.

Anyways, that's merely a political quibble with an otherwise wonderful episode.
What did you guys think?

Orr: So my initial response to the episode, especially considered in conjunction with the previous one, "Kissed by Fire" (the script of which was delegated to another writer and was extremely faithful to the novel), was: Uh-oh, Benioff and Weiss have started speeding through Martin's material so they have more time to explore their own variations on his theme.

But my second response, which registered seconds after the first, was: Who cares? If Benioff and Weiss can come up with material as strong as this on a regular basis, let them go for it. While a straight page-to-screen translation of Storm of Swords would make for terrific television (it's the best of the novels by a significant margin), in future seasons the showrunners' ability to save Martin from his own excesses will become ever more crucial.

... her above all, was the high point of this storyline to date, lending it a new gravity and urgency. Their closing kiss atop the Wall made official that they're one of the very few couples we've met in Westeros worth actively rooting for.

Douthat: ... But that last scene on the Wall was pretty excellent. So I'll end there, as they did, and note that for all my carping I can't wait for next week.

The "Game of Thrones" Roundtable: Game of Thrones Ditches the Book (published 2013-05-06; updated 2019-04-24). Magic, homosexuality, and departures from the novels: Our roundtable discusses "The Climb," the sixth episode of the HBO series' third season.

Orr: Ygritte, light of my life, fire of my....

Where was I?
Oh yes, Episode 5, "Kissed by Fire."

A couple of recaps ago, Ross noted that he would "respect Game of Thrones' ..." ... Hair.

And who can blame them? The storylines in which characters declined to disrobe had an unpleasant tendency to resolve themselves in murder, execution, compelled marriage, and, in one case, a (temporarily) fatal instance of bisection by sword.

It was, in short, an episode crammed with developments almost across the board. While there were no appearances by Theon, Sam, or Bran, and the Mystery of Pod's Penis ...

As we discussed earlier in the season, ...

Kornhaber: Too... many... developments? No. I show up to an episode wanting to see forward movement, and unlike in the talky, table-setting opening hours of this season, the bulk of the scenes here provided an "oh SHIT!" moment of one sort or another. Count this newbie as satiated.

... Thrones can be not only pragmatic, but principled.

I'm not sure I like whatever that "something" is, though. As we saw so many times last night, much of Thrones ...

Onto the sex scenes. At Think Progress recently, ... just knows ... the long-maligned deficit in equal-opportunity ogling.

The "Game of Thrones" Roundtable: Increasingly Egalitarian Nudity on Game of Thrones (published 2013-04-29; updated 2019-04-24). Our roundtable on "Kissed by Fire," the fifth episode of the HBO show's third season.

Kornhaber: Talk about a captive audience: In this good but often hard-to-watch episode, it was almost too ...

All microcosms of the Thrones-watching experience, no?

Sorry—that sounds harsh.
It's just that this installment focused on the agony of imprisonment and the ecstasy of freedom... and, as is typical for Thrones, ...

... not condescending to him ("He didn't have to. All he had to do was... be.").

... Thrones example of someone plotting an escape that (ideally, so far) doesn't involve bloodshed.

... Rambo.

Chris, last week you floated the idea that ... Thrones ...

Chris and Ross, were you as alternately icked-out and hopped up by that episode as I was?

Orr: [Blink.]

[Blink.]

Got that, Spencer? I'll have more to say about Theon's tormenter—well, when the show has more to say about him.

Getting back down in the weeds, I too didn't much like the first scene of the episode, in which the newly left-handed Jaime falls from his mount, drinks horse piss, etc., etc. My complaints from last week ...

... A Christmas Story: "Frah-jee-lay? It must be Italian!") Given good material to work with, Conleth Hill, who plays Varys, is as good as anyone on the show. (See, for example, his scene in the dungeons with Ned Stark in Season One or his disquisition on the origins of power with Tyrion in Season Two.) And while he does better than most at transcending middling material such as this, it's a shame to see him have to try.

Alas, when Cersei takes Lady Olenna's feminist blandishments to heart, and suggests to her father Tywin that she ...

... raised the intriguing possibility that there may be more to this mini-plot than meets the eye—specifically, that Tyrion, having paid the whores in advance, told ...

... khaleesi!)

Really?
You think that this is a world in which you can count on evil acts being punished?

You know nothing, Jon Snow.

Douthat: I suppose I could have done with a few more of those special effects dollars funneled into the sacking of Astapor: I loved that Rambo ...

Dracarys!

The "Game of Thrones" Roundtable: On Game of Thrones, Two Savage, Spectacular Game Changers (published 2013-04-21; updated 2019-04-24). Our roundtable on "And Now His Watch Is Ended," the fourth episode of the HBO show's third season.

Douthat: ... (Rome). ...

Not every plotline saw the momentum pick up, alas. Stannis's whining to Melisandre and her opaque replies probably set Spencer's teeth on edge, ...

... The Sopranos ... Game of Thrones uses its whorehouse excursions often feels worthy of a Seth MacFarlane monologue.

Kornhaber: Ross, you're right. Hands down, the best episode so far this season. Or should I say hand down? Cuz... like... Jaime... hand... down... get it?

Sorry for the loopiness. I'm still buzzing from what was one of the funnier, more kinetic, more cleverly constructed Thrones installments yet—where the plot not only started to move in interesting ways, but the camera as well. The change of pace might be credited to this being the first and only episode to be directed by showrunner David Benioff (his copilot D. B. Weiss has yet to direct). We've discussed how the series largely occupies itself with talk, but it appears the guy in charge has a flair for the nonverbal, too.

The opening two minutes set the agenda: No words, just the sublime choreography of Edmure failing to light his father's pyre.
I was giggling nervously along with the rest of the funeral party, and the grimly comic tone lasted through the three scenes that followed. Robb, in what may be Richard Madden's best moment yet, laconically struggled to contain his anger as he told off his condescending and disobedient uncle. Then we saw the small council awkwardly and silently selecting its own seating arrangement. The next sequence opened with a slow pan down on Jaime and Brienne's captors, paying musical homage to a homo sapien/ursine relationship.</p><p>The rest of the episode delivered as cinema, as well. Theon's chase offered a bona fide, will-he-die-slash-get-raped-or-won't-he thrill. The hour's other instance of threatened sexual violence was even more tense and horrifying, although the action—and Brienne's screams—came from off screen. I loosed a cackle when the tension was cut at the same time as Jaime's wrist and the episode itself, leaving the Hold Steady's punk-rocking version of the aforementioned drinking song to play us out. All in all, Benioff, nice job.</p><p>And even though the showrunner is, <a href="">as I pointed out last week, on record as anti-theme,</a> over and over in "Walk of Punishment" characters were met with the limits of familial privilege. Dynasty matters in Westeros, but how much? Not enough to give Hoster Tully a dignified funeral—his son's ineptitude prevents that. Not enough to prepare Tyrion to be master of coin—"a lifetime of outrageous wealth hasn't taught me much about managing it." Probably not enough for Theon to make good on his promise of lordship to his rescuer, who reminds him that "we're not in the Iron Islands." Not enough for Lady Brienne of Tarth herself to convince her would-be rapists that she's anything but "a big dumb bitch from who cares where."</p><p>And, of course, not enough to prevent Jaime from dismemberment.
In fact, it was his smarmily displayed privilege that endangered his hand in the first place—that made his tormentor want to brand Jaime with a lifetime reminder of just how little his parentage matters to the wider world. Already, we'd seen the gap between reputation and reality interrogated by Brienne: "All my life, I've been hearing Jaime Lannister, what a brilliant swordsman ... Maybe people just love to praise an over-famous name." Now that Jaime is deprived of his physical ability and his over-famous name is exposed as a liability, I'll be fascinated to see him resort to his not-inconsiderable remaining asset: his wits.</p><p>But I'm with you, Ross, on the episode's strangest subplot: the deflowering of Podrick. To be sure, Benioff's flair for pacing and camerawork again showed itself in Tyrion's circus-master introduction of his squire's companions for the afternoon, but yes, the old exploitative vibes are back. Worse, though, I just didn't get the point. My terrible working theory is that this is the introduction of a supernatural storyline—is Podrick, like, the unrealized, soon-to-be-world-conquering avatar of the Lord of Libido or something?—but it could have been just the writers' attempt to lighten the mood. Chris, you've read the books. Any way to illuminate just what was going on here, without spoiling what's ahead? And we still haven't talked about the surprising flash of humanity we saw from Jaime when he spoke up for Brienne. Psychoanalyze away.</p><hr><p><b>Orr:</b> Well, since you've already guessed it, Spencer, I suppose it doesn't count as a spoiler. I happen to have gotten my hands on an early draft of Martin's not-yet-published sixth tome, in which it is revealed that Melisandre <i>did</i> misread what she saw in the flames. Remember that scene early in Season Two, when she had Stannis pull the burning sword, Lightbringer, from the pyre as proof that he was truly the reincarnation of Azor Ahai and chosen vessel of the Lord of Light?
Well, she was wrong. It turns out that the divine vessel is actually Podrick, and Lightbringer is his penis.</p><p>Gives new meaning to the whole "the night is dark and full of terrors" bit, doesn't it?</p><p>But returning to the episode itself, I agree with almost everything you guys have written. As in Season One, the show seems to be gathering momentum, gradually but (I hope) inexorably. And Benioff's rookie outing as director was indeed excellent.</p><p>The opening scene that introduced Edmure and the Blackfish was a tour de force—much more vivid and memorable than in the book. (And thank you, Ross, for reminding me why Tobias Menzies, who plays Edmure, was so familiar. Now that he and Ciaran Hinds have joined the cast, maybe HBO should bring aboard a few more refugees from <i>Rome</i>? I imagine that Kevin McKidd would be delighted to be rescued from the televisual purgatory of <i>Grey's Anatomy</i>...)</p><p>I also couldn't agree more that Richard Madden's Robb grows more kingly with every episode, and that his dressing-down of Edmure was his best scene yet. If only Kit Harington were making comparable strides as Jon Snow; rather, he seems headed in the opposite direction, growing ever more listless as winter approaches.</p><p>The bit of physical comedy with the small council table was deftly handled, and is another great example of Benioff and Weiss taking something implicit in the book (there's some vague jockeying for seats, as I recall) and expanding it into a genuine <i>scene</i>. I felt the same way about the farewell to Hot Pie: It's a minor moment, of course, but the showrunners gave it room to breathe, and I found it unexpectedly endearing, clumsily sculpted wolf-loaf and all.</p><p>The aforementioned joke about Pod's carnal precocity, by contrast (no, it's not actually in the books), fell a bit flat for me; here's hoping it's a one-off gag. And I agree with everything you both wrote about the nudity quotient—or perhaps I should say "quota."
In the show's early days, I presumed that its frequent recourse to sex was a borderline-defensive advertisement of its adultness. (Just because we're a fantasy show, it doesn't mean we're for kids!) But honestly, aren't we beyond that by now?</p><p>Regarding the episode's main event, the behanding of Jaime, I have a few thoughts. First, yes, Nikolaj Coster-Waldau has been, and continues to be, absolutely terrific—even better than he was at <a href="">hawking salami on German television</a>. The evolution of Jaime as a character is one of the best threads in Martin's novels—remember, this is a guy whom we first met throwing an innocent child to his presumed death—and far be it from me, Spencer, to offer any hints of where that evolution is headed.</p><p>I was disappointed, though, with the decision to make Jaime's maimer some semi-anonymous (or perhaps literally anonymous: do we even learn his name?) vassal of House Bolton. The character(s) responsible in the books have been mostly written out of the show, which is fine. But this felt to me like too momentous an event to be delegated to a complete nobody. (We had the same problem last season, in which Theon was persuaded to kill the two farm boys—and thus crossed a moral threshold from which he could never return—by some generic Ironborn named "Dagmer"; in the books, he was ushered into damnation by ... well, by someone that viewers of the show will get to know soon enough.)</p><p>Moreover, I didn't buy the motivation of the nameless hand-hacker. Would some mid-level (at best) flunky really mutilate such an important captive, on his own authority, just because he didn't like being condescended to? His lord, Roose Bolton—sigil: The Flayed Man—doesn't strike me as the kind of boss you'd want to risk displeasing. And remember, <i>his</i> boss is Robb Stark: Hard to imagine he'd be pleased to get back only 90 percent of the Kingslayer his mother set free. 
(Plus, is it just me, or does our anonymous new friend bear an unsettling resemblance to Christopher Guest in <i>The Princess Bride</i>? Quick: Count his fingers!)</p><p>Regarding the episode's multiple iterations of the song "The Bear and the Maiden Fair": It's a recurring leitmotif for multiple relationships in the story, notably Ser Jorah and Daenerys, and, in an ironic reversal, Brienne and Jaime (he's the "maiden fair"). It's also the title of Episode 7 of this season, which is exciting news for fans of the books—though not as exciting as the fact that Episode 9 is named after another easy-listening Westeros favorite, "The Rains of Castamere."</p><p>One final observation: You note, Ross, that the Bran storyline is likely to be a bit of a dud, and I agree (though I still think the casting of Thomas Brodie-Sangster as Jojen Reed will help mitigate the dud-ness). But it occurs to me that this isn't just a weakness of "On the Road with Jojen and Bran"; it's a weakness of "On the Road with <i>Anyone</i>." If there's one type of storyline that George R.R. Martin (and, as a consequence, Benioff and Weiss) has had trouble with, it's that staple of the fantasy genre: the quest. Throughout books and show alike, a high proportion of the duller narrative segments occur when some character—be it Bran or Brienne, Daenerys or Arya, Jon Snow or Samwell Tarly—sets off in search of someone or something. By contrast, it's when we return to the central plots and counter-plots, alliances and betrayals, open machinations and secret motivations that the story really gets humming. In this way, <i>Game of Thrones</i> almost resembles a crime or espionage saga more than a typical fantasy yarn.</p><p>It strikes me that this is one reason that the scenes featuring members of House Lannister are almost always good ones: The Lannisters have no interest in <i>going</i> anywhere.
They already have pretty much everything they want, and are thus content to stay in King's Landing (or off-screen at Casterly Rock) scheming about how to keep it. Apart from the trip to Winterfell and Tyrion's corollary jaunt to the Wall early in Season One, the only time any Lannisters seem to hit the road at all is when there's a battle that needs immediate fighting or when someone takes one of them captive. Finally, I think this relative mediocrity of Martin's quest storylines also helps explain why the fifth book of the series, <i>A Dance With Dragons</i>, was such a great disappointment: It's not merely that he introduced 47 new characters that we didn't care about; it's that very nearly all of them were undertaking some quest.</p><p>In any case, sorry to bring all this up at the end of this week's installment, but if either of you has thoughts on the subject (or any other) I'd love to hear them next week. Until then...</p>

The "Game of Thrones" Roundtable: <em>Game of Thrones</em>' Problem With Quests (April 14, 2013). Our roundtable discusses "Walk of Punishment," the mostly excellent third episode of the HBO show's third season.

<p><b>Orr:</b> Ask and ye shall receive, Ross. <a href="">Last week</a>, you rued the absence of a certain character from the Theon storyline, and lo, this week he appears. As he has yet to identify himself, I won't do the honors. But I expect to have further thoughts on his belated arrival as that storyline progresses.</p><p>You also noted the way <i>Game of Thrones</i> showrunners David Benioff and D. B. Weiss had tweaked Martin's novels to present Margaery's charity work as, at least in part, a political tactic to exploit the Lannisters' incomprehension of "soft power."
And here we have King Joffrey explicitly confirming this blindness. When his mother, Cersei, who senses something afoot, suggests that Margaery's "concern with the wellbeing of common people is <i>interesting</i>, " he replies curtly: "Not to me."</p><p>Indeed, this is an episode in which the female characters consistently seem a step ahead of their male counterparts. There's Margaery redirecting Joffrey's confused sexual ire (the angle at which he holds his crossbow when firing will be immediately recognizable to anyone who sat through one of those "subliminal sexual imagery in advertising" movies in school); Meera Reed operating as her brother Jojen's protector; Shae playfully taunting Tyrion; and, most literally, Brienne giving Jaime a lesson in steel at the episode's conclusion.</p><p>Perhaps more interesting still is that the earlier intimations of sisterhood (e.g., last episode's scene with Shae and Ros at the port of King's Landing) are becoming more explicit. Shae lectures Sansa that "Men only want one thing from a pretty girl." Catelyn confides in Talisa, her son's potentially costly new bride. And, of course, Lady Olenna, the Queen of Thorns (played beautifully by Dame Diana Rigg) gets Sansa to say what she <i>really</i> thinks of her rotten little ex-fiancé, Joffrey.</p><p>I have a few more thoughts on these last two scenes, beginning with the sitdown between Olenna, Sansa, and Margaery. This is another early-season scene (we cited a few last week) that is lifted directly—and just about perfectly—from the novel, with little changed apart from the setting. I had worried that the casting of Rigg might be largely a sop to aging fanboys, such as myself, who loved her in <i>The Avengers</i> (the British <a href="">spy show</a>, not Whedon's <a href="">cash cow</a>) and <i>On Her Majesty's Secret Service</i>. 
But she's terrific here, and never more so than when Sansa reveals that her soon-to-be son-in-law is a "monster," and she replies with only mild disappointment: "Hmm. That's a pity." Natalie Dormer's Margaery, too, is unfolding delightfully as the season progresses. I'd put her on the short list, with Bronn and Tywin, of the characters who've been most improved in the translation from page to screen.</p><p>I was less thrilled with the scene between Catelyn and Talisa. I understand Benioff and Weiss's desire to make Robb's wife a more prominent character than she is in the book (in which she's a young lady named Jeyne Westerling who barely registers at all), but I fear they may have overshot the mark a bit when it comes to the screen time afforded Talisa. Also, in changing the match from one made for honor—in the book Robb marries Jeyne because he feels duty-bound to do so after sleeping with her in a moment of weakness—to one made for love, they give the relationship a somewhat jarringly modern feel. (She's even a career woman!)</p><p>What bothered me in this scene though was not Talisa, but Catelyn, who explains that she blames all the tragedies that have befallen her House on her inability to love her husband's bastard son, Jon Snow, as her own. This is a bit that's been added by Benioff and Weiss, and while it's nicely written, it rings false to my sense of Catelyn, who is pretty much defined by a kind of righteous obstinacy, especially where Jon is concerned. Perhaps more to the point, it seems a little odd to go looking for distant sins that could explain her family's misfortunes when her own recent actions offer explanation enough: Her arrest of Tyrion did, after all, start the war with the Lannisters and lead to her husband getting stabbed through the leg; and her unsanctioned release of Jaime has already sown dissent among Robb's men. Maybe <i>those</i> bear more blame for the family's predicament than her inability "to love a motherless boy?" 
Now it could be that Benioff and Weiss are planting a seed with this scene that will blossom into something interesting later. But if it's merely a one-off, it's one I think the show could have done without.</p><p>There are plenty of other good moments in the episode, including the introductions of Jojen Reed (who seems perfectly cast in Thomas Brodie-Sangster, the voice of <a href="">Ferb</a> and long-ago moppet of <i>Love Actually</i>) and his sister Meera, as well as that of Thoros of Myr (here evidently combined with the Tom Sevenstrings character of the novels). And Brienne and Jaime have two of their best scenes together to date: his enunciation of a decidedly liberal (and entirely self-serving) philosophy of sexual freedom—straight, gay, twincestual—and her rather persuasive drubbing of him on the bridge. Finally, I'll note that, for the eagle-eyed, there's a subtle clue to be found in one of the banners we see this episode.</p><p>But enough from me. What did you guys think?</p><hr><p><b>Douthat:</b> In the run-up to this season of GoT, the talented critic (and fearsome recapper) Alan Sepinwall <a href="">had a piece</a> in which he mused on how much he loved last season's "Blackwater" episode because it showed how good this show can be when it doesn't have to hopscotch from character to character, setting to setting, but can set all of a week's action in basically the same place. That episode, he wrote, opened up "a host of possibilities" that the showrunners could potentially explore—like, say, concentrating individual characters' adventures and arcs into a few episodes (or even just one) rather than catching up with each cast member for five minutes every week. But he also noted that Benioff and Weiss seem to feel that those possibilities are, well, mostly impossible—that "it simply isn't practical to do a "Blackwater"-style episode focusing on fewer characters more than once a season. 
There are too many stories and too many characters to keep track of ... and this is the only realistic way to do it."</p><p>Reading that piece, I thought Benioff and Weiss basically had it right and that Sepinwall's idea wouldn't work. I would be frustrated (and my wife would probably stop watching the show) if Daenerys didn't make an appearance at least every other week, and right now the idea of a Jon Snow-only episode sounds about as fun as a night on sentry duty atop the Wall. Overall, the show's hopscotching approach to its sprawling story is problematic but probably necessary: It reminds us where everyone is from week to week, gives us the fix we need from our favorite characters, and guarantees that even when things get dull (as they do for Spencer when Stannis shows up on screen) we're only a few minutes away from a change of scenery and hopefully a more engaging plotline.</p><p>But the approach is problematic, nonetheless, mostly because it means that some episodes just feel like themeless puddings—and though I appreciate your struggle to read a sisterhood motif into this week's installment, Chris, it mostly just felt like a series of disconnected events to me, with too many scenes that existed to bridge us to the next one (that's my interpretation of Catelyn's maunderings about her relationship with Jon—they were there because the writers felt like it would make a nice dissolve to beyond-the-Wall) or ensure that we didn't forget what happened last week (Shae's conversation with Tyrion was mostly just a long reminder about Littlefinger's interest in Sansa). Even the climax had relatively low stakes: The Jaime-Brienne duel was fun, like all their scenes, but since they both want to go to the same place, it wasn't even all that clear how much Jaime would gain from killing her. 
(Though the men who came galloping onto the bridge at the last should raise the stakes a bit more next week, if they are who I think they are ...)</p><p>There was still plenty to like, of course: The welcome return of Arya and the Hound, Diana Rigg (as a child of the '80s, she'll always be the host of <i>Mystery!</i> to me) classing things up as the Queen of Thorns, the continued development of Margaery Tyrell into a full-fledged player of the Game, and yes, the first appearance of Theon's mysterious captor. But overall, this reminded me of some of the early episodes in Season 1: There was a lot of necessary scene-setting and a few excellent moments, but a palpable lack of momentum as well.</p><hr><p><b>Kornhaber:</b> You're onto something, Ross, with the "themeless pudding" line. And the showrunners are with you: Benioff <a href="">recently told</a> Grantland's Andy Greenwald that "Themes are for eighth-grade book reports." But pudding can be quite satisfying, no? When we tune into <em>Game of Thrones</em>, we're looking for a somewhat more primal fix than we get from a lot of prestige TV shows—what Tolkien called <a href="">"the enchanted state"</a> and Peter Dinklage (on the <em>Daily Show</em>) <a href="">called</a> "nerd glaze," both of which result from escaping to another universe to binge on glorious plot, plot, plot.</p><p>Still, I think that you and Benioff may be underselling the literary aspirations of the show. As it collages together more and more storylines, each <em>Game of Thrones</em> episode usually manages to draw out some underlying truth about its world—a world that shares DNA with our own—without belaboring the point. (Had only the Wachowskis taken notes before filming <a href=""><em>Cloud Atlas</em></a>.) 
So I'd argue Chris is right to sense a girl-power vibe in this latest installment: Nearly every storyline featured women acting as—or at least trying to act as—protectors of one sort or another.</p><p>Look: Catelyn fashioning what she hopes is a life-saving talisman for her kids; Margaery and Olenna promising no harm will come to Sansa in exchange for the dirt on Joffrey; Shae telling Tyrion that they need to shield Sansa from Littlefinger; Osha leaping to Bran's defense and Meera in turn leaping to Jojen's; Arya defiantly greeting the Brotherhood as her buddies cower; Brienne's continued furtive escort of Jaime; and even (OK, this may be a stretch) the apparent rescue of Theon by an agent of his sister's. The methods, goals, and outcomes in each of these situations vary widely, but they nevertheless prove out one of <em>Game of Thrones</em>' guiding philosophies—that the people on the margins matter more than the people at the center realize.</p><p>My favorite example of this came in the Joffrey storyline. We first see Cersei attempting to advise her son, prodding him to be wary of his betrothed: "Margaery Tyrell dotes on filthy urchins for a reason. She dresses like a harlot for a reason. She married a traitor and known degenerate like Renly Baratheon for a reason." Joffrey, in the grand tradition of teenage petulance, informs his mother that this is "one of the most boring conversations I've ever had," and in the less-grand tradition of patriarchy, says Margaery "married Renly Baratheon because she was told to. That's what intelligent women do: what they're told."</p><p>But it turns out the young king had been listening to his mother after all. He summons Margaery to his chambers and questions her about her relationship with Renly, recycling Cersei's exact phrasing: "He was a known degenerate."
Margaery handles the situation with characteristic, savvy deference, and soon she's got her finger on Joffrey's crossbow's trigger—an image that can be read as sexual, as Chris pointed out, but also as an expression of the soon-to-be queen getting a grip on the instruments of power.</p><p>That "known degenerate" line strikes me as the writers trying, perhaps clumsily, to work another theme into the episode: attitudes about homosexuality. Joffrey says he's considered making Renly's "perversion" punishable by death, which to my ear is the hardest-line statement of sexual intolerance yet offered over the course of the series. Usually, characters have reacted to Renly's proclivities in roughly the same manner as Jaime does in this episode: Wink, make gross jokes, and then look the other way. But Jaime too gets an un-<em>Thrones</em>-y line of dialogue about sexuality. In any other context (say, <a href="">a Macklemore song?</a>), "we don't get to choose who we love" sounds like equal-rights boilerplate, but coming from a twincest-partaking villain and in a realm where every marriage (other than Robb's, as Chris pointed out) is pre-chosen and unrelated to love, it's ... curious.</p><p>Is the show up to anything with these two moments? Not sure. Could just be a few more ingredients in the pudding.</p>

The "Game of Thrones" Roundtable: <em>Game of Thrones</em>: A Feminist Episode, a Gay Episode, or a Dull Episode? (April 7, 2013). Our roundtable on "Dark Wings, Dark Words," the second episode of the HBO show's third season.

<p><b>Kornhaber</b>: Sex and death get the headlines, but anyone who's binge-watched the first two seasons of <em>Game of Thrones</em> in a few-weeks span—guilty here—knows that the show spends most of its time with chitchat.
So it's a good sign that last night's Season Three premiere had some of the show's best all-talk scenes yet.</p><p>Chris, you've <a href="">marveled at</a> how <em>Game of Thrones</em> ...</p><hr><p><b>Orr:</b> Ah, Spencer. You have wisdom beyond your education, my scarcely housebroken sellsword.</p><p>... <a href="">fantasy disappointment</a> ...</p><p>But having consumed the novels after Season 1 ...</p><p>... <i>A Storm of Swords</i>, and I was amazed at how much Benioff and Weiss have cut, while seeming to cut so little. As a magazine editor for more years than I like to remember, I am suitably awed.</p><p>... (<i>Rome</i> and <i>Munich</i>).</p><p>... <i>more</i> prominent than he is in the books—should have this season. As an aside, I'll mention that I've been a big Charles Dance fan going all the way back to <i>White Mischief</i>, <i>Alien 3</i>, and, yes, Schwarzenegger's <i>The Last Action Hero</i>, in which Dance played the villain. I can still hear the bottomless contempt he managed to pour into his delivery of the word "cretin" (pronounced with a short "e").</p><p>Regarding your disappointment with Stannis: I wasn't particularly happy with the casting of Stephen Dillane (I think Stannis should be a larger and more formidable figure), but the character is <i>meant</i> ...</p><p>... <i>way</i> too old to be a squire—and, ultimately, a very satisfying revelation. I can see how Benioff and Weiss might feel that his character was peripheral enough during Season 1 ...</p><p>... <i>don't kill him</i>.</p><p>But I've gone on too long already. Ross, I remember you hated that Season Two closing scene, too. What did you think of the opening episode of Season Three?</p><hr><p><b>Douthat:</b> First, a hearty Dothraki thank you (you don't want to know what that involves) to you both for having me along for a season's worth of obsessive <i>Thrones</i>ing.
Second, you remember correctly, Chris: I thought that the first season got better as it went along, and the second season (mostly) got worse. Coming to the show having read all of the books (or all of the books then extant, since <i>A Dance With Dragons</i> ...)</p><p>So that's the baggage I'm bringing to this season—well, that and the feeling of long-term dread, instilled in me by the disappointments of <i>A Dance With Dragons</i> ...</p><hr><p><a href="">Read all of <em>The Atlantic</em>'s <em>Game of Thrones</em> coverage.</a></p>

The "Game of Thrones" Roundtable: <em>Game of Thrones</em> Season 3 Premiere: All Talk, and What's Wrong With That? (March 31, 2013). Our roundtable on "Valar Dohaeris," the first episode of the HBO show's third season.
https://www.theatlantic.com/feed/author/game-thrones-roundtable/
A few weeks back I wrote about the barcode encoding solution that's now available for use with XML Publisher for advanced barcode encoding calculations prior to applying the bar code font to the output. I found some time to take a look at some of the 2D barcodes and their requirements for encoding and rendering. Once you have the encoding class set up in your environment, it's pretty simple to add the 2D barcode encoding code. XML Publisher does not yet license fonts for ... they're coming, although no 2D barcodes. So I have been working with the PDF417 and Data Matrix fonts available from IDAutomation; along with the fonts, they also provide neat Java classes to do all the encoding you need. It's just a case of making their classes available in your environment and calling them from your encoding class. Then, with a neat trick in your RTF template, et voila, you can have funky 2D barcodes in your output. You'll need to license the fonts from the vendor; in this example I worked with the sample fonts available from the IDAutomation site.

PDF417 Font - - you'll get a zip file with the font and java class plus a bunch of other useful integration tools. All you need is the PDF417Encoder.class and the IDAutomationPDF417nX.ttf font from the AdditionalFonts.zip, where 'X' is the X-to-Y ratio of the font shape; 3 is the standard. This is explained better on the web site. Now you need to add the encoding code to your encoding class; take a look at our entry from a while back on Advanced Barcode Support. In the encoder class we created, we import the font vendor's encoding class and add an entry for the pdf417 font:

import IDautomationPDFE.PDF417Encoder;
...
ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));

and create a pdf417 method that just calls the class provided by the font vendor:

public static final String pdf417(String DataToEncode) {
    PDF417Encoder pdfe = new PDF417Encoder();
    return pdfe.fontEncode(DataToEncode);
}

Compile and deploy to the server as normal. Install your font into the <<WINDOWS_HOME>>/fonts directory and fire up MS Word. ... where XMLPBarVendor is the name of our encoder. (b) Mark the formfield or text with the PDF417 font. Now save your template and deploy. When you run the template, the data will be encoded and the font applied. I have uploaded a sample encoding class and template to help you out. The same process can be applied to the Data Matrix barcode to generate that output. We had a customer test the result of the PDF417 and Data Matrix barcodes, and their 2D barcode hardware read the resulting barcodes perfectly.

Hi Bsaha, the IDAutomation font should be fine ... we have other customers using them successfully. You can send me your template, font and sample data to tim.dexterAToracleDOTcom. Tim

I need to use the pdf417 font on dispatch labels, but I have not got it working properly. The encoder is invoked, but the output of the 2D barcode is not correct. There is horizontal white space in the code. I have already logged two service requests (last one: 3-1075385333) regarding this. We are using the code from IDAutomation. Where and how should I format the output? Best regards, Lüüli

Thanks for sharing.

Hi Tim, I am trying to use the IDAutomationHC39M font. It shows up, but it is very large and I cannot adjust it down, and our scanner won't pick it up. Any suggestions?
Thanks, Arlene

The font size used in the RTF template should be respected in the final output. It might be worth creating a new template with just the bar code so you can test it. Something may have gotten corrupted in the template.
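The encoder-registration pattern the post describes (a map from format name to a reflected Method, plus a static pdf417 method that wraps the vendor's PDF417Encoder.fontEncode() call) can be sketched as follows. This is a minimal, self-contained illustration, not the actual XML Publisher encoder class: the BarcodeEncoders class name is mine, and the pdf417 body substitutes a plain string tag for the real IDAutomation call so the dispatch path can run without the vendor JAR on the classpath.

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class BarcodeEncoders {
    // Registry of encoder methods keyed by format name, mirroring the
    // ENCODERS.put("pdf417", ...) reflection pattern shown in the post.
    static final Map<String, Method> ENCODERS = new HashMap<>();

    static {
        try {
            ENCODERS.put("pdf417",
                    BarcodeEncoders.class.getMethod("pdf417", String.class));
        } catch (NoSuchMethodException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Stand-in for the vendor call. The real method would be:
    //   return new PDF417Encoder().fontEncode(dataToEncode);
    // Here we just tag the data so the dispatch path is exercisable.
    public static String pdf417(String dataToEncode) {
        return "[pdf417]" + dataToEncode;
    }

    // Look up the method registered for a format and invoke it on the data.
    public static String encode(String format, String data) throws Exception {
        Method m = ENCODERS.get(format);
        if (m == null) {
            throw new IllegalArgumentException("No encoder registered: " + format);
        }
        return (String) m.invoke(null, data); // static method, null receiver
    }

    public static void main(String[] args) throws Exception {
        System.out.println(encode("pdf417", "INV-12345")); // prints [pdf417]INV-12345
    }
}
```

In the real class, pdf417 would delegate to the vendor's PDF417Encoder, and XML Publisher would invoke the registered method by name when formatting the marked field, before the PDF417 font is applied to the encoded string.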
https://blogs.oracle.com/xmlpublisher/2d-barcodes-cracked
A widget that displays data in a table. More... #include <Wt/Ext/TableView> A widget that displays data in a table. This class is an MVC view widget, which works in conjunction with a WAbstractItemModel for the data. The model may be set (and changed) using setModel(). The widget may be configured to allow the user to hide or resize columns, sort on column data, or reorder columns using drag&drop. By default, the table is not editable. Use setEditor() to specify a form field that may be used for inline editing for a particular column. Changes are then reflected in the model(). The table supports single and multiple selection modes, that work on a row-level, or cell-level. The latter option is enforced when the table is editable. By default, the data of the model is stored client-side, but this may be changed using setDataLocation() to be server-side. The latter option allows, in conjunction with a paging tool bar (see createPagingToolBar()) to support viewing (and editing) of large data sets. Although TableView inherits from Container (through Panel), specifying a layout for adding or removing widgets is not supported. The Panel methods to specify tool bars, titles, and buttons are however supported. A TableView has the table.x-grid3-row-table style classes. Create a new table view. You should specify a model using setModel(WAbstractItemModel *). Return if rows are rendered with alternating colors. Signal emitted when a cell is clicked. The signal arguments are row and column of the cell that is clicked. Clear the current selection. Return the horizontal content alignment of a column. Return if columns are movable. Return the column width. Create a paging tool bar. Create a toolbar that provides paging controls for this table. You should configure the page size using setPageSize(int). Signal emitted when a new cell received focus. This signal is only emitted when selectionBehavior() is SelectItems. 
The four arguments are row, column, prevrow, prevcolumn which hold respectively the location of the new focussed cell, and the previously focussed cell. Values of -1 indicate 'no selection'. Return the index of the column currently selected. Return the index of the row currently selected. Create a date renderer for the given format. The result is a JavaScript function that renders WDate (or more precisely, Ext.Date) values according to the given format, for use in setRenderer() Allow a column to be hidden through its context menu. Hide a column. Return if a column is hidden. Return if a column may be hidden through its context menu. Return if a column is sortable. Signal emitted when the selection changes. Return the model. Return the page size. Refresh the widget. The refresh method is invoked when the locale is changed using WApplication::setLocale() or when the user hit the refresh button. The widget must actualize its contents in response. Reimplemented from Wt::Ext::Panel. Let the table view resize columns to fit their contents. By default, columns are sized using the column sizes that are provided. Using this method, this is changed to let columns expand to fit the entire table. By setting onResize, this is done also whenever the entire table or one of the columns is resized. The list of rows that are currently selected. This is the way to retrieve the list of currently selected rows when selectionBehavior() is SelectRows. This list is always empty when selectionBehavior() is SelectItems and you should use currentRow() and currentColumn() instead. Return the current selection behaviour. Return the current selection mode. Render rows with alternating colors. By defaults, all rows are rendered using the same color. Set the column which will auto-expand to take the remaining space. By default the last column will do that. Change the visibility of a column. Allow the user to move columns using drag and drop. 
Setting movable to true enables the user to move columns around by drag and drop. Note: this currently breaks the CellSelection mode to record the view column number, but not the data column number. Allow a column to be sorted by the user. Set the column width (in pixels) for a column. Give a cell focus. When selectionBehavior() is SelectRows, only the row argument is used, and the effect is to select a particular row. Even when selectionMode() is ExtendedSelection, this method will first clear the selection, and the result is that the given row,column will be the only selected cell. Configure the location of the data. By default, data is stored at the client, and therefore entirely transmitted when rendering the table for the first time. Alternatively, the data may be kept at the server. Unless a paging tool bar is configured however, this will still cause the entire table to be transmitted anyway, after the table is rendered. When a paging tool bar is configured, only a single page of data is displayed, and transmitted, giving the best performance for big data sets. Configure an editor for the given column. Sets an inline editor that will be used to edit values in this column. The edited value will be reflected in the data model. When configuring an editor, the selectionBehaviour() is set to SelectItems mode. Configure if the row under the mouse will be highlighted. By default, the row under the mouse is not highlighted. Specify the model. You can change the model at any time, with the constraint that you should keep the same column configuration. You may also reset the same model. This will result in retransmission of the model from scratch. In some cases, this could result in higher performance when you have removed many rows or modified a lot of data. Configure a page size to browse the data page by page. By setting a pageSize that is different from -1, the table view will display only single pages of the whole data set.
You should probably add a paging tool bar to allow the user to scroll through the pages. Configure a custom renderer for the given column. Sets a JavaScript function to render values in the given column. The JavaScript function takes one argument (the value), which has a type that corresponds to the C++ type: An example of rendererJS for numerical data, which renders positive values in green and negative values in red could be: Set the selection behaviour. The selection behavior defines the unit of selection. The selection behavior also determines the set of methods that must be used to inspect the current selection. You may either: Set the selection mode. The selection mode determines if no, only one, or multiple items may be selected. When selectionBehavior() is SelectItems, ExtendedSelection is not supported. Show a column.
http://webtoolkit.eu/wt/doc/reference/html/classWt_1_1Ext_1_1TableView.html
CC-MAIN-2014-10
refinedweb
1,115
60.11
On Tue, Jun 18, 2002 at 02:24:59AM -0400, Paul Kienzle wrote:
> On Fri, Jun 07, 2002 at 02:25:17PM -0500, Marco Boni wrote:
<snip>
> For example, in hist:
>
>   for i = 1:n-1
>     cutoff (i) = (x (i) + x (i + 1)) / 2;
>   endfor
>
> is simply translated into
>
>   cutoff = ( x(1:n-1) + x(2:n) ) / 2;
>
> The next loop in hist.m is more tricky:
>
>   freq(1) = sum(y < cutoff (1));
>   for i = 2:n-1
>     freq (i) = sum (y >= cutoff (i-1) & y < cutoff (i));
>   endfor
>   freq(n) = sum (y >= cutoff (n-1));
>
> You can get part way there with sort:
>
>   [s, idx] = sort ( [cutoff(:); y(:)] );
>   pt = [0; idx > n; 0];
>
> This gives you a sequence of 0's and 1's where the zeros represent
> bin boundaries and the ones represent bin contents. Because sort
> is 'stable' all y values which are identical to an x value will
> appear after the x value, so the semi-open [x-1, x) logic will
> be preserved. The leading and trailing zeros capture all the y's
> which are not otherwise in a bin.
>
> Using cumulative sum, you can turn the ones into frequency counts
>
>   chist = cumsum(pt);
>
> But you only need the counts at the bin boundaries:
>
>   chist = s(find(diff(chist) == 0));
>
> Now you have a 'cumulative histogram'. Differentiate it and you
> should have the histogram:
>
>   freq = diff(chist);

Oops! This will need to be:

  freq = [chist(1); diff(chist) ]

> I may have messed up something along the way, but hopefully you get
> the idea. Could you fix it, test it and submit it to bug-octave?
>
> Since you are dealing with really large numbers of bins (up to 2^20 if
> I'm reading your code correctly), fixing hist should address most of
> the speed problems.

I was bothered by having to compute with all those empty bins.
Here is a version for n equal sized bins which is independent of the number of bins:

bins = zeros(1,n);
q = sort(y(:).');
L = length(q);
if (L == 0)
  return;
elseif (q(1) == q(L))
  bins(n) = L;
  return;
endif
q = (q - q(1))/(q(L)-q(1))/(1+eps);  # set y-range to [0,1)
q = fix(q*n);                        # split into n bins
same = [ q == [q(2:L),-Inf] ];       # true if neighbours are in the same bin
q = q(~same);                        # q lists the 'active' bins (0-origin)
f = cumsum(same);                    # cumulative histogram
f = f(~same);
f = [f(1), diff(f)] + 1;             # cumulative histogram -> histogram
# we need to add 1 since we did not count the point at the boundary between
# histograms (it was turned into zero)
# distribute f to the active bins, leaving the remaining bins empty
bins(q+1) = f;

Paul Kienzle
address@hidden
-------------------------------------------------------------
Octave is freely available under the terms of the GNU GPL.
Octave's home on the web:
How to fund new projects:
Subscription information:
-------------------------------------------------------------
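For readers more at home in Python, here is a minimal pure-Python sketch of the same equal-width binning arithmetic (the function name hist_counts and the loop-based counting are my own; the scaling to [0, 1) and the all-values-equal case going into the last bin mirror the Octave snippet above):

```python
def hist_counts(y, n):
    """Count the values of y into n equal-width bins spanning
    [min(y), max(y)], mirroring the Octave snippet above."""
    bins = [0] * n
    if not y:
        return bins
    lo, hi = min(y), max(y)
    if lo == hi:
        bins[n - 1] = len(y)   # degenerate range: everything in the last bin
        return bins
    for v in y:
        # scale to [0, 1), then split into n bins, as in the Octave code
        k = int((v - lo) / (hi - lo) / (1 + 1e-12) * n)
        bins[k] += 1
    return bins

print(hist_counts([0, 1, 2, 3], 2))   # [2, 2]
```

Unlike the Octave version, this sketch loops over every point instead of vectorizing, so it trades the mailing list's speed concern for readability.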
https://lists.gnu.org/archive/html/help-octave/2002-06/msg00037.html
CC-MAIN-2021-25
refinedweb
482
64.27
Never too late for Santa! The project, based on M5Stack devices (Grey Core + Faces + RFID), is a WiFi controller for DJI Tello drones. (some specs here) Below an extract. Yes, there are a lot of smartphone apps and PC software for this, but it was nice for me to try to control the drone through a very compact unit, for basic movements only. I'm currently studying the Tello SDK to work with Python and OpenCV for face detection (and, why not, for detecting whether a mask is worn or not in this pandemic era), and my idea is to separate movement control from video recognition. In my idea (as a joke) Santa can use a drone to give the gifts away, and to access the drone he needs a personal ID tag (RFID). So after the authorization, the sequence starts (with the state underlined by changing the RGB color at the base of the Faces module) to take off and perform some movements before landing. First of all, you can retrieve info on the Tello SDK 1.3 (not for Tello EDU) here. The Tello acts as a Soft-AP WiFi access point (its IP is 192.168.10.1) and the M5 Core takes the IP 192.168.10.2. All the commands must be sent over UDP. There are interesting features to retrieve information such as battery level, barometric pressure, altitude, distance etc., but they require running a webserver to listen for the answer on a port other than 8889. Using Python it is quite easy to find a lot of projects based on this SDK, but using MicroPython and ESP32 is rare; I found only one great example on (I would like to thank him) that inspired me, and I decided to try to make a porting to M5Stack devices. This is my fork of his code, which I'm evolving. The first step was to add a working lib (with Tello commands) to the M5 Grey structure (I'm using UiFlow firmware v1.6.3). After some failures, I succeeded using a trick with the Thonny IDE: simply rename the lib "__init__.py" to "tello.py" and transfer it to the M5 Core at root level (not /apps!) of the UiFlow structure. The main program (I simply named it Tellorun.py), once tested, has been transferred to the /apps path of the M5 Core.
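The UDP command protocol described above can be sketched in a few lines of plain Python. This is a hypothetical helper of my own, not code from the project; the drone's 192.168.10.1:8889 address comes from the Tello SDK docs:

```python
import socket

TELLO_ADDR = ('192.168.10.1', 8889)  # Tello Soft-AP address, per the SDK docs

def send_command(sock, cmd, addr=TELLO_ADDR):
    """Send one Tello SDK text command as a UDP datagram.

    Returns the number of bytes sent; replies (e.g. 'ok' or a battery
    percentage) arrive as datagrams on the same socket.
    """
    return sock.sendto(cmd.encode('ascii'), addr)

# Typical session (requires a powered-on Tello and being on its WiFi):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_command(sock, 'command')   # enter SDK mode
#   send_command(sock, 'battery?')  # query a read-back value
#   reply, _ = sock.recvfrom(1518)
```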
Once the M5 Core starts, you can select Tellorun.py from the apps menu; after a few seconds it shows a picture and the code waits for an RFID tag near the sensor to start the sequence to join the drone's WiFi (remember to switch on the Tello :-) ) Enjoy the demo! This is part of the code in Tellorun.py:

from m5stack import *
from m5ui import *
from uiflow import *
import tello
import time
import network
import wifiCfg
import face

wifi = network.WLAN(network.STA_IF)
wifi.active(True)

setScreenColor(0xe9efd9)
image1 = M5Img(2, 12, "res/santa_m5.jpg", True)
label0 = M5TextBox(120, 140, "Status:", lcd.FONT_DejaVu24, 0xFF0000, rotate=0)
label1 = M5TextBox(90, 190, "", lcd.FONT_DejaVu24, 0xFF0000, rotate=0)
label2 = M5TextBox(50, 18, "SANTA FLY DRONE", lcd.FONT_DejaVu24, 0xFFFFFF, rotate=0)
label1.setColor(0xFF0000)
label1.setText('Preparing..')
time.sleep(6)
rgb.setColorAll(0x000099)

# Wait for a RFID tag near sensor
faces_rfid = face.get(face.RFID)
while True:
    if faces_rfid.isCardOn():
        label1.setText('Fly Authorized..')
        break
    wait_ms(500)
    rgb.setColorAll(0xFFFF99)
    wait_ms(500)
    rgb.setColorAll(0x00FF99)

# RFID tag recognized, start WIFI connection to Tello drone (light on before!)
rgb.setColorAll(0xFFFF00)
wifiCfg.doConnect('TELLO-xxxxx', '')  # Check your Tello ssid

# wait for wifi connection and retrieve ip address from drone
attempt = 1
while not wifi.isconnected():
    label1.setText('Connecting...' + str(attempt))
    attempt += 1
    time.sleep(0.5)

# link to drone made, start to send commands
drone = tello.Tello('192.168.10.2', 8888)
drone.command('command')
label1.setText('Initialized')
rgb.setColorAll(0xff0000)
time.sleep(6)
drone.takeoff()
label1.setText('Takeoff')
time.sleep(7)
label1.setText('Rotate LEFT')
drone.rotate_ccw('30')
time.sleep(4)
drone.land()
label1.setText('Landing')
time.sleep(2)
label1.setText('Landed!')
rgb.setColorAll(0x000099)
time.sleep(3)
https://www.hackster.io/gperrella/m5stack-christmas-tello-drone-for-santa-abcbaf
CC-MAIN-2022-27
refinedweb
660
63.29
I have a base class called LabFileBase. I have constructed a List and have added my derived classes to it. I want to search the List for a particular object based on a key I've defined. The problem I'm having is how do you downcast in a LINQ expression? Here is some sample code:

public abstract class LabFileBase
{
}

public class Sample1 : LabFileBase
{
    public string ID { get; set; }
    public string Name { get; set; }
    //..
}

public class Sample2 : LabFileBase
{
    public string ID { get; set; }
    public string Name { get; set; }
    //..
}

I want to search for a particular Sample2 type, but I need to downcast if I use a regular foreach loop like this:

foreach (var s in processedFiles) // processedFiles is a List<LabFileBase>
{
    if (s is Sample2)
    {
        var found = s as Sample2;
        if (found.ID == ID && found.Name == "Thing I'm looking for")
        {
            // do extra work
        }
    }
}

I would much rather have something like this:

var result = processedFiles.Where(s => s.ID == SomeID && s.Name == SomeName);

Is this possible and what syntactic punctuation is involved, or is the foreach my only option because of the different objects? Sample1 and Sample2 only have ID and Name as the same fields.

EDIT: Thanks to all for your support and suggestions, I've entered almost everything into the backlog to implement.
Each transforms a sequence into a sequence of the specified type: Cast assumes that each element is the right type (and throws an exception if that's not the case); OfType works more like the C# as operator, only returning elements which happen to be the right type and ignoring others.

Interesting question. In addition to Jon Skeet's note (which is better than this option) you could try a more manual alternative: add an additional property containing an identifier for the type of object it is.

public abstract class LabFileBase
{
    public abstract string LabFileObjectType { get; }
    public string ID { get; set; }
    public string Name { get; set; }
}

public class Sample1 : LabFileBase
{
    public override string LabFileObjectType { get { return "Sample1"; } }
    //..
}

public class Sample2 : LabFileBase
{
    public override string LabFileObjectType { get { return "Sample2"; } }
    //..
}

Then you could query against that property as well:

var result = processedFiles.Where(s => s.ID == SomeID && s.Name == SomeName
                                       && s.LabFileObjectType == "Sample2");
http://www.dlxedu.com/askdetail/3/3765a09a3a41a539f3c9e349ccbb7b6a.html
CC-MAIN-2018-43
refinedweb
462
56.76
In order to illustrate that the same task can be carried out using different programming languages, here is my loop program in Scratch. And in Python:

import turtle
import time

for x in range(0, 72):
    turtle.left(5)
    for n in range(0, 5):
        turtle.forward(150)
        turtle.left(90)

time.sleep(60)

As mentioned before, the time module is only included so there is a delay before the canvas is removed. And Ruby (this works in KidsRuby at least):

Turtle.draw do
  background white
  pencolor black
  72.times do
    4.times do
      forward(100)
      turnright(90)
    end
    turnright(5)
  end
end

So you can do the same task in a very similar way in different languages.
http://zleap.net/python-vs-scratch/
CC-MAIN-2019-04
refinedweb
117
67.15
(language) (language.constructs) The use of all these constructs will be covered in the next several chapters. (language.placeholders.syntax) Placeholders follow the same syntax rules as Python variables except that they are preceded by {$} (the short form) or enclosed in {${}} (the long form). Examples: $var ${var} $var2.abc['def']('gh', $subplaceholder, 2) ${var2.abc['def']('gh', $subplaceholder, 2)} We recommend {$})] { Note:} Advanced users can change the delimiters to anything they want via the {#compiler} directive. { Note 2:} The long form can be used only with top-level placeholders, not in expressions. See section language.placeholders.positions for an elaboration on this. To reiterate Python’s rules, placeholders consist of one or more identifiers separated by periods. Each identifier must start with a letter or an underscore, and the subsequent characters must be letters, digits or underscores. Any identifier may be followed by arguments enclosed in () and/or keys/subscripts in []. Identifiers are case sensitive. {$var} does not equal {$Var} or {$vAr} or {$VAR}. Arguments inside ()} and {False}. Examples: $hex($myVar) $func($arg=1234) $func2($*args, $**kw) $func3(3.14159, $arg2, None, True) $myList[$mySubscript] Trailing periods are ignored. Cheetah will recognize that the placeholder name in {$varName.} is {varName}, and the period will be left alone in the template output. The syntax {${placeholderName, arg1=”val1”}} passes arguments to the output filter (see {#filter}, section output.filter. The braces and comma are required in this case. It’s conventional to omit the {$} before the keyword arguments (i.e. {arg1}) in this case. Cheetah ignores all dollar signs ({$}) $$ (language.placeholders.positions)} bottles of beer on the wall. {$count} bottles of beer! Take one down, pass it around. {$after} bottles. ... (language.placeholders.dollar-signs) {$}} is a regular {#set} variable. {$range} is a Python built-in function. 
But {x} is a scratch variable internal to the list comprehension: if you type {$x}, Cheetah will miscompile it. (language.namemapper) ‘self’’ {‘useNameMapper’} compiler setting. But it’s doubtful you’d ever want to turn it off. (language.namemapper.example) ‘customers’ method that returns a dictionary of all the customer objects. Each customer object has an ‘address’. (language.namemapper.dict) NameMapper syntax allows access to dictionary items with the same dotted notation used to access object attributes in Python. This aspect of NameMapper syntax is known as ‘Unified Dotted Notation’. For example, with Cheetah it is possible to write: $customers()['kerr'].address() --OR-- $customers().kerr.address() where the second form is in NameMapper syntax. This works only with dictionary keys that also happen to be valid Python identifiers. (language.namemapper.autocalling) Cheetah automatically detects functions and methods in Cheetah $variables and calls them if the parentheses have been left off. Our previous example can be further simplified to: $customers.kerr.address As another example, if ‘a’ is an object, ‘b’ is a method $a.b is equivalent to $a.b() If b returns a dictionary, then following variations are possible $a.b.c --OR-- $a.b().c --OR-- $a.b()['c'] where ‘c’ is a key in the dictionary that a.b() returns. Further notes: (language.searchList) When Cheetah maps a variable name in a template to a Python value, it searches several namespaces in order: The first matching name found is used. Remember, these namespaces apply only to the { first} identifier after the {$}. In a placeholder like {$a.b}, only ‘a’ is looked up in the searchList and other namespaces. ‘b’ is looked up only inside ‘a’. ‘self’: {$myAttr}. However, use the ‘self’ ‘myObject’ in the searchList, you { cannot} look up {$myObject}! You can look up only the attributes/keys { inside} ‘myObject’. 
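The unified dotted notation and autocalling described above can be approximated in a few lines of plain Python. This is an illustrative sketch of the idea only, not Cheetah's actual NameMapper implementation; the function name lookup and the sample classes are my own:

```python
def lookup(obj, dotted):
    """Resolve a dotted name Cheetah-style: try attribute access,
    fall back to dictionary keys ('unified dotted notation'), and
    call any callable found along the way ('autocalling')."""
    for part in dotted.split('.'):
        if hasattr(obj, part):
            obj = getattr(obj, part)
        else:
            obj = obj[part]        # dictionary key access
        if callable(obj):
            obj = obj()            # autocall functions and methods
    return obj

class Customer:
    def address(self):
        return "123 Main St"

class Store:
    def customers(self):
        return {'kerr': Customer()}

print(lookup(Store(), 'customers.kerr.address'))  # 123 Main St
```

This shows why $customers.kerr.address, $customers().kerr.address and $customers()['kerr'].address() can all resolve to the same value.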
Earlier versions of Cheetah did not allow you to override Python builtin names, but this was fixed in Cheetah 0.9.15. If your template will be used as a Webware servlet, do not override methods ‘name’ and ‘log’ in the {Template} instance or it will interfere with Webware’s logging. However, it { is} OK to use those variables in a higher namespace, since Webware doesn’t know about Cheetah namespaces. (language.namemapper.missing) If NameMapper can not find a Python value for a Cheetah variable name, it will raise the NameMapper.NotFound exception. You can use the {#errorCatcher} directive (section errorHandling.errorCatcher) or { errorCatcher} Template constructor argument (section howWorks.constructing) to specify an alternate behaviour. BUT BE AWARE THAT errorCatcher IS ONLY INTENDED FOR DEBUGGING! To provide a default value for a placeholder, write it like this: {$getVar(‘varName’, ‘default value’)}. If you don’t specify a default and the variable is missing, {NameMapper.NotFound} will be raised. (language.directives.syntax)] The expression is ignored, so it’s essentially a comment. (language.directives.closures) Directive tags can be closed explicitly with {#}, or implicitly with the end of the line if you’re feeling lazy. #block testBlock # Text in the body of the block directive #end block testBlock # is identical to: output.slurp).
http://packages.python.org/Cheetah/users_guide/language.html
crawl-003
refinedweb
810
52.26
Read Microsoft Access Database in C# (Jan 01, 2000). How to connect and read data from a Microsoft Access (.mdb) database using ADO.NET.
Add, Edit, Delete, View data using ADO+ (Jan 04, 2001). Free-to-use tutorials on using ADO+ on Microsoft's .NET platform with C# as the programming language.
Get a database table's properties (Jan 22, 2001). Get a table's properties such as column names, types etc. using DataColumn and DataTable.
ADO.NET Database Explorer with Query Analyzer: Part 3 (Jan 24, 2001). This is part three of the Database Explorer article. This part adds some more valuable functionality.
Effective C#: Working with Strings (Mar 12, 2001). Using strings might degrade the performance of your application. This article explains what precautions you should take when you are going to use strings in your application.
Connect to an Oracle Database (May 03, 2001). This sample code shows you how to connect to an Oracle database using C#.
SQLDataReader Vs. DataSet (May 11, 2001). To compare and contrast SQLDataReader and SQLDataSetCommand.
Transaction Web Site (May 16, 2001). When I started working with this technology I faced a problem dealing with sessions, as in any transaction- or database-oriented portal this is a must-have requirement to deal with...
Basic Database operations using ADO.NET (Aug 21, 2001). I found very interesting database features incorporated into .NET as ADO.NET.
Passing Const Parameters to Functions in C#/C++/VB Compared (Aug 30, 2001). Parameter passing to a function is extremely important in all programming languages. The desire to keep the passed parameter intact forced compiler designers to add various keywords to the programming languages.
Accessing Oracle Database (Sep 21, 2001). This source code shows you how to connect to an Oracle database and do operations such as select, insert, update and delete.
Oracle Database Connectivity (Sep 26, 2001). This is a GUI-based data entry application which shows how to add, modify and delete records using an Oracle database.
File Uploading using XML (Oct 09, 2001). This application is in ASP.NET and will allow you to upload images to an XML file which serves as the database for the uploaded files.
Network Programming in C# - Part 1 (Nov 12, 2001). The .NET framework provides two namespaces, System.Net and System.Net.Sockets, for network programming.
Debug Troelsen's .NET books (1): IEnumerator (Dec 10, 2001). This article examines a code problem in chapter 4 (for C#)/chapter 5 (for VB.NET) of the two .NET books.
Observer and .NET event delegates (Dec 17, 2001). The purpose of this article is to introduce the observer pattern and compare it to .NET event delegate handling of notifications.
Developing a Banking System using VS.NET and Windows Forms (Dec 26, 2001). This article also focuses on deploying multiple Windows Forms and shows how to navigate between them.
Bridge Patterns in C# (Jan 17, 2002). The Bridge pattern is commonly known as the Handle/Body idiom in the C++ community. This pattern is used for decoupling an abstraction from its implementation so that the two can vary independently.
Working with Strings in VB.NET (Feb 05, 2002). This article is the VB.NET version of Working with Strings in .NET using C#.
A Shaped Windows Forms Application with Variable Opacity (Mar 11, 2002). This application demonstrates two simple techniques that beginners might find useful in developing creative new looks for Windows applications.
Creating a SQL Server Database Programmatically using SQLDMO (Mar 12, 2002). The attached source code creates a SQL Server database programmatically using SQLDMO.
Boxing and Performance of Collections (Mar 14, 2002). In this article, I will compare some performance issues of value and reference types during boxing and unboxing operations.
Sending Windows Messages in C# (Mar 14, 2002). This sample code shows how to send Windows messages between two forms using C#.
Creating Graphics with XML (Apr 09, 2002). This article shows how to create images on the fly and uses XML to specify the properties of the images.
XML Schema Validator (Apr 16, 2002). The XML Schema Validator checks if a given XML document is well formed and has a valid schema model.
ADO.NET From Windows DNA's Perspective (Jun 12, 2002). Windows DNA is a framework to build multi-tier, high-performance, scalable distributed applications over the network. This article takes a Windows DNA perspective and compares how ADO.NET fits in Windows DNA.
Understanding Destructors in C# (Jun 18, 2002). This article is about understanding the working concept of destructors in C#. As you read this article you will understand how different C# destructors are when compared to C++ destructors.
Communication Between Two Forms: Part II (Jun 24, 2002). Last time I wrote about the possibility of sending a string from a textbox in one form to another.
Communication Between Two Forms (Jun 24, 2002). The aim of the program is to send a message between different forms.
Editable GridView Control in C# and .NET - Part III: Printing the GridView (Jun 24, 2002). In our last two articles, we talked about how to create an editable GridView and how to make it persistent in XML.
Writing a Generic Data Access Component (Jul 17, 2002). OK, I've received a couple of emails from people asking how they can use a common data provider to access various types of data sources without losing the power and flexibility of native data provider libraries.
RSS Feed Project in .NET (Aug 19, 2002). The RSS Feed project is aimed at demonstrating writing C# code to consume RSS feeds from the internet and putting the data from these RSS feeds into a database for you to use in your own applications.
DataGrid Customization Part III: Implementing Search Feature in a DataBound Grid (Aug 21, 2002). In this article, I will show you how to exchange two DataGrid columns by dragging and dropping.
About Compare-the-Schemas-of-two-Databases.
http://www.c-sharpcorner.com/tags/Compare-the-Schemas-of-two-Databases
CC-MAIN-2016-30
refinedweb
1,150
59.3
A Django application to manage user datebooks

Project description

Introduction

Django datebook is.. a datebook ! This aims to manage user datebooks by months. A datebook contains day entries where you can add details, start and stop working hours, vacation, etc.. This does not aim to reproduce some advanced apps like Google Calendar or alike; datebook is simple and will have a particular workflow for our needs at Emencia.

Links

- Download its PyPI package;
- Clone it on its GitHub repository;

Requires

- Django >= 1.5;
- autobreadcrumbs >= 1.0;
- django-braces >= 1.2.0,<1.4;
- crispy-forms-foundation >= 0.3.5;
- Arrow;

Optionally

- South to perform database migrations for next releases;
- If you want to use the shipped text markup integration :
  - rstview >= 0.2;
  - Django-CodeMirror >= 0.9.7;

Install

Install it from PyPI:

pip install django-datebook

Add it to your installed apps in settings :

INSTALLED_APPS = (
    ...
    'autobreadcrumbs',
    'datebook',
    ...
)

Add its settings (into your project settings):

from datebook.settings import *

(Also you can override some of its settings, see datebook.settings for more details).

Finally mount its urls in your main urls.py :

urlpatterns = patterns('',
    ...
    (r'^datebook/', include('datebook.urls', namespace='datebook')),
    ...
)

Text markup

The default behavior configured in settings is to not use any markup syntax. But if you want, you can configure some settings to use a markup syntax renderer and a form field to use a specific editor.
This can be done with the following settings:

# Text markup renderer
DATEBOOK_TEXT_MARKUP_RENDER = None  # Default, no renderer
# Field helper for texts in forms
DATEBOOK_TEXT_FIELD_HELPER_PATH = None  # Default, just a CharField
# Template to init some Javascript for texts in forms
DATEBOOK_TEXT_FIELD_JS_TEMPLATE = None  # Default, no JS template
# Validator helper for texts in forms
DATEBOOK_TEXT_VALIDATOR_HELPER_PATH = None  # Default, no markup validation

These are the default values in the datebook settings.

Explanations

- DATEBOOK_TEXT_FIELD_HELPER_PATH: a function that will be used to define a form field to use for text. Signature is get_text_field(form_instance, **kwargs) where:
  - form_instance is the Form instance where it will be used from;
  - kwargs is a dict containing all default named arguments to give to the field. These default arguments are label for the field label name and required that is True (you should never change this);
  This should return an instantiated form field that must act as a CharField.
- DATEBOOK_TEXT_VALIDATOR_HELPER_PATH: a function that will be used to clean the value of the text form field. Signature is clean_restructuredtext(form_instance, content) where:
  - form_instance is the Form instance where it will be used from;
  - content is the value to validate;
  Acting like a Django form field cleaner method, this should return the cleaned value and raise a validation error if needed.
- DATEBOOK_TEXT_MARKUP_RENDER_TEMPLATE: a template to include to render the text value with some markup syntax. It will have access to the page context with an additional value named content that will be the text to render;
- DATEBOOK_TEXT_FIELD_JS_TEMPLATE: a template to include with forms when your custom form field requires some Javascript to initialize it.
It will have access to the page context with an additional value named field that will be the targeted form field.

All these settings are only used with the forms and templates managing the Datebook.notes and DayBase.content model attributes.

Example

These are the settings to use the shipped markup syntax renderer and editor, disabled by default but easy to enable in your settings:

# Field helper for texts in forms
DATEBOOK_TEXT_FIELD_HELPER_PATH = "datebook.markup.get_text_field"  # Use DjangoCodeMirror
# Validator helper for texts in forms
DATEBOOK_TEXT_VALIDATOR_HELPER_PATH = "datebook.markup.clean_restructuredtext"  # Validation for RST syntax (with Rstview)
# Template to init some Javascript for texts in forms
DATEBOOK_TEXT_FIELD_JS_TEMPLATE = "datebook/markup/_text_field_djangocodemirror_js.html"  # Use DjangoCodeMirror
# Text markup renderer
DATEBOOK_TEXT_MARKUP_RENDER_TEMPLATE = "datebook/markup/_text_markup_render.html"  # Use Rstview renderer

Read their source code to see how they work in detail.

Warning: before enabling these settings you must install rstview and Django-CodeMirror; see the optional requirements to have the right versions to install.

Also, future days (days that are greater than or equal to the current day) are not used to calculate month totals (worked hours, overtime and vacations).

Day models

Often you need to repeatedly fill your days with approximately the same content, and to avoid this there are Day models. You can create a Day model from an existing day in your calendars; its content will be saved as a model and then you can use it to fill any other days in your calendar. You can have multiple models, but they always belong to a single user; models are not shareable with other users.
To fill days with a model, go into a month calendar, open the models menu, select the days to fill, select the model to use and submit. Existing days will be overwritten with the model contents, and empty selected days will be created with the model contents. By default, filling days does not use the model's content text; use the checkbox in the assignment form to include it.

Credits

- Collaborators
  - slothyrulez for the Spanish translation;
- For the "Sun umbrella" icon in the webfont
  - Icon made by Freepik, licensed under CC BY 3.0.
- Other icons in the webfont
  - Come from various sets on IcoMoon.
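Going back to the text field helper settings: the documented contract for DATEBOOK_TEXT_FIELD_HELPER_PATH can be sketched framework-free. This is only a sketch — the get_text_field(form_instance, **kwargs) signature and the default label/required arguments come from the documentation above, while StubCharField is an invented stand-in for a Django CharField so the example runs without Django:

```python
# Sketch of a DATEBOOK_TEXT_FIELD_HELPER_PATH target. Only the signature
# get_text_field(form_instance, **kwargs) and the default 'label'/'required'
# arguments come from the documentation; StubCharField is an invented
# stand-in for a real Django CharField.
class StubCharField:
    def __init__(self, label=None, required=True):
        self.label = label
        self.required = required


def get_text_field(form_instance, **kwargs):
    # kwargs carries the default named arguments ('label', 'required')
    # supplied by the form; the helper just builds and returns the field.
    return StubCharField(**kwargs)


field = get_text_field(None, label="Notes", required=True)
print(field.label, field.required)  # Notes True
```

A real helper would return an actual Django form field (for example a CharField with a custom widget) instead of the stub.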
https://pypi.org/project/django-datebook/0.9.3/
Ad hoc polymorphism support for Java programmers Barry is a member of the IBM Worldwide Accessibility Center, where he is part of a team that helps IBM make its products accessible to people with disabilities. He also serves as an Adjunct Assistant Professor of Computer Science at the University of Texas, Austin, and is a Sun Certified Java Programmer, Developer, and Architect. You can contact him at feigenbaus.ibm.com. Java is a strongly typed language, which means all variables must have a type and one can only call (through such variables) methods defined for that specific type. While this ensures that type errors are caught at compile time, it restricts certain types of method calls supported by other popular object-oriented languages such as Smalltalk and Python. The alternative approach is to have typeless variables (in Java this is the equivalent of declaring all variables to be of type java.lang.Object). With typeless variables, when a method is invoked, the runtime determines if the method is implemented by the receiving object. If it is, the method is executed; otherwise, a runtime error (similar to Java's java.lang.NoSuchMethodException) is generated. This approach to method invocation, which I call "ad hoc polymorphism," has several advantages: - Variables do not need to be typed. All variables can reference any type. The actual type can change at will during a program's execution. This allows great flexibility of programming. - Any class that implements the set of required methods can be used to implement a type. This means descriptions like Java's interface are not needed to describe sets of supported behaviors. Only classes are required to define types and implementations. If a method is defined, it is implicitly in the class's interface. - Multiple inheritance is not required. It is replaced simply by implementing all of the methods that are included in each desired superclass. This can be done without inheriting from any of the superclasses. 
In this article, I show how you can use the Java Reflection APIs to build a simple library that provides ad hoc polymorphism support for Java programmers. In the process, I present examples that compare and contrast this style of programming with Java's. For example, say you have several classes: Bird, Plane, Angel, and Leaf. In some sense, they can all fly. To be able to polymorphically create a fly method in Java, they would all need a common superclass or interface that defines a fly method. Because they have little in common, it would be difficult to create a reasonable class inheritance hierarchy to position the fly method in. But with ad hoc polymorphism, this is trivial: you simply define the fly method in each of these classes. Once this is done, you can invoke the fly method on any instance of these classes. Example 1(a) continues this example using these class definitions (in pseudocode), so that in Example 1(b), all of these method calls are valid.

Ad Hoc Polymorphism in Python

Using a more concrete example, consider the Python code in Listing One (excerpted from Reflect.py, which is available electronically; see "Resource Center," page 3), which defines five classes, Test1 through Test5.

- Test1 and Test2 both define method1, but are not related by inheritance.
- Test3 defines a different method2.
- Test4 inherits method1 and adds method2.
- Test5 overrides method1.

To avoid redundancy, Listing Two uses two Python features: lambda (or anonymous) functions, and first-class functions that can be passed as parameters. The testit method calls the supplied function. Listings Three through Eight are the main code. Listings Four through Eight use the testit method to demonstrate how methods are invoked. This results in the output:

    For value: This is a test string.

Next, invoke the find method (similar to Java's java.lang.String.indexOf) to search for a substring in a string (Listing Four), which demonstrates method dispatching similar to Java's.
(Output follows each segment of code.) Listing Five illustrates ad hoc polymorphism-based dispatching. Notice that only one (untyped) variable, t, is used for all references, regardless of the referenced object's type. For both Test1 and Test2, the existing method1 is called, and the missing method2 generates an error. Although there is no relationship between classes Test1 and Test2, method1 is successfully found and called. For Test3 in Listing Six, method1 generates an error and the existing method2 is called. Listing Seven is similar to Listing Six, except that for Test4 both existing methods are successfully called. For Test4, method1 is resolved by inheritance; Listing Seven shows that ad hoc polymorphism still respects inheritance. For Test5, method1 is overridden, while the missing method2 is not (Listing Eight). Whenever a default value is provided and the target method is not defined, the default is returned. This shows that ad hoc polymorphism still respects method overriding.

Ad Hoc Polymorphism in Java

Now that I've introduced the concept of ad hoc polymorphism from a pseudocode point of view and through a real language (Python) that supports it directly, let's look at adding this capability to Java. Consider the library of functions defined in ReflectionUtilities.java. All the methods are public. The call (for void methods) and invoke (for methods other than void) methods use ad hoc polymorphism. The ...Static forms invoke static methods; the others call instance methods. Although not shown in Listing Nine, all the call and invoke methods throw InvokeException. For zero-argument methods, the args parameter may be null or omitted. The def parameter provides a default value if the method cannot be found. (The curly braces "{" and "}" indicate an optional argument.) The canInvoke... and findMethod methods are used to test whether the target method can be called with the specified arguments.
These methods (Listing Ten) are used to avoid InvokeExceptions for undefined methods.

Using the Library

To illustrate how you can use this library with Java, I use the inner class definitions in Listing Eleven (the test code is contained in ReflectionUtilitiesTest.java) and the testit method definition in Listing Twelve. Listing Thirteen presents examples of invoking some String methods, while Listing Fourteen (as in the Python code) illustrates invoking methods on the sample Test... classes. The same comments apply.

What About Performance?

How does all this use of reflection perform? In general, it slows down execution, so this technique may not be appropriate for performance-critical code. Still, its flexibility can far outweigh the performance cost in most kinds of code, particularly user-interface code. You can see in Tables 1 and 2 that much of the lost time is in the method lookup. In the example code, I have used a simple linear search to find methods; as expected, this performs poorly. Many opportunities exist to cache found methods to significantly reduce this time. You can also look methods up in advance using findMethod, which can be especially helpful if a method is going to be used frequently, such as in a loop. I have included the results of some simple measurements of this library. These measurements were taken on an IBM ThinkPad T23 with a 1.3-GHz Intel Pentium III Mobile, using the Java 1.4.1_01 JRE. Each test performed two (nearly) identical actions per loop (so the time per action is half the time shown) and the loop was executed 100,000 times. From the tables, it is clear that the HotSpot optimizer can save a lot of execution time. I noticed that the Reflection APIs caused many garbage collections; these times are included. The number of garbage collections appears to be the same with or without the optimizations.
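The lookup-caching optimization described above can be sketched in modern Python 3 rather than Java. This is a sketch, not the article's library: getattr stands in for the Java Reflection lookup, and a dictionary keyed by (class, method name) plays the role of the method cache; the invoke-with-default behaviour mirrors the library's invoke(..., def) semantics.

```python
# Sketch of the caching idea discussed above, in Python 3: find_method
# caches lookups per (class, name), and invoke falls back to a default
# when the method is missing, mirroring the Java library's behaviour.
_method_cache = {}


def find_method(cls, name):
    key = (cls, name)
    if key not in _method_cache:
        # getattr honours inheritance, like the Java findMethod would
        _method_cache[key] = getattr(cls, name, None)
    return _method_cache[key]


def invoke(receiver, name, *args, default=None):
    method = find_method(type(receiver), name)
    if method is None:
        return default  # ad hoc dispatch: missing method -> default value
    return method(receiver, *args)


class Test1:
    def method1(self):
        return "Test1"


class Test4(Test1):
    def method2(self):
        return "Test4"


print(invoke(Test4(), "method1"))                     # Test1 (inherited)
print(invoke(Test1(), "method2", default="default"))  # default (missing)
```

Repeated calls with the same (class, name) pair skip the lookup entirely, which is exactly the saving the tables above attribute to caching.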
Conclusion

In this article, I've shown how a simple library can be defined that will provide ad hoc polymorphism support for Java programmers. I've provided some examples of its use, and I've also compared this library with the native ad hoc polymorphism support in the Python language. Except for some slightly less desirable syntax that Java exhibits, the support is identical.

References

- Java Reflection API tutorial ().
- Java Reflection API JavaDoc ( reflect/package-summary.html).
- Java Reflection API code samples ( samples/refl.html).
- Python ().
- Jython (Python running on a JVM) ().
- SmallTalk ().

DDJ

Listing One

    class Test1:
        def method1(self):
            return "Test1"
    class Test2:
        def method1(self):
            return "Test2"
    class Test3:
        def method2(self):
            return "Test3"
    class Test4(Test1):
        def method2(self):
            return "Test4"
    class Test5(Test1):
        def method1(self):
            return "Test5"

Listing Two

    def testit(name, receiver, func, default=None):
        o = None
        print
        print "trying", name
        try:
            o = func(receiver)
        except Exception, e:
            print "Exception:", e.__class__, '-', e
            if e.__class__ == AttributeError and default is not None:
                o = default
        print "result of", name, '=', o
        return o

Listing Three

    if __name__ == "__main__":
        s = "This is a test string"
        print "For value:", s

Listing Four

    testit('s.find("xxx")', s, lambda x: x.find("xxx"))
    testit('s.find("test")', s, lambda x: x.find("test"))
    testit('s.find("xxx","yyy")', s, lambda x: x.find("xxx", "yyy"))

Output:

    trying s.find("xxx")
    result of s.find("xxx") = -1
    trying s.find("test")
    result of s.find("test") = 10
    trying s.find("xxx","yyy")
    exception: exceptions.TypeError - 2nd arg can't be coerced to int
    result of s.find("xxx","yyy") = None

Listing Five

    t = Test1()
    testit('t.method1()', t, lambda x: x.method1())
    testit('t.method2()', t, lambda x: x.method2())

Output:

    trying t.method1()
    result of t.method1() = Test1
    trying t.method2()
    Exception: exceptions.AttributeError - method2
    result of t.method2() = None

    t = Test2()
    testit('t.method1()', t, lambda x: x.method1())
    testit('t.method2()', t, lambda x: x.method2())

Output:

    trying t.method1()
    result of t.method1() = Test2
    trying t.method2()
    Exception: exceptions.AttributeError - method2
    result of t.method2() = None

Listing Six

    t = Test3()
    testit('t.method1()', t, lambda x: x.method1())
    testit('t.method2()', t, lambda x: x.method2())

Output:

    trying t.method1()
    Exception: exceptions.AttributeError - method1
    result of t.method1() = None
    trying t.method2()
    result of t.method2() = Test3

Listing Seven

    t = Test4()
    testit('t.method1()', t, lambda x: x.method1())
    testit('t.method2()', t, lambda x: x.method2())

Output:

    trying t.method1()
    result of t.method1() = Test1
    trying t.method2()
    result of t.method2() = Test4

Listing Eight

    t = Test5()
    testit('t.method1()', t, lambda x: x.method1())
    testit('t.method2()', t, lambda x: x.method2())
    testit('t.method2()', t, lambda x: x.method2(), "default")

Output:

    trying t.method1()
    result of t.method1() = Test5
    trying t.method2()
    Exception: exceptions.AttributeError - method2
    result of t.method2() = None
    trying t.method2()
    Exception: exceptions.AttributeError - method2
    result of t.method2() = default

Listing Nine

    void call(Object instance, String name, {Object[] args});
    void callStatic(Class owner, String name, {Object[] args});
    Object invoke(Object instance, String name, {Object[] args, {Object def}});
    Object invokeStatic(Class owner, String name, {Object[] args, {Object def}});

Listing Ten

    boolean canInvoke(Object instance, String name, {Object[] args});
    boolean canInvokeStatic(Class owner, String name, {Object[] args});
    Method findMethod(Class owner, String name, {Object[] args});

Listing Eleven

    class Test1 {
        public String method1() { return "Test1"; }
    }
    class Test2 {
        public String method1() { return "Test2"; }
    }
    class Test3 {
        public String method2() { return "Test3"; }
    }
    class Test4 extends Test1 {
        public String method2() { return "Test4"; }
    }
    class Test5 extends Test1 {
        public String method1() { return "Test5"; }
    }

Listing Twelve

    Object testit(ReflectionUtilities ru, String msg, Object receiver, String name) {
        return testit(ru, msg, receiver, name, null);
    }
    Object testit(ReflectionUtilities ru, String msg, Object receiver, String name,
                  Object[] args) {
        return testit(ru, msg, receiver, name, args, null);
    }
    Object testit(ReflectionUtilities ru, String msg, Object receiver, String name,
                  Object[] args, Object def) {
        Object o = null;
        try {
            System.out.println(msg);
            o = ru.invoke(receiver, name, args, def);
        } catch (InvokeException e) {
            System.out.println(e.getMessage());
        }
        System.out.println(msg + ": " + o);
        return o;
    }

Listing Thirteen

    ReflectionUtilities ru = ReflectionUtilities.getDefault();
    String s = new String("This is a test string");
    System.out.println("For value: " + s);

Output:

    For value: This is a test string

    testit(ru, "length()", s, "length");
    testit(ru, "length(\"bad\")=-1", s, "length", new Object[] {"bad"}, new Integer(-1));

Output:

    length()
    length(): 21
    length("bad")=-1
    length("bad")=-1: -1

    testit(ru, "indexOf(\"test\")", s, "indexOf", new Object[] {"test"});
    testit(ru, "indexOf(\"test\", 10)", s, "indexOf", new Object[] {"test", new Integer(10)});
    testit(ru, "indexOf(\"test\", 1f)", s, "indexOf", new Object[] {"test", new Float(1f)});
    testit(ru, "indexOf()", s, "indexOf", new Object[] {});

Output:

    indexOf("test")
    indexOf("test"): 10
    indexOf("test", 10)
    indexOf("test", 10): 10
    indexOf("test", 1f)
    method not found - java.lang.String.indexOf(java.lang.String,java.lang.Float)
    indexOf("test", 1f): null
    indexOf()
    method not found - java.lang.String.indexOf()
    indexOf(): null

    testit(ru, "toUpperCase()", s, "toUpperCase");

Output:

    toUpperCase()
    toUpperCase(): THIS IS A TEST STRING

    testit(ru, "startsWith(\"This\")", s, "startsWith", new Object[] {"This"});
    testit(ru, "startsWith(\"That\")", s, "startsWith", new Object[] {"That"});

Output:

    startsWith("This")
    startsWith("This"): true
    startsWith("That")
    startsWith("That"): false

Listing Fourteen

    Object t;

    t = new Test1();
    testit(ru, "t.method1()", t, "method1");
    testit(ru, "t.method2()", t, "method2");

Output:

    t.method1()
    t.method1(): Test1
    t.method2()
    method not found - com.ibm.reflect.ReflectionUtilitiesTest$Test1.method2()
    t.method2(): null

    t = new Test2();
    testit(ru, "t.method1()", t, "method1");
    testit(ru, "t.method2()", t, "method2");

Output:

    t.method1()
    t.method1(): Test2
    t.method2()
    method not found - com.ibm.reflect.ReflectionUtiltiesTest$Test2.method2()
    t.method2(): null

    t = new Test3();
    testit(ru, "t.method1()", t, "method1");
    testit(ru, "t.method2()", t, "method2");

Output:

    t.method1()
    method not found - com.ibm.reflect.ReflectionUtiltiesTest$Test3.method1()
    t.method1(): null
    t.method2()
    t.method2(): Test3

    t = new Test4();
    testit(ru, "t.method1()", t, "method1");
    testit(ru, "t.method2()", t, "method2");

Output:

    t.method1()
    t.method1(): Test1
    t.method2()
    t.method2(): Test4

    t = new Test5();
    testit(ru, "t.method1()", t, "method1");
    testit(ru, "t.method2()", t, "method2");

Output:

    t.method1()
    t.method1(): Test5
    t.method2()
    method not found - com.ibm.reflect.ReflectionUtiltiesTest$Test5.method2()
    t.method2(): null
http://www.drdobbs.com/jvm/java-reflection-smalltalk-like-method-d/184405725
Hi all. We are using WebSphere Application Server, migrating from WAS 6.1 (EJB 2.1) to WAS 7 (EJB 3.0). We have a problem with dependency injection in a plain Java class. Our chain of invocations is: Servlet --> EJB --> plain Java class --> EJB (JPA class). We have put the @EJB annotation inside the plain Java class in order to access the JPA object, but our object is always null:

    public class MyClass1 {
        @EJB
        private MyJPAClass1 jpa1;

        public void method1() {
            // here the object "jpa1" is null
        }
    }

We've read that the dependency injection engine doesn't really inject anything in plain Java classes (not counting main methods). Is that true? It doesn't seem a logical technical limitation (in my humble opinion). If it's true, is there a way to put that reference in the plain Java class (without introducing an EJB reference inside the EJB that precedes our Java class)? Thanks in advance.
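For what it's worth, @EJB field injection is generally only performed on container-managed classes (EJBs, servlets and the like), so a common container-independent workaround is to hand the reference to the plain class yourself. Here is a runnable sketch of that idea — only the names MyClass1, MyJPAClass1 and method1 come from the question above; the interface body, the stub and everything else are invented for illustration:

```java
// Sketch of constructor-passing as a workaround for the missing injection.
// Only MyClass1, MyJPAClass1 and method1 come from the question; the rest
// is illustrative.
public class InjectionWorkaround {

    // Stand-in for the injected JPA/EJB facade from the question.
    interface MyJPAClass1 {
        String find(int id);
    }

    static class MyClass1 {
        private final MyJPAClass1 jpa1;

        // The caller (e.g. the EJB that creates this object, where @EJB
        // injection does work) passes the reference in explicitly.
        MyClass1(MyJPAClass1 jpa1) {
            this.jpa1 = jpa1;
        }

        String method1() {
            return jpa1.find(1); // jpa1 is no longer null here
        }
    }

    public static void main(String[] args) {
        // A stub implementation standing in for the real injected bean.
        MyJPAClass1 stub = id -> "entity-" + id;
        System.out.println(new MyClass1(stub).method1()); // prints entity-1
    }
}
```

The alternative inside a container would be a JNDI lookup from the plain class, but passing the dependency in keeps the class testable and container-agnostic.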
http://www.theserverside.com/discussions/thread.tss?thread_id=61846
I’m currently helping a government client move a SharePoint 2010 farm over to SharePoint 2013, but this post is just as valid if you move solutions over from 2013 to 2016. The backdrop for the migration is that they have numerous WSPs with server-side code, and when migrating to SharePoint 2013 they decided to keep the server-side code instead of moving over to a client-side model. A decision made before I was involved, but it’s a valid one for many scenarios.

Before delving into some upgrade paths you can take and the one chosen, let’s sum it up!

Lesson learned

Be smart, plan ahead and think well before you involve your favorite consultant on the migration path – it might be one of the last tasks he or she will help out with, regardless of how nice and forthcoming they are. When migrating from one SharePoint version to another while keeping server-side code, do the simplest thing possible with the least amount of moving parts. Do not mix in code refactorings and changes as part of the process. If you do feel the urge to do so, then create new solutions which can co-exist side by side with the 2010 packages, and save yourself a lot of grief and swear words if you’re inclined to those.

Planning a SharePoint upgrade and want help? Feel free to contact me or Puzzlepart and we’ll be happy to forward you to someone who likes these kinds of tasks. If you however want to move from WSPs to a client-side model as part of the migration we’d be happy to assist :)

The easiest possible path

…was not chosen. The easiest path is to take the 2010 solutions, open them in Visual Studio, upgrade to 2013, and rebuild. This keeps the solution ids and dll versions for all web parts. Which means you can migrate like this:

- Mount the 2010 database in 2013
- Install the 2010 WSPs in compatibility mode for the 14 and 15 hives (Install-SPSolution -CompatibilityLevel {14,15})
- Upgrade site collections to 2013 mode
- Upgrade all solutions to the 2013-compiled ones

The switch path

…was not chosen.
This is where you rebuild your 2010 solutions into new 2013 solutions with new solution ids and new web parts. With this approach the migration can be accomplished like this:

- Mount the 2010 database in 2013
- Install the 2010 WSPs in compatibility mode for the 14 and 15 hives
- Install the 2013 WSPs
- Upgrade site collections to 2013 mode
- Activate 2013 features if needed
- Loop over all sites and web pages, replacing the existing web parts with the new ones
- Deactivate the 2010 features
- Uninstall the 2010 WSPs

The custom, not well planned path

…was the chosen one. This is where some of the WSPs kept their solution ids, but they decided to change namespace and assembly names as part of the 2013 code upgrade process. In order to switch out the 2010 web parts with the 2013 equivalents and keep the web part properties, you need to have both the 2010 dlls and the 2013 dlls present. If not, you cannot export a web part from the page and keep the web part definition. So we ended up with the following steps:

- Mount the 2010 database in 2013
- Install the 2010 WSPs in compatibility mode for the 14 and 15 hives
- Deactivate the 2010 features (neither had deactivation code present)
- Upgrade site collections to 2013 mode
- Remove the 2010 WSPs with the same solution ids as the 2013 ones
- Install the 2013 WSPs
- Manually GAC the 2010 dlls
- Manually add safe-control entries to web.config for the 2010 web parts
- Loop over all sites and web pages, replacing the existing web parts with the new ones
- Un-GAC the 2010 dlls and replace the safe-control entries

Comments

How to upgrade WSPs to SharePoint 2016? Should I open them with Visual Studio 2015 and set the framework to 4.5.2, or is there no need for this?

You could open them and recompile, or just leave them as is. 2013 WSPs should install just fine in 2016.
http://www.techmikael.com/2016/04/considerations-when-upgrading-wsps-from.html
Jun 10, 2017 10:37 AM | mgebhard | LINK

poongsangala: Hi, I have a page named Main.aspx and a class file named populateCountry.cs. Inside the page I have an HTML <select id="ddlCounty" runat="server"></select>. How can I use the ddlCountry inside populateCountry?

Pass the ddlCountry to the class as an argument to either the class constructor, a property or a method.

Jun 10, 2017 10:59 AM | mgebhard | LINK

poongsangala: How do I pass the ddlCountry to the class? Please give me a sample.

Pass the HtmlSelect control to a class using a constructor.

    public class populateCountry
    {
        private System.Web.UI.HtmlControls.HtmlSelect _ddlCounty;

        public populateCountry(System.Web.UI.HtmlControls.HtmlSelect ddlCounty)
        {
            _ddlCounty = ddlCounty;
        }
    }

Implementation:

    protected void Page_Load(object sender, EventArgs e)
    {
        populateCountry pc = new populateCountry(ddlCounty);
    }

However, I'm not sure this is really what you want. Rather than asking how to perform a specific task, explain what you are trying to do. For example, are you trying to populate a select from a database table?

Jun 10, 2017 11:31 AM | mgebhard | LINK

poongsangala: The value of the HTML select will be used in the query inside my class.

Then simply pass the value of the HTML select to the class, not the entire object.

    public class populateCountry
    {
        public string DoSomethingWithTheValue(string value)
        {
            return value;
        }
    }

Implementation:

    protected void Page_Load(object sender, EventArgs e)
    {
        populateCountry pc = new populateCountry();
        pc.DoSomethingWithTheValue(ddlCounty.Value);
    }

Jun 10, 2017 12:03 PM | mgebhard | LINK

poongsangala: You can access the ddlCountry because you are in the code-behind of the page; what if you have another class? How do you access the HTML select by id in the other class?

It is simply not possible due to how a web site works. What exactly are you trying to do? The members of an ASP page, which include buttons, links, dropdowns, textboxes, etc., are created during the request and torn down once the server responds.
The information sent by the user from clicking a submit button or clicking a link is also only available during the time between the request and response.

References.

Jun 10, 2017 12:31 PM | mgebhard | LINK

poongsangala: So, there is no way to access the HTML select in another class?

Of course it is possible, and I showed you two ways to access an HTML select in another class! Explain what you are trying to do rather than how you think it should be done.
https://forums.asp.net/t/2123160.aspx?How+to+call+the+HTML+Select+inside+the+Class
After spending some time writing Rust I have found myself yearning for a good type system whilst writing front-end code. After a brief spell messing around with TypeScript, I stumbled across Reason or ReasonML (not really sure which one is right!). Reason is a new syntax based on OCaml, which has a very good type system. For me, the syntax sits nicely between JavaScript and Rust, and the project was started by Jordan Walke, which means the language also has first-class support for React! On top of this, the compiler is crazy fast and starting a new project takes mere seconds.

I have been meaning to make myself a simple profile site for a while and I thought it would be the perfect excuse to give Reason and ReasonReact a try. I have spent the last week making the site, during which I have run into a couple of issues which I found difficult to resolve; the language is quite new so finding the information you need can be a little tricky at times, therefore I thought it would be a good idea to make a note of them (and my solutions to them) here.

Pipes work by taking the output of a function and piping it into the input of another function. They are quite a nice piece of syntax which can change code that looks like this:

    // JavaScript:
    lastDoThis(thenDoThis(firstDoThis(value)))

Into this:

    value |> firstDoThis |> thenDoThis |> lastDoThis;

If you spend some time looking at examples of code written in Reason, you are likely to see three different pipe operators, and at first I wasn't sure why:

1. |.
2. ->
3. |>

First we have Reason's old syntax for its Pipe First operator, |.; this was replaced by the thin arrow at number 2, ->, but you may still come across it when looking for tutorials online. Number 3, |>, is OCaml's own pipe operator. This is also valid in Reason and you will likely see it being used in places, because Pipe |> and Pipe First -> have different behaviours.
If you compare a standard library function like List.map with a function from BuckleScript such as Belt.List.map, you will see that List.map takes the list as its second argument whereas Belt.List.map takes the list as its first. The former is the order in which OCaml functions expect arguments, therefore OCaml's pipe operator adds the value being piped as the last argument. Pipe First (as its name suggests) puts it in as the first argument; I'm guessing that when BuckleScript was being created, they decided to go with the latter order because that is what would feel most familiar to developers coming from JavaScript.

    let items = [1, 2, 3];
    let double = number => number * 2;

    let doubled = List.map(double, items);
    /* or */
    let doubled = items |> List.map(double);

    let doubled = Belt.List.map(items, double);
    /* or */
    let doubled = items->Belt.List.map(double);

You can use either operator if the function you are piping to only takes a single argument.

Writing lists (<ul> and <ol>) is a pretty common occurrence when you are creating a user interface. I suspect that if you are using React regularly, you would be able to write the following with your eyes closed:

    // JavaScript:
    function UnorderedList({ items }) {
      return (
        <ul>
          {items.map(item => (
            <li key={item}>{item}</li>
          ))}
        </ul>
      );
    }

It might then come as something of a surprise to struggle to get this to work in Reason:

    module UnorderedList = {
      [@react.component]
      let make = (~items) => {
        <ul>
          {items->Belt.List.map(item => {
            <li key=item> {item->React.str} </li>
          })}
        </ul>;
      };
    };

    This has type:
      ('a => 'b) => Belt.List.t('b)
    But somewhere wanted:
      ReasonReact.reactElement (defined as React.element)

This is because List.map returns a list and not a React element, which is what the compiler expects to be returned from the make function.
In order to get this to work we need to jump through some hoops: first we map through our list, then we convert the list to an array, and finally we convert that array to a React element (or group of them).

This is something which you may find yourself doing several times whilst developing an application, so I found it best to create a function which could be reused any time I needed to convert a list of data into some elements:

    let mapElements = (list, callback): React.element =>
      list->Belt.List.map(callback)->Array.of_list->React.array;

I found that it was best to put this function and some other common patterns in a file called Utils.re. That way I could just write open Utils at the top of any file to get access to all of my helpful utilities:

    open Utils;

    module UnorderedList = {
      [@react.component]
      let make = (~items) => {
        <ul>
          {items->mapElements(item => {
            <li key=item> {item->React.str} </li>
          })}
        </ul>;
      };
    };

If you want to get the value from an input (or any property from the target of an event) then you can't just do it as you normally would:

    // JavaScript:
    function Input() {
      const onChange = event => console.log(event.target.value);
      return <input onChange={onChange} />;
    }

Instead you need to pass the synthetic event to a function which will return an object of type Js.t. This is a type provided by BuckleScript which wraps a Reason object (different to a record) and is accessed in a slightly strange way:

    let make = _ => {
      let onChange = event => Js.log(React.Form.target(event)##value);
      <input onChange />;
    };

React.Form.target is the function which takes the event and returns the Js.t object. The double hash (##) is the accessor which can be used to get any property which you would normally get from event.target when using JavaScript.
Normally, properties on objects in Reason are accessed using a single hash; I guess the double hash is needed because first you access the object within the Js.t and then you access the property you want on the Reason object. It looks a bit strange at first, but I soon got used to it and find I don't have to think about what I need to type any more! Here's some further reading which might help:

Sticking with inputs, if you try to set the type of an HTML input like so:

    <input type="number" />

then you will get an error message telling you that type is a reserved word. You can get around this problem easily by just adding an underscore to the end of the word:

    <input type_="number" />

One of the things I enjoy about Reason is that the compiler is really fast: no more waiting for a bundler when developing! I did want to bundle my files for deployment though, so I created a webpack config and set the entry to the Index.bs.js file. Because modules are commonJs by default, the size of my bundle was really large, so I changed the bsconfig.json to use es6 modules instead:

    {
      "package-specs": [
        {
          "module": "es6",
          "in-source": false
        }
      ]
    }

Unfortunately this meant that my app would no longer work in development mode without also using a bundler, which I really didn't want to do. I wasn't able to find any documentation on this (it probably exists somewhere, I'm just not very good at finding stuff!) but I figured I would see what would happen if I added both module types to the array:

    {
      "package-specs": [
        {
          "module": "commonJs",
          "in-source": false
        },
        {
          "module": "es6",
          "in-source": false
        }
      ]
    }

Because I have "in-source" set to false, the generated JavaScript files are put in the lib folder instead of alongside the Reason files. I prefer it this way if the app is entirely written in Reason; I can see why it would be better to keep the JS and Re files together if you're converting a codebase though. With both module types in the array, the lib folder gets a js directory and an es6 directory.
This means that the index.html file used for development can use the commonJs files from the js directory, and the webpack config for production can use the es6 files from the es6 directory. I also wanted to bundle the CSS files; I found the easiest way to do this was to add an extra index.js file to the webpack entry array and import the CSS files that I need from there.

Hopefully this has been helpful. I've really enjoyed my first foray into Reason and I look forward to making more with it in the future. Watch this space for more on the subject. Bye!

If you've found this helpful then let me know with a clap or two!
https://blog.matt-thorning.dev/reason
Introduction to Laravel Join

The Laravel framework is one of the most sought-after frameworks in the tech world currently, the reason being its ease of use and scalability. Its robust yet simple data-fetching methods have been winning hearts for some time now. For this reason, the Laravel framework has become a favourite for the creation of e-commerce sites, as clients have a wide variety of functionalities to choose from. Another reason Laravel is a sought-after platform is its ability to facilitate third-party integrations; it works seamlessly to create entire systems. In this article we will take a quick look at Laravel Join and see how helpful it is for creating reports from multiple data points.

What is Laravel Join?

With the help of join queries in Laravel, one can join two or more tables. This is helpful for integrating large, disparate sets of tables into one singular point of reference. It is broken up into different segments:

- Laravel Inner Join: This clause selects records only if the selected values of the given column match in both tables. It is imperative that we are extremely careful with what we choose.
- Laravel Sub Query Joins: A continuation of the Advanced Join Clause, where a subquery is inserted to extend the functionality of the main query.

All these variants make the join clause one of the most used queries.

Examples to Implement Laravel Join

Let us look at examples that make this amply clear.

Example #1: the advanced join clause

Code:

```php
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\DB;
use Illuminate\Http\Request;

class getqueryController extends Controller
{
    public function index()
    {
        $students = DB::table('students')
            ->join('contacts', function ($join) {
                $join->on('students.id', '=', 'contacts.student_id')
                     ->where('contacts.student_id', '>', 3);
            })
            ->get();
        echo "<pre>";
        print_r($students);
        echo "</pre>";
    }
}
```

Output: (screenshot omitted)

Example #2: the left join clause

Code:

```php
<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\DB;
use Illuminate\Http\Request;

class getqueryController extends Controller
{
    public function index()
    {
        $students = DB::table('students')
            ->leftJoin('contacts', 'students.id', '=', 'contacts.student_id')
            ->select('students.id', 'students.name', 'contacts.phone', 'contacts.email')
            ->get();
        echo "<pre>";
        print_r($students);
        echo "</pre>";
    }
}
```

Output: (screenshot omitted)

These examples were about integrating a disparate set of tables. An inner join returns only the rows that are present in both tables, while the left join above also keeps students that have no matching contact.
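To make the difference between these join types concrete independently of Laravel, here is a minimal Python sketch of what the database engine conceptually does for an inner versus a left join. The table data and helper names here are hypothetical, for illustration only:

```python
# Tiny in-memory tables; Ben (id 2) has no contact row.
students = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Ben"}, {"id": 3, "name": "Cid"}]
contacts = [{"student_id": 1, "phone": "111"}, {"student_id": 3, "phone": "333"}]

def inner_join(left, right, lk, rk):
    # Keep only row combinations whose key appears in both tables.
    return [{**l, **r} for l in left for r in right if l[lk] == r[rk]]

def left_join(left, right, lk, rk):
    # Keep every left row; fill missing right-table columns with None (NULL).
    rows = []
    for l in left:
        matches = [r for r in right if r[rk] == l[lk]]
        if matches:
            rows.extend({**l, **r} for r in matches)
        else:
            rows.append({**l, "student_id": None, "phone": None})
    return rows

print(len(inner_join(students, contacts, "id", "student_id")))  # 2
print(len(left_join(students, contacts, "id", "student_id")))   # 3
```

The inner join drops Ben entirely, while the left join keeps him with NULL contact columns, which mirrors the behaviour of `join` versus `leftJoin` described above.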
Example #3

Let us look at the syntax of the inner join clause:

```php
join('orders', 'users.id', '=', 'orders.user_id')
```

- 'orders': identifier of the second table
- 'users.id': key of the first table
- '=': operator
- 'orders.user_id': key of the second table

The SQL generated by the inner join clause is:

```sql
select * from `users` inner join `orders` on `users`.`id` = `orders`.`user_id`
```

Output: (screenshot omitted)

Let us look at the syntax of the left join clause:

```php
leftJoin('orders', 'users.id', '=', 'orders.user_id')
```

- 'orders': identifier of the second table
- 'users.id': key of the first table
- '=': operator
- 'orders.user_id': key of the second table

The SQL generated by the left join clause is:

```sql
select * from `users` left join `orders` on `users`.`id` = `orders`.`user_id`
```

Output: (screenshot omitted)

Left joins are applicable especially in those cases where you want all the records from the first table even when no records in the second table match. The join clause can be used across a wide spectrum of functionality, and the same logic applies in plain SQL, on which these Laravel methods are built: the join assimilates the disparate tables and brings them together to produce the relevant output. The corresponding SQL statements are:

Left join:

```sql
SELECT Orders.OrderID, Customers.CustomerName, Orders.OrderDate
FROM Orders
LEFT JOIN Customers ON Orders.CustomerID = Customers.CustomerID;
```

Right join:

```sql
SELECT Orders.OrderID, Customers.CustomerName, Orders.OrderDate
FROM Orders
RIGHT JOIN Customers ON Orders.CustomerID = Customers.CustomerID;
```

Conclusion

From the above examples, the efficacy of the Laravel join clause is quite evident. It is a helpful clause that enables the developer to fetch values from disparate table sources and produce the required result. This is also another example of the flexibility of the Laravel framework.

Recommended Article

This is a guide to Laravel Join.
Here we discussed the introduction to Laravel Join and its different examples along with code implementation.
https://www.educba.com/laravel-join/?source=leftnav
WIM: In this tutorial, I want to explain how you can use the Parsec parser combinator library for practical parsing tasks. You have learned before how to use the Show typeclass and the show function to print out Haskell data structures, so in this tutorial I want to show you how you can create a parser for the output of the show function and turn it into XML, for example.

Let's start by creating a few data structures. We create a data structure PersonRecord, just as an example, where we have the name of the person, the address, some integer identifier, and a few labels. The Address is a data structure in its own right, and so are the labels: they are an algebraic data type. Note that we have used deriving Show, so that you don't have to write your own show function; by using this deriving clause, GHC will automatically create the show function for these data structures. And it is the output of this automatically created show function that we are going to parse. So let's create a few instances of this data type, say record 1 and record 2. Now we can use show on these two records, and this is the output that show gives us. It's this output that we now want to transform into XML using the Parsec library.

So let's go and build a parser using the Parsec library. I'm creating a module ShowParser, which exports a single function parseShow, which is going to parse the output from show and turn it into XML. To use Parsec in practice, there are a number of modules that you usually need.
First, we have the actual parser combinators; then we have a library of token parsers; and then we have a handy support library called Parsec.Language, which defines the basics of a programming language: what is an identifier, what is a comment, and things like that. The Parsec.Token library exports a function makeTokenParser, and this function takes a language definition. Because we don't really have a programming language here, we use emptyDef; if you want to know more about language definitions, I suggest you read the Parsec manual.

Given this lexer, basically the definition of what identifiers and so on are in our system, we can now build a number of parsers very conveniently. The library defines a number of parsers for a given lexer, and now you can parse parentheses, brackets, braces, comma-separated lists, white space, symbols (predefined strings), identifiers (whatever is an identifier in your language), integers, and so on. There are a lot more of them, and again, you can look at the Parsec library.

With this boilerplate, we can start making the actual parser, and this will be a parser that takes a string, being the output from the show function, and produces a new string, which is the XML representation of this input string. So we have a function with signature parseShow :: String -> String. The way this works is that we have a function that is our actual parser, and this parser is a monadic parser, which means, basically, that it consists of functions that you put together. You actually produce one big combined function, and then you apply that function to the string. To apply this function to the string, we use another function, run_parser, that looks as follows. run_parser is independent of the actual parser you give it.
So you have a parser of a certain data type and a string, and it produces the output data type that you desire. In our case, the data type that we desire is a string. What it does is apply our parser to an input string; if there's an error, it reports the error, and otherwise it returns the result. This function, run_parser, is generic, so you can always reuse it. The parser that we want to build we will call showParser, and that's a Parser String, so it returns a string.

With this showParser and run_parser, we can now build our actual parseShow function. To emit valid XML, we can prefix the result with an actual XML header string, so we define the XML header string and adjust the function accordingly. In the opening tag, we must create not just the name of the tag but also the list of attributes. The list of attributes is a list of tuples holding the name and the value of each attribute, so we map a function over it that turns each tuple into a string of the form key="value". This approach of using concat for string concatenation is very convenient in terms of readability, rather than chaining ++: concat simply concatenates all of the elements of a list. With these tag manipulation functions, we are almost ready to create the actual parsers.

There's another function that is very handy to use: a function that joins the elements of a list using newlines. To build that function, I use intercalate.
intercalate is a function that takes all the elements of a list and joins them together, interspersed with the separator that is its first argument. But intercalate is not part of the standard Prelude, so we have to add Data.List to our import list. With all this machinery ready, we can now create the rest of our parsers.

For instance, if we want to parse a list, we parse brackets, and then a comma-separated sequence of whatever can be inside the list, for which we call showParser recursively. The result is the list of the elements that we parsed. We then tag this list with the type list, join the elements with newlines, and surround each element with a list-element tag. The other parsers are quite similar, so I'll just put them all here. We have a tuple_parser, which parses parentheses and then comma-separated elements (that's a tuple) and wraps them in tuple tags. We have the record_parser, which first parses the name of the record, then braces containing a comma-separated list of key-value pairs. For the key-value pairs I have an additional parser, kvparser, which parses an identifier, an equals sign, and then a value, and returns this wrapped in an element tag carrying the key. And finally, we parse the type identifiers. Type identifiers are defined as strings that start with an uppercase letter followed by alphanumeric characters; this is the parser that we use to parse the labels. With all these things together, we can actually parse show output and turn it into XML.

Of course, we need to change our main program a little bit to make that happen. First of all, we must import the parser that we just wrote. And then, obviously, we must use it, but that's very simple. So we change the main program just a little bit.
We call show on the records that we created and put the result in the record string. We call parseShow on the record string, and that's our main program. Now we can try this out. And, indeed, we have created an XML document that represents the output of show called on our custom data structure. So this is an example of how you can use Parsec to easily transform strings into different strings, for instance, XML documents.

Parsing using Parsec: a practical example

The aim of this tutorial is to explain step by step how to build a simple parser using the Parsec library. The source code can be found on GitHub.

Parsing the output of derived Show

The polymorphic function show returns a string representation of any data type that is an instance of the type class Show. The easiest way to make a data type an instance of this type class is using the deriving clause.

```haskell
show :: Show a => a -> String

data D = D ... deriving (Show)

d :: D
d = D ...

str :: String
str = show d -- string representation of the instance of the data type
```

In this tutorial we will show how to create a parser that will parse the output of a derived show and return it in XML format.

```haskell
parseShow :: String -> String

xml = parseShow $ show res
```

Example data type

First we create a data type PersonRecord:

```haskell
data PersonRecord = MkPersonRecord {
    name :: String,
    address :: Address,
    id :: Integer,
    labels :: [Label]
} deriving (Show)
```

The types Address and Label are defined as follows:

```haskell
data Address = MkAddress {
    line1 :: String,
    number :: Integer,
    street :: String,
    town :: String,
    postcode :: String
} deriving (Show)

data Label = Green | Red | Blue | Yellow deriving (Show)
```

We derive Show using the deriving clause. The compiler will automatically create the show function for this data type. Our parser will parse the output of this automatically derived show function.
Then we create some instances of PersonRecord:

```haskell
rec1 = MkPersonRecord "Wim Vanderbauwhede"
    (MkAddress "School of Computing Science" 17 "Lilybank Gdns" "Glasgow" "G12 8QQ")
    557188 [Green, Red]

rec2 = MkPersonRecord "Jeremy Singer"
    (MkAddress "School of Computing Science" 17 "Lilybank Gdns" "Glasgow" "G12 8QQ")
    42 [Blue, Yellow]
```

We can test this very easily:

```haskell
main = putStrLn $ show [rec1,rec2]
```

This program produces the following output:

```
[wim@workai HaskellParsecTutorial]$ runhaskell test_ShowParser_1.hs
[MkPersonRecord {name = "Wim Vanderbauwhede", address = MkAddress {line1 = "School of Computing Science", number = 17, street = "Lilybank Gdns", town = "Glasgow", postcode = "G12 8QQ"}, id = 557188, labels = [Green,Red]},MkPersonRecord {name = "Jeremy Singer", address = MkAddress {line1 = "School of Computing Science", number = 17, street = "Lilybank Gdns", town = "Glasgow", postcode = "G12 8QQ"}, id = 42, labels = [Blue,Yellow]}]
```

The derived Show format can be summarized as follows:

- Lists: [ ... comma-separated items ... ]
- Records: { ... comma-separated key-value pairs ... }
- Strings: "..."
- Algebraic data types: the name of the variant

Building the parser

We create a module ShowParser which exports a single function parseShow:

```haskell
module ShowParser ( parseShow ) where
```

Some boilerplate:

```haskell
import Text.ParserCombinators.Parsec
import qualified Text.ParserCombinators.Parsec.Token as P
import Text.ParserCombinators.Parsec.Language
```

The Parsec.Token module provides a number of basic parsers. Each of these takes as argument a lexer, generated by makeTokenParser using a language definition. Here we use emptyDef from the Language module. It is convenient to create a shorter name for the predefined parsers you want to use, e.g.

```haskell
parens = P.parens lexer -- and similar
```

The parser

The function parseShow takes the output from show (a String) and produces the corresponding XML (also a String).
It is composed of the actual parser showParser and the function run_parser, which applies the parser to a string.

```haskell
parseShow :: String -> String
parseShow = run_parser showParser

showParser :: Parser String

run_parser :: Parser a -> String -> a
run_parser p str = case parse p "" str of
    Left err -> error $ "parse error at " ++ (show err)
    Right val -> val
```

The XML format

We define an XML format for a generic Haskell data structure. We use some helper functions to create XML tags with and without attributes.

Header (<?xml version="1.0" encoding="utf-8"?>):

```haskell
xml_header = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
```

Tags (<tag> ... </tag>):

```haskell
otag t = "<" ++ t ++ ">"
ctag t = "</" ++ t ++ ">"
tag t v = concat [otag t, v, ctag t]
```

Attributes (<tag attr1="..." attr2="...">):

```haskell
tagAttrs :: String -> [(String,String)] -> String -> String
tagAttrs t attrs v = concat [
    otag (unwords $ [t] ++ (map (\(k,v) -> concat [k, "=\"", v, "\""]) attrs))
    , v
    , ctag t
    ]
```

We also use some functions to join strings together. From the Prelude we take:

```haskell
concat :: [[a]] -> [a] -- join lists
unwords :: [String] -> String -- join words using spaces
```

We also define a function to join strings with newline characters:

```haskell
joinNL :: [String] -> String -- join lines using "\n"
```

This is essentially unlines from the Prelude, reimplemented here just to illustrate the use of intercalate and the Data.List module.

Parsers for the derived Show format

Lists ([ ..., ..., ... ]) become XML of the form <list> <list-elt>...</list-elt> ... </list>:

```haskell
list_parser = do
    ls <- brackets $ commaSep showParser
    return $ tag "list" $ joinNL $ map (tag "list-elt") ls
```

Tuples (( ..., ..., ... )) become XML of the form <tuple> <tuple-elt>...</tuple-elt> ... </tuple>:

```haskell
tuple_parser = do
    ls <- parens $ commaSep showParser
    return $ tag "tuple" $ unwords $ map (tag "tuple-elt") ls
```

Record types (Rec { k=v, ... }, where each key-value pair has the form k = v and v can be anything) become XML of the form <record> <elt key="k">v</elt> ... </record>:

```haskell
record_parser = do
    ti <- type_identifier
    ls <- braces $ commaSep kvparser
    return $ tagAttrs "record" [("name",ti)] (joinNL ls)

kvparser = do
    k <- identifier
    symbol "="
    t <- showParser
    return $ tagAttrs "elt" [("key",k)] t

type_identifier = do
    fst <- oneOf ['A' .. 'Z']
    rest <- many alphaNum
    whiteSpace
    return $ fst:rest
```

Algebraic data types (e.g. Label) become XML of the form <adt>Label</adt>:

```haskell
adt_parser = do
    ti <- type_identifier
    return $ tag "adt" ti
```

Quoted strings and numbers:

```haskell
quoted_string = do
    s <- stringLiteral
    return $ "\"" ++ s ++ "\""

number = do
    n <- integer
    return $ show n
```

Complete parser

Combine all parsers using the choice combinator <|>:

```haskell
showParser :: Parser String
showParser =
    list_parser <|>       -- [ ... ]
    tuple_parser <|>      -- ( ... )
    try record_parser <|> -- MkRec { ... }
    adt_parser <|>        -- MkADT ...
    number <|>            -- signed integer
    quoted_string <?> "Parse error"
```

Parsec will try all choices in order of occurrence. Remember that try is used to avoid consuming input when a parser fails.

Main program

Import the parser module:

```haskell
import ShowParser (parseShow)
```

Use the parser:

```haskell
rec_str = show [rec1,rec2]
main = putStrLn $ parseShow rec_str
```

Test it (output as captured; some elements are truncated):

```
[wim@workai HaskellParsecTutorial]$ runhaskell test_ShowParser.hs
<?xml version="1.0" encoding="UTF-8"?>
<list><list-elt><record name="MkPersonRecord"><elt key="name">"Wim Vanderbauwhede"<">557188</elt>
<elt key="labels"><list><list-elt><adt>Green</adt></list-elt>
<list-elt><adt>Red</adt></list-elt></list></elt></record></list-elt>
<list-elt><record name="MkPersonRecord"><elt key="name">"Jeremy Singer"<">42</elt>
<elt key="labels"><list><list-elt><adt>Blue</adt></list-elt>
<list-elt><adt>Yellow</adt></list-elt></list></elt></record></list-elt></list>
```

Summary

- Parsec makes it easy to build powerful text parsers from building blocks using predefined parsers and parser combinators.
- The basic structure of a Parsec parser is quite generic and reusable.
- The example shows how to parse structured text (output from Show) and generate an XML document containing the same information.

© Wim Vanderbauwhede
https://www.futurelearn.com/courses/functional-programming-haskell/3/steps/333262
Five years ago I strongly criticized the OpenDocument standard for being critically incomplete for spreadsheets, since it left out the syntax and semantics of formulas. As a consequence it was unusable as a basis for creating interoperable spreadsheets. Off the record, several ODF participants agreed. The explanation for the sorry state of the matter was that there was heavy pressure to get the ODF standard out of the door early: the people working on the text document part of the standard were not willing to wait for the spreadsheet part to be completed.

That was then and this is now. Five years have passed and there have been no relevant updates to the standard. However, one thing that has happened is that Microsoft started exporting ODF documents that highlight the problems I pointed out. ODF supporters cried foul when it turned out that those spreadsheets did not work with OpenOffice. In my humble opinion, those same loud ODF supporters should look for the primary culprit at the nearest mirror. You were warned; the problem was obvious for anyone dealing with programming language semantics; you did nothing.

So given the state of the standard, where does that leave ODF support in spreadsheets? Microsoft took the least-work approach and just exported formulas with their own (existing) syntax and semantics. Of course they knew that it would not play well with anyone else, but that was clearly not a priority in Redmond. Anyone else at this point realizes that ODF for spreadsheets is not defined by the standard, but by whatever part of it OpenOffice happens to implement, just like XLS is whatever Excel says it is. One implication is that ODF changes whenever OpenOffice does. For example, OpenOffice has changed formula syntax at least once, a change that broke Gnumeric's import. If you follow that link, you can see that OpenOffice did precisely the same thing that Microsoft did: introduce a new formula namespace. Compare the reactions.
For the record, in Gnumeric the work involved in supporting those two new namespaces was about the same. For Gnumeric the situation remains that we will support ODF as any other spreadsheet file format. Until and unless the deficiencies are fixed, ODF is not suitable as the native format for Gnumeric or any other spreadsheet. (There are other known problems with ODF, but those are somewhat technical and not appropriate here.)

Note: I want to make clear that the above relates to spreadsheets only. I know of no serious problems with ODF and text documents, nor do I have reason to believe that there are any.
https://blogs.gnome.org/mortenw/2010/02/10/odf-plus-five-years/
Name: gm110360
Date: 04/01/2002

FULL PRODUCT VERSION:
java version "1.4.0"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0-b92)
Java HotSpot(TM) Client VM (build 1.4.0-b92, mixed mode)

FULL OPERATING SYSTEM VERSION:
Linux 2.4.18, glibc-2.2.4-13, Red Hat Linux release 7.2 (Enigma)

EXTRA RELEVANT SYSTEM CONFIGURATION:
KDE 2.2, XFree86 4.1.0, ATI Rage LT Pro video card (8M VRAM, using 'Mach64' driver)

A DESCRIPTION OF THE PROBLEM:
Full screen exclusive mode and display change don't work in Linux. The attached program, using the two GraphicsDevice methods isDisplayChangeSupported() and isFullScreenSupported(), reports that neither is supported. Display change isn't supported by the JDK, in spite of the fact that I have four video modes set up in X, which I can switch between at any time by pressing Ctrl-Alt-KeypadMinus/KeypadPlus. GraphicsDevice reports one GraphicsConfiguration, for the current mode only. But it does get that right: it correctly reports what mode I am in.

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM:
Execute the attached code. See that fullscreen exclusive mode and display change are not supported.

EXPECTED VERSUS ACTUAL BEHAVIOR:
When I run the example program attached, I get this output:

Device 0: ID string=:0.0
  Available accelerated memory: -1
  Fullscreen supported: false
  Display change supported: false

The expected output would indicate that fullscreen and display change are supported (=true). This bug can be reproduced always.
---------- BEGIN SOURCE ----------

```java
import java.awt.*;

public class FSETest {
    public static void main(String[] args) {
        GraphicsDevice[] devices =
            GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
        for (int i = 0; i < devices.length; i++) {
            GraphicsDevice device = devices[i];
            System.out.println("Device " + i + ": ID string=" + device.getIDstring());
            System.out.println("  Available accelerated memory: " + device.getAvailableAcceleratedMemory());
            System.out.println("  Fullscreen supported: " + device.isFullScreenSupported());
            System.out.println("  Display change supported: " + device.isDisplayChangeSupported());
            System.out.println();
        }
    }
}
```

---------- END SOURCE ----------
(Review ID: 143928)
======================================================================

CONVERTED DATA
BugTraq+ Release Management Values
COMMIT TO FIX: mustang

EVALUATION
The implementation for this feature would most likely require use of DGA 2.0, which is implemented in XFree86 4.x. Basically, it has almost all the functionality we need. One of the caveats is: "XDGAOpenFramebuffer() maps the framebuffer memory. The client needs sufficient privileges to be able to do this." So it looks like, unfortunately, it still requires privileged access to use direct access, and the driver support is rather limited. Another possibility is to use the VidMode extension.
###@###.### 2002-10-03

We will be using the VidMode extension, as that is the accepted approach to doing display mode switching on X11 (used by games such as Quake and many media-playing apps). This mechanism will work equally well with both the X11 and OGL-based pipelines. The VidMode extension is only responsible for querying and switching between display modes. The only display modes available to the application are those listed in the XF86Config file (or xorg.conf, in the case of Xorg).
We have to employ a few tricks in order to make it appear as if the fullscreen window is in "fullscreen exclusive mode", as there is no such concept on X11, even with the VidMode extension. The biggest problem is that XFree86 automatically pans the desktop if you switch to a display mode that is larger than the "viewport" setting in the XF86Config file. So if we simply created a window the size of the new display mode, if one moves the mouse cursor to the edges of that window, the desktop would pan to reveal the rest of the virtual desktop. There is no way to disable this behavior in XFree86, so we have no choice but to workaround it. One way is to do an XGrabPointer and XGrabKeyboard so that all mouse and keyboard events are confined to the active fullscreen window. This approach makes it difficult to trigger the auto-panning behavior and actually works fairly well. This is the same approach used in other fullscreen applications on Linux, such as Quake. There are a few restrictions for using the fullscreen and display mode APIs on Linux. It will only be available if: - the VidMode extension is available - the VidMode function symbols can be dynamically loaded (*) - Xinerama is not enabled (there are potentially bad interactions between these two extensions) - you are on the primary screen (in a multiscreen environment) The last two restrictions are there to prevent any major problems in multiscreen configurations. It's too difficult and error-prone to support these configurations at this time, which shouldn't be much of an issue since multiscreen Linux configs are relatively rare. (*) XFree86 only ships with .a (static) version of the libXxf86vm library, but we attempt to dlopen the .so (shared) version of that library. This shouldn't be a problem by the time Mustang ships since most people should be using the Xorg server by that point, and the Xorg server includes the .so library. 
But for legacy configurations, there is a workaround to create a .so from the .a:

```
% cd /usr/X11R6/lib
% ld --whole-archive -shared -o libXxf86vm.so.1 libXxf86vm.a
% ln -s libXxf86vm.so.1 libXxf86vm.so
% /sbin/ldconfig
```

So we just need to document that issue in the release notes... It's not pretty, but it's better than having a build time dependency on Linux only, and it will allow this stuff to work on Solaris (x86 and sparc) once the Xorg server is included on those platforms.
###@###.### 2004-08-26

While investigating this RFE, a couple of issues have surfaced that should be addressed in the Mustang timeframe (outside the scope of this RFE, but important to have fixed):

4941351: OGL: detect pbuffer clobber events
This is important because pbuffer clobber events are more likely to occur now that it is possible for an app to change the display mode (which tends to invalidate VRAM resources). The current display mode switching implementation is prepared to restore managed images and VolatileImages in the event of a display change initiated by the current JVM process. However, a fix for 4941351 is necessary for the OGL pipeline to restore pbuffer-based surfaces in the event of a display change initiated by another process (or the operating system).

5094347: clarify GraphicsDevice.setDisplayMode() spec regarding BIT_DEPTH_MULTI
This is important because existing fullscreen apps might be making some bad assumptions about the meaning of BIT_DEPTH_MULTI (which is used on Linux/Solaris), and we should update the spec/implementation to account for those issues.
###@###.### 2004-08-30

Just a quick update on the progress of this RFE... While the code has been ready for a couple of months, there was a prolonged internal review that caused a delay in its integration into Mustang. Now that the review has been completed, I'll sync up the code with the latest Mustang sources and do some final testing.
That way we can get this into an upcoming Mustang snapshot and get some feedback from developers. ###@###.### 2004-12-21 23:30:14 GMT The fix has finally been putback to the 2D workspace and should appear soon in Mustang b39 (if all goes well). ###@###.### 2005-05-18 22:53:02 GMT
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4661156
The weaver (also code weaver or enhancer) is the component that transforms and recompiles user code before it is loaded in the database. The weaver allows developers to write plain, ordinary business-focused source code in their language of choice and transparently enjoy the power of the Starcounter database engine.

For example, the simplest transformation might look something like this:

Before:

```csharp
[Database]
public class Person
{
    public string Name { get; set; }
}
```

After:

```csharp
[Database]
public class Person
{
    public string Name
    {
        get { return Db.ReadString(123); }
        set { Db.WriteString(123, value); }
    }
}
```

One of the design criteria of the weaver was to make it opaque to developers. Just as you don't have to care about the inner transformation details of some high-level language feature, such as C# delegates, where the compilers and the common language runtime (CLR) do a lot of behind-the-scenes processing to hide the complexity, the weaver also handles transformation of user code in the background and on the fly. As a developer, you should generally never have to see or care about this transformation.

Starcounter applications can be started in several ways:

- Using F5 in Visual Studio.
- Using star.exe from a command prompt.
- From Starcounter Administrator.

Behind the scenes, the actual bootstrapping of the application is done via the Starcounter server. Part of this bootstrapping is weaving. The server hands the path to the compiled executable to the weaver, and the weaver processes it. If weaving succeeds, the transformed code is loaded in the Starcounter database. If it fails, the relevant error(s) are reported back to the agent that tried to start the executable (e.g. Visual Studio) and the start attempt is cancelled.

When the weaver is given the path to the compiled executable, it checks that it can resolve and process all dependencies of the executable.
By default, the weaver considers the following files to be dependencies of the executable:

- all .dll files in the same folder as the executable
- all .NET assembly references of the executable (recursively) that reference Starcounter.dll and are not considered part of the Starcounter installation (i.e. Starcounter system files are ignored)

The weaver will try to process (load and transform) any file considered a dependency of the given executable. In certain cases, you might want to exclude files from being processed by the weaver. If, for example:

- you have a native library deployed in the same directory as the executable
- you have a strong-named assembly in the same directory as the executable
- you have a file that you know doesn't need any transformation, either because it does not use any database classes or because it does not even reference Starcounter, and you want the weaver to perform faster.

To instruct the weaver to exclude files:

1. Create a plain text file in the same directory as your executable.
2. Name it weaver.ignore (without any other extension).
3. Specify the name of each file on a single line in the file.

A simple weaver.ignore file:

```
foo.dll
bar.dll
```

Regular expressions are allowed and are matched according to the following pattern:

```csharp
new Regex("^" + specification.Replace(".", "\\.").Replace("?", ".").Replace("*", ".*"), RegexOptions.IgnoreCase);
```

If you are using Visual Studio to build your application, you can add the weaver.ignore file to the same project that builds your executable and instruct Visual Studio to copy it to the output directory on every build:

1. Right-click the project in the Solution Explorer window and select Add... and then New Item..., or alternatively select PROJECT | Add New Item... from the main menu.
2. In the Add New Item dialog window, select C# and General and then Text File.
3. Name the file weaver.ignore and click OK.
4. In the Solution Explorer, locate the weaver.ignore file. Right-click it and select Properties.
In the Properties dialog window, set the Copy To Output Directory property to Copy if newer. The next time you build, the weaver.ignore file will end up next to your compiled executable and will thereby be found by the weaver the next time you run the application in Starcounter.

In some situations, weaving fails. Failures normally fall into one of two categories:

1. The application contains some feature Starcounter does not support yet.
2. The application contains, or references some code that contains, a reference that cannot be resolved or properly analyzed.

The first category of errors is generally easier to resolve, and the error information we can provide to you as a developer is often quite concise. As an example, if you define a private database class in an application targeting Starcounter 2.x, you'll get a clear message informing you that this is not supported, along with the identity of the class that was private. The way to resolve it is to make the class public.

For the second category, or for any error that does not include a specific error condition, detecting what is actually wrong can be harder. As an example, dependency resolution failures can occur deep in a long chain of dependencies, and hence are not trivial to fully comprehend. The resolution is normally to exclude some file that is part of your application from being weaved/analyzed, as described above in "How to exclude a file from being processed". But how should you know what file you need to exclude?

One way to diagnose any failing application is to invoke the weaver in isolation. This is done by executing the weaver executable from a command-line environment. It's in the PATH, so all you have to do is:

1. Open a command prompt.
2. Run scweaver path/to/your/app.exe.

The effect is that the weaver will run, trying to analyze and weave (i.e. transform) your application into the form Starcounter uses to load it.
Weaving in isolation like this does not load your application, though, and the output produced ends up in a .starcounter directory next to your executable. We can get full diagnostics by running:

    scweaver --nocache --verbosity=diagnostic [app.exe]

To solve this exception, try to follow these two steps:

1. Set the weaver.ignore file's "Copy to output directory" property to "Copy always".
2. Make sure there are no blank lines at the beginning or end of weaver.ignore.

More details can be found in Starcounter/Home#31.

There's no obvious solution to this exception. Reading Troubleshooting Weaver Failures might be helpful. You can also take a look at Starcounter/Home#88, Starcounter/Home#87, and Starcounter/Home#166 for more information.
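For readers who want to experiment with the weaver.ignore matching rules outside Starcounter, the wildcard-to-regex translation shown in the C# snippet above can be reproduced with the same three replace steps in any language. The following standalone Java sketch (illustrative only - Starcounter itself is C#/.NET, and the file names below are hypothetical) applies the translation and tests two names:

```java
import java.util.regex.Pattern;

public class WeaverIgnoreMatch {
    // Same rewriting as the C# snippet: '.' is escaped, '?' matches one
    // character, '*' matches any run of characters; matching ignores case.
    static Pattern toPattern(String specification) {
        String regex = "^" + specification.replace(".", "\\.")
                                          .replace("?", ".")
                                          .replace("*", ".*");
        return Pattern.compile(regex, Pattern.CASE_INSENSITIVE);
    }

    public static void main(String[] args) {
        Pattern p = toPattern("foo*.dll");
        System.out.println(p.matcher("FOO123.DLL").find()); // true (case-insensitive)
        System.out.println(p.matcher("bar.dll").find());    // false (no 'foo' prefix)
    }
}
```

Note that, like the original C# Regex.IsMatch, the pattern is anchored only at the start, so a specification such as foo*.dll would also match foo.dll.bak.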
https://docs.starcounter.io/v/2.3.2/guides/working-with-starcounter/weaver/
CC-MAIN-2020-34
refinedweb
1,078
56.76
In reference to: Associating Same Child Record to Different Parent Records Through Script

"A record that contains identical values to the record you have created already exists. If you would like to enter a new record, please ensure that the field values are unique. (SBL-DAT-00381)"

The above message is displayed when trying to create a new record, entering values and clicking the SAVE button in the Address MVG applet. I need to associate an address record automatically to a contact record if the address record already exists in the database. If not, I need to create a new address. The following code is written on the Address MVG BC in the PreWriteRecord event, but the association is not happening. Could anyone provide your inputs, please?

    function BusComp_PreWriteRecord ()
    {
        try
        {
            var staddr = this.GetFieldValue("Street Address");
            var cit = this.GetFieldValue("City");
            var stat = this.GetFieldValue("State");
            var zip = this.GetFieldValue("Postal Code");
            var addrtyp = this.GetFieldValue("Address Type");
            var obo = TheApplication().GetBusObject("Contact");
            var obc = obo.GetBusComp("CUT Address");
            obc.ClearToQuery();
            obc.SetViewMode(AllView);
            var searexpr = "[Street Address] = 'staddr' AND [City] = 'cit' AND [State]= 'stat' AND [Postal Code]= 'zip' AND [Address Type]= 'addrtyp'";
            obc.SetSearchExpr(searexpr);
            obc.ExecuteQuery(ForwardOnly);
            //var Recfoun = obc.FirstRecord();
            if (FirstRecord())
            {
                var assbc = this.GetAssocBusComp();
                associate();
            }
            else
            {
                return (ContinueOperation);
            }
        }
        catch (e)
        {
            throw(e);
        }
        finally
        {
            searexpr = null;
            assbc = null;
            obc = null;
            obo = null;
            addrtyp = null;
            zip = null;
            stat = null;
            cit = null;
            staddr = null;
        }
    }

I'm facing the same issue and I tried implementing the solution given by you. The problem I'm facing here is that the Contact is new, and hence when I try to create a new instance of the Contact BO and find the record there, I don't get the record.

@su8careers: How did you finally resolve this issue? (If you still remember)
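One thing a reader may notice in the snippet above: the search expression places the variable names themselves inside the quotes ('staddr', 'cit', ...), so the query text contains those literal words rather than the field values, which have to be concatenated in. The general difference can be illustrated in standalone Java (the original code is Siebel eScript; the address value below is made up):

```java
public class SearchExprDemo {
    public static void main(String[] args) {
        String staddr = "10 Main St";

        // Variable name inside the quotes: the literal text "staddr" ends up in the query.
        String literal = "[Street Address] = 'staddr'";

        // Value concatenated in: the contents of the variable end up in the query.
        String concatenated = "[Street Address] = '" + staddr + "'";

        System.out.println(literal);
        System.out.println(concatenated);
    }
}
```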
https://it.toolbox.com/question/creating-a-new-instance-of-contact-bo-112218
CC-MAIN-2019-47
refinedweb
296
50.73
String confusion

Maduranga Liyanage, posted May 31, 2005:

Please consider this code:

    String str1 = "Hello";
    String str2 = "World";
    String str3 = "HelloWorld";
    String str4 = "HelloWorld";
    String str5 = str1 + str2;
    int a = 1;
    int b = 2;
    int c = 3;
    int d = a+b;

In the main() method:

    str3==str4 // TRUE
    str4==str5 // FALSE
    (a+b)==c   // TRUE
    c==d       // TRUE

Can somebody please explain why "str4==str5" is FALSE and "c==d" is TRUE? I know "==" compares memory locations, so how come "c==d"? Doesn't the compiler put each integer value into a separate memory location? Or do the integers also go into a pool like strings? Please tell me why str3==str4 is TRUE but str4==str5 is FALSE, and why it is not the same concept with integers. Thanks a lot guys.

[ May 31, 2005: Message edited by: Barry Gaunt ]

Nischal Tanna, posted May 31, 2005:

Try the following code:

    public class Test {
        public static void main(String[] ag) {
            String str1 = "Hello";
            String str2 = "World";
            String str3 = "HelloWorld";
            String str4 = "HelloWorld";
            // creates a new String but doesn't guarantee an existing String from the pool
            String str5 = str1 + str2;
            // use of intern forces the JVM to get an existing String literal from the String pool
            String str6 = (str1 + str2).intern();
            System.out.println((str3==str4));
            System.out.println((str3==str5));
            System.out.println((str3==str6));
        }
    }

Also, if you use == with int operands, it compares the values and not references. Thnx

fred rosenberger, posted May 31, 2005:

I would say it's doing the same thing in both cases: we are comparing the value of what's stored in the variable. For the int variables, it's pretty obvious what's in there, and thus they are equal. But for any kind of object, it gets a little trickier.
What is basically stored in an object reference is the memory address of the object. You don't have access to that value - Java automatically 'de-references' the memory address for you, giving you the actual object when you need it. The '==' operator here will compare the variables to see if they have the same value - i.e. point to the same spot in memory. The compiler is smart enough to see that str3 and str4 refer to the same string literal, and thus assigns both to point to the same memory location. However, when you say str5 = str1 + str2; you are creating a new object. The compiler CAN'T tell that this will ultimately end up being the same as the literal. It creates a new object, at a new memory location, and str5 refers to there. So str5 refers to a different spot than str4, and the values stored in the variables are not equal. Did that make sense?

There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors.

Nischal Tanna, posted May 31, 2005:

Hi fred, to question my as well as your reply... does that mean there will be duplicate literal values in the String pool?

    str = "hello"
    str1 = "hel"
    str2 = "lo"
    str3 = str1+str2
    str == str3 // this will return false as we know

Thus, from the above, will there be two "hello" in the String pool?

fred rosenberger, posted May 31, 2005:

I'm a little shaky on the intricacies of the string pool, but here's how I THINK it works... The string pool only holds string LITERALS. So in your last example, there will be three strings in it: "hello", "hel", and "lo". When you have the line str3 = str1+str2; it creates a new OBJECT that happens to be a String-type object. This is created in a different space than the string pool.
The reason being that the three literals are determined by the compiler, but the addition of the two strings has to be done at runtime; therefore, it's handled differently. Like I said, I'm not POSITIVE on all this, but someone will correct me shortly if I am wrong.

Steven Bell, posted May 31, 2005:

I think fred has that about as close as it's going to get (if not right on). Runtime-created Strings are not placed into the String pool unless the intern() method is called on them. In the previous example, if you made it:

    str = "hello";
    str1 = "hel";
    str2 = "lo";
    str3 = str1+str2;
    str3.intern();
    str == str3; // this should be true now.

I didn't actually run and test this.

Maduranga Liyanage, posted May 31, 2005:

Thanks a lot for the reply, fellows - it helped a lot. Can somebody explain this paragraph that appears in the Sybex Java certification book:

"String S1 = new String("Hello"); When this line is compiled, the string literal "Hello" is placed into the pool. At runtime, the "new String()" statement is executed and a fresh instance of String is constructed, duplicating the String in the literal pool. Finally, a reference to the new String is assigned to S1."

Can somebody tell me why there are two copies of "Hello", as it says? Also, can you please tell me what happens at compile time and at run time in the above statement?

    String str = "Hello";

Doesn't this statement create a new String object? Thanks guys.

Tony Morris, posted May 31, 2005:

"String str = "Hello"; Doesn't this statement create a new String object?" Yes. The object is created at class load time. for more information and references.

fred rosenberger, posted May 31, 2005:

I think of it this way...
"String S1 = new String("Hello");" - the compiler can see the "Hello" literal, and puts it in the string pool. When the code is executed, we hit that "new" word. This says "hey, CREATE a BRAND NEW STRING object". The only place this new string object can be created is on the normal heap - we're done creating stuff in the string pool now. So we make this new object and 'seed' it with the value of that other string literal. It sort of boils down to what the compiler can figure out and what it can't. It's pretty dumb: it can detect literals and puts them in the string pool. However, anything with the "new" operator it leaves alone - it's not its job to figure out what to do with that keyword, so that gets handled at runtime.

Maduranga Liyanage, posted May 31, 2005:

    String Str1 = "Hello";
    String Str2 = "World";
    String Str3 = "HelloWorld";
    String Str4 = "Hello" + "World";
    String Str5 = Str1 + Str2;

We know Str3==Str5 is FALSE because the addition takes place at runtime. BUT Str3==Str4 is TRUE! Doesn't this addition take place at runtime too?

Steven Bell, posted May 31, 2005:

No, the compiler is smart enough to see that everything there is a compile-time constant and will resolve it at compile time. If you were to change it as such:

    String blah = "";
    String Str4 = "Hello" + "World" + blah;

you would not get Str4 == Str3 (you really shouldn't capitalize the start of variable names), even though Str4 has the exact same value. The difference is that it is theoretically possible for the value of 'blah' to be modified (as in pointing to a different String object) during runtime. In the case of 'blah' being a local method variable, I'm not sure if it really is possible, but if it were class-level, another thread could modify it between the lines. However, the compiler doesn't concern itself with things like that and just goes the 'safe' way of letting it be handled at runtime.
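The compile-time constant folding and runtime concatenation discussed in this thread can be verified with a short standalone program (not from the original thread; the variable names mirror the examples above):

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        String str1 = "Hello";
        String str2 = "World";
        String str3 = "HelloWorld";
        String str4 = "Hello" + "World"; // folded by the compiler -> pool literal
        String str5 = str1 + str2;       // built at runtime -> new heap object

        System.out.println(str3 == str4);          // true: same pooled literal
        System.out.println(str3 == str5);          // false: distinct object
        System.out.println(str3 == str5.intern()); // true: intern returns the pooled copy
    }
}
```

Note that intern() returns the pooled String rather than modifying the receiver, which is why the earlier str3.intern(); statement on its own (with the result discarded) would not make str == str3 true.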
amit taneja, posted Jun 02, 2005:

What if

    String str = "abc";
    str = "cde";

now will "abc" be lost from the string pool? Also, sometimes in this forum I don't see the page well aligned... sometimes a horizontal bar comes. Why? Thanks and Regards, Amit Taneja

thomas jacob, posted Jun 02, 2005:

I think these string basics will help us through the SCJP exam:

1) Strings are immutable once initialized.
2) Strings initialized at runtime are created as new objects in memory.
3) The == operator compares string reference values.
4) String Str = "Ja"+"va"; is initialized at compile time because the compiler can see the value assigned to Str and then matches it to similar values in the pool.
5) String Str = Str1+Str2; is initialized at runtime, and the JVM doesn't bother to match an existing similar string literal in the pool, so a new object is created.
6) String Str = Str1; assignment will straight away assign the Str1 object to Str, so no problems.

Hope this will help for the exam. Cheers, Jacob
http://www.coderanch.com/t/249347/java-programmer-SCJP/certification/String-confusion
CC-MAIN-2015-18
refinedweb
1,660
77.37
How can I solve java.lang.NoClassDefFoundError in Java?

If you are facing java.lang.NoClassDefFoundError, it means that the class loader, which is responsible for dynamically loading classes, cannot find the required .class file. To remove this error, set your classpath so that it includes the location where that class is present.

Hope it helps! If you need to know more about Java, join our Java certification course online. Thanks.

Hi, I'm currently using Eclipse. I can build and compile the project, but at runtime I get the following error:

    java.lang.NoClassDefFoundError: oracle/security/restsec/jwt/JwtToken

I have added the jar files accordingly, but the error still pops up.
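The answer above comes down to the class loader failing a lookup. NoClassDefFoundError itself is thrown when a class that was present at compile time is missing at run time, which is awkward to reproduce in a single file, but the underlying lookup failure can be shown with the closely related ClassNotFoundException (the class name below is deliberately bogus):

```java
public class MissingClassDemo {
    public static void main(String[] args) {
        try {
            // Ask the class loader for a class that is not on the classpath.
            Class.forName("com.example.DoesNotExist");
        } catch (ClassNotFoundException e) {
            System.out.println("class loader could not find: " + e.getMessage());
        }
    }
}
```

The practical difference: ClassNotFoundException is a checked exception from an explicit lookup such as Class.forName, while NoClassDefFoundError is an Error raised when the JVM's implicit loading of a compiled-in reference fails - but both mean the same thing operationally: the class is not on the classpath the JVM is actually using.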
https://www.edureka.co/community/4477/how-can-i-solve-java-lang-noclassdeffounderror-in-java?show=4480
CC-MAIN-2022-33
refinedweb
246
63.25
I'm creating a program using a generic array that allows the user to define the capacity or use a default capacity. I'm running into problems with casting, and with "method does not exist" errors when I try to store data into the array. Here is my constructor and methods class:

    public class ArrayListADT <E> {
        public E[] element;
        private int count;
        private final int capacity = 20;
        public int cap;

        public ArrayListADT() {
            count = 0;
            element = (E[]) new Object[capacity];
        }

        public ArrayListADT(int cap) {
            count = 0;
            element = (E[]) new Object[cap];
        }

        public void resize(int k) {
            E[] ele = (E[]) new Object[k];
            for (int i = 0; i < element.length; i++) {
                ele[i] = element[i];
            }
            element = ele;
            this.cap = k;
        }
    }

This is my main class:

    public class ArrayListADTTest {
        public static void main(String[] args) {
            ArrayListADT<String> ArrayString = new ArrayListADT<String>(4);
            ArrayListADT<Integer> ArrayInt = new ArrayListADT<Integer>();
            ArrayString.add("Sky");
            ArrayString.resize(25);
        }
    }

According to my teacher, I should just use add, but the compiler says it does not exist:

    ArrayString.add("Sky");

I was also trying to do something like this:

    ArrayListADT<String[]> ArrayString = new ArrayListADT<String[]>(4);
    ArrayListADT<Integer[]> ArrayInt = new ArrayListADT<Integer[]>();
    ArrayString = {"clouder"};
    ArrayString.resize(25);

which gives me cast errors. How can I store the data into the generic array? Thank you so much.
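For comparison, here is one way a class like the one in the question could be completed so that add exists. This is a sketch, not the assignment's required API - the add/get/size signatures are assumptions, and the class is renamed to avoid confusion with the original:

```java
import java.util.Arrays;

public class ArrayListSketch<E> {
    private Object[] element; // backing store; casts are confined to get()
    private int count;

    public ArrayListSketch()        { this(20); }              // default capacity
    public ArrayListSketch(int cap) { element = new Object[cap]; }

    // Append an item, growing the backing array when it is full.
    public void add(E item) {
        if (count == element.length) {
            resize(element.length * 2);
        }
        element[count++] = item;
    }

    @SuppressWarnings("unchecked")
    public E get(int i) { return (E) element[i]; }

    public int size() { return count; }

    // Arrays.copyOf truncates or pads as needed, which avoids the
    // out-of-bounds copy the original loop would hit when shrinking.
    public void resize(int k) { element = Arrays.copyOf(element, k); }

    public static void main(String[] args) {
        ArrayListSketch<String> a = new ArrayListSketch<>(4);
        a.add("Sky");
        a.add("clouder");
        a.resize(25);
        System.out.println(a.get(0) + " " + a.get(1) + " " + a.size());
    }
}
```

The `ArrayString = {"clouder"};` attempt fails because array-initializer syntax only applies to actual array types, not to a class that happens to wrap an array.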
https://www.daniweb.com/programming/software-development/threads/408973/question-about-generic-array
CC-MAIN-2018-39
refinedweb
216
51.28
MADDE/GTK Example

Introduction

This example requires a working configuration of MADDE. At least one target has to be installed and set as 'default'. The basic steps to do this are described in this guide.

We want to create a GTK helloworld application, so we can use the "simple" template out of the list with small modifications. The following command creates a project named "helloworld" using the "simple" template:

    mad pscreate -t simple helloworld

Afterwards the skeleton is created with the following subdirectories:

- src: place for source files
- debian: configuration files, which are needed to create the package later on
- uis: empty folder for UI files, which would be needed with Qt applications

Further, it contains a file named "prog.pro", which is a project file needed to create the Makefile with qmake.

Let's create a small GTK helloworld application hello.c in the src directory:

    #include <gtk/gtk.h>

    static void destroy( GtkWidget *widget, gpointer data )
    {
        gtk_main_quit ();
    }

    int main( int argc, char *argv[] )
    {
        GtkWidget *window;
        GtkWidget *label;

        gtk_init (&argc, &argv);

        label = gtk_label_new("Hello world");
        window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
        g_signal_connect (G_OBJECT (window), "destroy", G_CALLBACK (destroy), NULL);
        gtk_container_add (GTK_CONTAINER (window), label);
        gtk_widget_show (window);
        gtk_widget_show (label);
        gtk_main ();
        return 0;
    }

Compile & build

Before we can compile the GTK application, we have to modify the prog.pro file. Edit the SOURCES += and HEADERS += lines so they contain the source files in the project; in our case we need only the following line:

    SOURCES += hello.c

For GTK applications we want to use pkg-config to help compile and link against the correct libraries. For our example we add the following lines:

    CONFIG += link_pkgconfig
    PKGCONFIG += gtk+-2.0

Now we are ready to build our application. The following steps describe how to compile and build the project.

1. Create the Makefile:

    cd helloworld
    mad qmake

2.
Compile the project:

    mad make

Now the executable program helloworld is created in the helloworld/build directory.

3. Create a debian package:

    mad dpkg-buildpackage

This command will build the project and make a debian package called helloworld_0.1_armel.deb. The debian package will be created in the project's parent directory.

- This page was last modified on 22 April 2010, at 11:56.
- This page has been accessed 3,594 times.
http://wiki.maemo.org/MADDE/GTK_Example
CC-MAIN-2017-26
refinedweb
364
55.64
Unix-style Fortune teller text display on LCD

Dependencies: 4DGL-uLCD-SE SDFileSystem mbed

main.cpp - Committer: alexcrepory - Date: 2015-03-09 - Revision: 1:4d5e6b8edd00 - Parent: 0:672a66c015ca - Child: 2:7507d0c0e509

File content as of revision 1:4d5e6b8edd00:

    #include "mbed.h"
    #include "SDFileSystem.h"
    #include "uLCD_4DGL.h"

    SDFileSystem sd(p11, p12, p13, p14, "sd"); // sd card
    DigitalIn pb(p21);                         // pushbutton
    uLCD_4DGL uLCD(p9,p10,p8);                 // serial tx, serial rx, reset pin

    int main()
    {
        char buffer[300]; // buffer to store the quotation
        float rando = 0;  // variable responsible to receive the random value
        uLCD.cls();       // clear the screen
        printf("Hello\n"); // check the connection with the computer
        mkdir("/sd/mydir", 0777); // create a folder called mydir
        // Create the file with the quotations
        FILE *fp = fopen("/sd/mydir/sdtest.txt", "w");
        if (fp == NULL) {
            error("Could not open file for write\n");
        }
        printf("Press the button\n"); // lets you know when the file is created
        while (true) {
            if (pb == 1) {
                // open the file to be read
                FILE *ft = fopen("/sd/mydir/sdtest.txt", "r+");
                if (ft == NULL) {
                    error("Could not open file for write\n");
                }
                rando = rand()%10+1; // random value
                for (int i = 0; i < rando; i++) {
                    // copy to buffer the quotation of the random value
                    fgets(buffer, 300, ft);
                }
                uLCD.cls();
                uLCD.printf("%s\n", buffer); // prints the string on the screen
                printf("%s\n", buffer);
                fclose(ft);
                wait(0.2);
            }
        }
    }

(Note: the printf("Press the button\n"); statement is reconstructed; the scraped listing had lost the function name, leaving only the string and its comment.)
https://os.mbed.com/users/alexcrepory/code/4180Lab4/file/4d5e6b8edd00/main.cpp/
CC-MAIN-2021-10
refinedweb
229
52.09
If a library is namespaced correctly, it can define types and methods in its API which have the same names as those in another library, and a program can use both without conflicts. This is achieved by prefixing all types and method names with a namespace unique to the library.

The following conventions are used in all GLib-based projects, so they should be familiar to a lot of developers:

- Functions should use lower_case_with_underscores.
- Structures, types and objects should use CamelCaseWithoutUnderscores.
- Macros and constants should use UPPER_CASE_WITH_UNDERSCORES.
- All symbols should be prefixed with a short (2-4 characters) version of the namespace. This is shortened purely for ease of typing, but should still be unique.
- All methods of a class should also be prefixed with the class name.

Additionally, public headers should be included from a subdirectory, effectively namespacing the header files. For example, instead of #include <abc.h>, a project should allow its users to use #include <namespace/abc.h>. Some projects namespace their headers within this subdirectory - for example, #include <namespace/ns-abc.h> instead of #include <namespace/abc.h>. This is redundant, but harmless.

For example, for a project called 'Walbottle', the short namespace 'Wbl' would be chosen. If it has a 'schema' class and a 'writer' class, it would install these headers:

    $(includedir)/walbottle-$API_MAJOR/walbottle/schema.h
    $(includedir)/walbottle-$API_MAJOR/walbottle/writer.h

(The use of $API_MAJOR above is for parallel installability.)

For the schema class, the following symbols would be exported (amongst others), following GObject conventions:

- WblSchema structure
- WblSchemaClass structure
- WBL_TYPE_SCHEMA macro
- WBL_IS_SCHEMA macro
- wbl_schema_get_type function
- wbl_schema_new function
- wbl_schema_load_from_data function
https://developer.gnome.org/programming-guidelines/unstable/namespacing.html.en
CC-MAIN-2018-05
refinedweb
260
57.98
clock_getcpuclockid()

Return the clock ID of the CPU-time clock of a specified process

Synopsis:

    #include <sys/types.h>
    #include <time.h>
    #include <errno.h>

    extern int clock_getcpuclockid( pid_t pid, clockid_t* clock_id );

Since: BlackBerry 10.0.0

This function gets the clock ID of the CPU-time clock of the process specified by pid and stores it in the object pointed to by clock_id. The CPU-time clock represents the amount of time the process has spent actually running. If pid is zero, the clock ID of the CPU-time clock of the process making the call is returned in clock_id. A process always has permission to obtain the CPU-time clock ID of another process.

Returns:

- 0 - Success.
- ESRCH - No process can be found corresponding to the specified pid.

Last modified: 2014-06-24
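The distinction this page draws - CPU time as "time the process has spent actually running", as opposed to wall-clock time - exists in other runtimes too. As an illustration (unrelated to the QNX/BlackBerry API itself), Java exposes a per-thread CPU clock through ThreadMXBean:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuClockDemo {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        boolean supported = bean.isCurrentThreadCpuTimeSupported();

        long wallStart = System.nanoTime();
        long cpuStart = supported ? bean.getCurrentThreadCpuTime() : 0;

        long sink = 0;
        for (int i = 0; i < 5_000_000; i++) sink += i; // do some actual work

        long wallUsed = System.nanoTime() - wallStart;
        long cpuUsed = supported ? bean.getCurrentThreadCpuTime() - cpuStart : 0;

        // Wall time always advances; CPU time only accumulates while this thread runs.
        System.out.println(wallUsed > 0 && sink != 0);
        System.out.println(cpuUsed >= 0);
    }
}
```

On a loaded machine the two clocks can diverge sharply: a thread that is descheduled keeps accruing wall time but not CPU time, which is exactly the property the POSIX CPU-time clock is designed to expose.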
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/c/clock_getcpuclockid.html
CC-MAIN-2016-07
refinedweb
151
69.28
Hi, I have successfully created a mailto link that opens the email client with the correct email and subject, but for reasons I cannot figure out it refuses to pull in the body content. Can anyone help? Does Wix somehow block the body content?

    import wixLocation from 'wix-location';

    $w.onReady(function () {
        $w('#text3').onClick(function () {
            wixLocation.to("mailto:?subject=stay at home!&body=hello")
        })
    })

Following! Copied your code and put it on my site (I'm in the process of adding and coding a bunch of different share links myself) and I have the same problem. I have tried every combo I can think of, but it either inserts the body message into the subject line or the body message just doesn't appear at all... I will keep trying and let you know what I discover!

Keep me posted if you find a solution!

Will do. I hope one of us figures it out soon.

Hi, currently the only option is adding the email address and the subject, as you can see in the documentation here. Other parameters within the "mailto" are not supported. In order to add those parameters, you can use an iFrame. Best, Tal.

OK, I came to the same conclusion and embedded the mailto in an iframe. Do you know if there is a way to do it with JavaScript so that an HTML-coded email can be sent in the body, through the default email client?

Hey, after checking Stackoverflow, it seems like it's impossible. Best, Tal.

OK, thanks for all your help Tal!

Yes, I came to the same conclusion last night, but wasn't certain. Thank you Tal for confirming!

I also successfully managed to create an email share button using an HTML iframe borrowing from... However... I want to change the image to either one of the images in my media folder or one from the free Wix photo folder... How do I find the correct image URL to change the image?
    <a href="mailto:?Subject=Simple Share Buttons&Body=I%20saw%20this%20and%20thought%20of%20you!%20">
        <img src="" alt="Email" />
    </a>

Hi Ashley, you can host your image at and change your src link accordingly.

Following...

UPDATE: I successfully created a share-to-email link using an iframe. But now it's come to my attention that it works on desktop, but not on mobile. The strange thing is, when I go to mobile view in the editor's preview, the share-to-email link works! But when I go to the live mobile site, it doesn't; it merely registers as an image with the option to "copy." Any thoughts?

HTML iframe code:

(Also, ultimately, I want to share a dynamic page link to email - if anyone has an update or helpful suggestions on that subject, that would be wonderful!)
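One platform-independent detail in the mailto links discussed above: the subject and body parameters should be percent-encoded, as the Simple Share Buttons snippet does with %20. The encoding step can be sketched in standalone Java (illustrative only - the Wix/Corvid code itself is JavaScript; the subject and body strings are taken from the first post, with a made-up body):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class MailtoDemo {
    public static void main(String[] args) throws Exception {
        // URLEncoder targets form encoding, where spaces become '+';
        // mail clients expect %20, so swap them afterwards.
        String subject = encode("stay at home!");
        String body = encode("hello there");
        System.out.println("mailto:?subject=" + subject + "&body=" + body);
    }

    static String encode(String s) throws Exception {
        return URLEncoder.encode(s, StandardCharsets.UTF_8.name())
                         .replace("+", "%20");
    }
}
```

An unencoded space or '&' in the subject is one common way the body text ends up merged into the subject line, as reported earlier in the thread.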
https://www.wix.com/corvid/forum/community-discussion/create-mailto-link-to-pull-in-body-into-email
Hello, I'm still a huge beginner in C++, and while I spent hours trying to find a solution to this problem I came to this forum pretty often, so I thought it'd be a good idea to ask about my problem here. (If you notice anything that can be done better I'd be more than happy to hear any criticism, advice, or lesson about coding.)

I just wanted to try a little project on my own outside of my C++ class: a timer for myself (since I always end up losing my sense of time on the computer haha). Everything was going fine until I wanted to make snooze and shutdown available while my program continuously plays a beep (sort of like an alarm clock). Obviously I'm stuck at being able to input something while the beep is going on. I don't even know if it's possible, since if I do cin>> in the loop it'll only beep once before it waits for input, and if I don't it'll just never stop looping. Line 40 and down is me trying to use getch() (I read up about inputting without the prompting of enter, so I was hoping it would somehow work) but it doesn't work the way I want it to.

Edit: I use Dev-C++ btw :P (If there's a better program/compiler out there I'd also be glad to hear about that.)

#include <iostream>
#include <time.h>
#include <windows.h>
#include <stdlib.h>
#include <conio.h>

using namespace std;

int main()
{
    int num;   // number inputted by the user for the time
    int input; // number inputted by the user in order to make his selection

    cout<<"Please choose what you would like your countdown in.\n"; // Whether the user wants their time in minutes or seconds.
    cout<<"1.Seconds\n";
    cout<<"2.Minutes\n";
    cin>>input; // The user inputs his selection for the countdown.
    Beep(1450,300);
    system("cls");

    switch (input) {
    case 1: // The user has chosen his countdown in seconds (1000 ms countdown)
        cout<<"Please input the number of seconds to initiate the countdown.\n"; // Input by the user
        cin>>num;
        system("cls");
        if (num>0) // If the user follows the instructions correctly.
        {
            while (num>0) // Literal countdown sequence
            {
                cout<<num<<"\n";
                num--;
                Beep(1450,100);
                Sleep(900);
                system("cls");
            }
            while (num==0) // When the countdown reaches 0
            {
                cout<<"*ALARM*\n";
                Beep(1450,100);
                if (kbhit())  // a key is waiting, so getch() won't block here
                {
                    num = getch();
                    cout<<"Shutting down alarm...";
                    Sleep(3000);
                    return 0;
                }
            }
        }
        break;
    }
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/269274/input-during-a-continuous-loop
Tutorial: React Router: Optional Parameters

React Router makes it easy to use parameters, but these are mandatory by default. While this is enough for most cases, there are some situations where optional parameters would be beneficial.

Creating a Route With an Optional Parameter

As with regular parameters, declaring an optional parameter is just a matter of adjusting the path property of a Route; any parameter that ends with a question mark will be treated as optional:

class App extends Component {
  render() {
    return (
      <BrowserRouter>
        <div>
          {/* Optional parameters are defined by placing a question mark at the end of a parameter. */}
          {/* In this case, `line` is an optional parameter. */}
          {/* Optional parameters will be passed to the component, but will be undefined if not present in the URL. */}
          <Route path="/Lyrics/:song/:line?" component={LyricSheet}/>
        </div>
      </BrowserRouter>
    );
  }
}

The route will be rendered if it matches the path, with or without the optional parameter. So '/Lyrics/Spoonman' and '/Lyrics/Spoonman/3' would both be accepted.

Using Optional Parameters

Optional parameters are passed alongside mandatory ones as props. But if they're not in the URL, they'll be undefined:

export default function LyricSheet({ match }) {
  const { line, song } = match.params;

  // Since `line` is an optional parameter, it'll either be undefined or a string.
  const lineNumber = line ? parseInt(line, 10) : -1;

  // Get the lyrics to the song.
  const lyrics = getLyrics(song)
    // Map the lyrics to components. If `index` is `lineNumber`, set `highlight` to `true`.
    .map((lyric, index) => (
      <LyricLine highlight={index === lineNumber} key={index} lyric={lyric} />
    ));

  return (
    <div>{lyrics}</div>
  );
}

The component will be rendered to show the lyrics of song. If the optional parameter line is defined, that line will be highlighted. You've probably seen something like this if you've flipped through a file on GitHub.
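Under the hood, the question mark simply makes that trailing segment optional in the matching pattern. The following is a rough, hand-rolled sketch of that matching behavior in plain JavaScript — not React Router's actual implementation, and matchLyricsPath is an illustrative name:

```javascript
// Hand-rolled sketch of matching "/Lyrics/:song/:line?".
// `line` is the optional segment: a string when present, undefined when absent.
function matchLyricsPath(pathname) {
  const m = pathname.match(/^\/Lyrics\/([^/]+)(?:\/([^/]+))?$/);
  if (!m) return null;
  return { song: m[1], line: m[2] };
}

console.log(matchLyricsPath("/Lyrics/Spoonman"));   // song "Spoonman", line undefined
console.log(matchLyricsPath("/Lyrics/Spoonman/3")); // song "Spoonman", line "3"
console.log(matchLyricsPath("/Other"));             // null
```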
If you’d like to learn more about React, take a look at our How To Code in React.js series, or check out our React topic page for exercises and programming projects.
https://www.digitalocean.com/community/tutorials/react-react-router-optional-parameters
Microsoft's pre-beta version of its Visual Studio .NET platform, "Whidbey", is offering a trove of new simplified tools and features that should make developers' jobs easier, while giving Microsoft critics new fodder, attendees at the Professional Developers Conference here said. In the meantime, Microsoft's Developer Tools Roadmap 2004-2005 was posted, along with the new CLR Profiler, which allows developers to see the allocation profile of their managed applications.

Microsoft's Whidbey: Something For Everyone 2003-10-30 .NET 27 Comments

is a fine thing. However it is something strange from time to time. For instance, earlier this day, after I had breakfast, I went on the computer to programm on a programm (this pun!:) with the Visual Studio of Microsoft. But then I wanted to additional Project to solution and it said vc classes not registred. I would not find workaround even in msdn help website, so I had to reinstall whole thing (Took quite some time it did!) it does it sometimes. so I overall give it

VStudio .NET 00-00-00-00-00
gui design: 3/5 (like borland tools better)
speed: 4/5 runs fairely decent on my p3 500 notebook toschiba
stability: 3/5 (see at above)
0===============================0
complete 3.5/5

@chief editor: I will write exhausting review as article if you wish

Theo

I want a tool which writes the user's guide 🙂 All of these M$ tools are nice but they will never be able to be better than the ultimate programming tool! VIM!

"All of these M$ tools are nice but they will never be able to be better than the ultimate programming tool! VIM!"

I agree, but you can use VIM in VisualStudio.

Forget about "Vim". Let us create new tools, modern tools, powerful tools. Let us not rely on relics of the past like text editors such as Vim. Technology should be about advancement, continual advancement, not basking in the achievements of yesteryear.

what is a vim? can you give me the url to him? –theo

"Let us create new tools, modern tools, powerful tools."
That's what Bram Moolenaar made: VIM is a new, modern, powerful tool, and leads the world of editing. But, according to KISS, it's just a text editor; that's why it should be used inside an IDE, or inside a great OS like emacs.

@Theodor Waigel Welcome in the world of true men ==>

More modern does not necessarily mean better. If it did, "modern" languages like Java and C# wouldn't still be playing catch-up to ones that came decades before. "Modern" file management paradigms wouldn't be far less efficient than good old cp and mv. "Modern" OSs like NT wouldn't be less secure than ancient ones like UNIX. A good idea is a good idea, and good ideas are timeless. The inexorable march of progress is slowed greatly by all the crappy ideas littering the path. History repeats itself.

Besides, Vi (and Emacs and other CLI editors) have not stood still over the years. The basic model is the same, but that is because it is the most optimal one we have given the limitations of our hardware (the keyboard is still the most high-bandwidth interface between human and computer). Over the years, tons of features and external scripts have been added, to the point where you can do interactive development, debugging, refactoring, code browsing, version control, etc, all without taking your hands off the keyboard, or breaking your concentration by trying to find things in a GUI.

Now, I'm not saying that the CLI is optimal for everything. I still use Konqueror for webpages, and use KMail for mail. But for tasks like these, the existing CLI tools are phenomenally more productive than their GUI counterparts.

MS has the best dev tools on the market, the .NET platform is the future for development. There are no other tools for developers on the market that can even compare.

if it ain't broke…. hey, if it works, why make a "better" solution? In my opinion, IDEs can be cumbersome for simple programs that kids do in school, and even for scripts.
there has to be a way to make an IDE that gives someone the power of the IDE and debugger, but the simplicity of an editor and GCC.

hmm….. nEdit + GCC does a pretty good job.

I have developed some of the best apps using MS Vis Dev studio. MS has geared the suite of applications towards an advanced programmer such as myself. I am pleased.

can't compare the men power month. — 😉

I have used visual studio. I had to patch the thing out of the box. When I first ran it I go linking errors on Hello World!

I have to fully agree with Paul: you never experienced REAL auto-complete functions until you tried IntelliJ!! And the refactoring, autogenerating of try-catch blocks with the right exceptions already filled in, etc, etc is really great. Also very intelligent is that it instantly tells you if you made a mistake so your program won't compile, or that a variable is initialised but never used in your program… There are really a thousand little things in that program which make it really nice for a java programmer. Indeed, using something else feels like going back in time… I must add: it doesn't have a GUI editor, but I use qt-designer in combination with which is also good for the basic layout.

Paul: the nice features you are talking about is exactly what VS Whidbey is all about! … and I'm pleased :o)

I have been using .NET and Windows 2003 for about 6 months now and I must say, though they are a vast improvement over past tools, they just don't seem to be that well thought out. Maybe it's a maturity thing, but I am constantly coming across problems I need to solve that should be instinctive in a modern 'programming' language/runtime. It seems that although MS have improved, they are still lacking a vision of a more 'standard' future. In the same way IE fails to bother supporting CSS properly, I feel that .NET fails to fully support up-and-coming technologies.
For example, ASP.NET forms are _NOT_ XHTML compatible, which is ridiculous (especially seeing that the XHTML spec has been around a while), and they also have compatibility problems with non-Javascript enabled browsers (still around 15% of the market according to stats). Also there is no support for SVG whatsoever, despite the much publicised 'full' xml support. Overall I've found ASP.NET great for the Dreamweaver and VB coders out there but full of headaches and compatibility problems for those of us who want to work to W3C standards, making most of the .NET functionality that would make it a great environment invalid. I think it's a shame; again MS can't be bothered to think ahead far enough, in fear of supporting standards that they might have to pull down to further themselves. It just makes my life as a developer so much less enjoyable.

The product is not supposed to support W3C standards, and that is intentional. When you develop with this product don't even try to be standards compliant unless it is a Microsoft-only standard. That's what you are supposed to be doing when using this product.

is a comparison between Whidbey and Xcode (the new IDE for OS X, successor to Project Builder). Would a comparison between Xcode and the current Visual Studio be more fair, as Whidbey is "pre-beta"?

"…advanced programmer such as myself." You're a funny guy, Deak!

""…advanced programmer such as myself." You're a funny guy, Deak!"

Deak isnt funny, hes 1337.

I for one have made a call to the MSDN customer service so they would include the whidbey and yukon alphas in our next shipment.

I don't understand some of the previous posts. Maybe you know I'm not a very big fan of microsoft, but some of the Visual Studio bashing is really over the line. I will agree there were some serious bugs in VS.NET 2002, but 2003 works fine. My comments come from a guy using the 2003 version; I've used 2002 but upgraded as soon as I found out it fixes a lot of the bugs I was running into.
"I have used visual studio. I had to patch the thing out of the box. When I first ran it I go linking errors on Hello World!"

How did you patch it? What was wrong? I've never heard something like this before. I know of many bugs but none like this. I think you are lying.

re: Disappointed with .NET

I agree with you, asp.NET webcontrols and W3C are a pain. But that doesn't have anything to do with visual studio the IDE. The way Visual studio handles webcontrols is pretty neat if you ask me. Also if you read the whidbey features (… ) you'll see that whidbey will have validation and accessibility checking.

I will agree that Eclipse is a great IDE (I haven't used IntelliJ), and that it indeed provides features that VS is lacking (auto-correcting by adding namespaces for example, CVS integration being another), but Visual Studio has things over them too, especially in a microsoft environment (source safe, sql server, …). All in all I think visual studio and sql server are two of the best things to come out of redmond. If whidbey lives up to its promises and proves stable and reliable enough it will be a great IDE; of course eclipse still has 'til around 2005 to improve as well (which it will). I for one can't wait to get my hands on one of the alphas.

MS has the best dev tools on the market, the .NET platform is the future for development. There are no other tools for developers on the market that can even compare.

I completely disagree. I think that .NET and Microsoft's VS IDE are fairly good tools, but I also think that limiting your options to one company or one tool is a lot like a carpenter that zealously limits his tool use to a square-mouthed shovel; not very effective for hammering nails.

"Deak isnt funny, hes 1337."

HAHAHAHAHAHA! They're right, you are funny guy.

Microsoft has nothing that I am interested in.
Their lack of security in their products is like a car company selling a car that looks good (not very good and not great) on the outside but is a Yugo underneath. Thanks, but I'll stick with non-MS solutions for everything.
https://www.osnews.com/story/4979/microsofts-whidbey-something-for-everyone/
Dennis E. Hamilton wrote in message ...
>That helped him get
>through to section 2.1 but things got tough around section 2.1.1 where there
>was no way to get through what was being talked about without more
>grounding. ("Argument," for Pete's sake!)

I find one thing that's really lacking is an intermediate-level book that goes through every concept, very carefully and simply, with lots of examples and explanations of how you use it.

Take, for example, the concept of the "namespace." It was extremely painful for me to learn this when I first encountered it (in C++) because it wasn't a beginner concept and the explanations I read (in Stroustrup's "The C++ Programming Language") were rather unclear. Then I had even more trouble adapting to Perl's namespace rules, which I still don't quite properly understand (e.g. dictionary passing using a namespace object).

This book would be very long, but also very useful. Most beginner books start you off with an intuitive or sketchily defined notion of what a variable actually is, which leads to confusion. New programmers typically confuse an object with the name that references it, for example.
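That last confusion — an object versus the name that references it — takes only a few lines of Python to demonstrate:

```python
# Two names, one object: assignment binds a name to an object, it does not copy.
a = [1, 2, 3]
b = a             # b now refers to the *same* list object as a
b.append(4)
print(a)          # [1, 2, 3, 4] -- the change made through b is visible through a
print(a is b)     # True -- one object, two names
```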
https://mail.python.org/pipermail/python-list/2000-March/038467.html
I have to implement the Rabin-Karp algorithm in Haskell. I think my code is right but it's not working for me. When I compile I get a "parse error on let" error.

Just wondering if anybody could look at the code and tell me what's wrong with it. I think it could be indentation but usually Haskell tells you. Thanks for any help!

import Data.Char

hash :: String -> Int
hash [] = -1
hash (x:xs) = ord x

rabinKarp :: String -> String -> Bool
rabinKarp [] _ = False
rabinKarp mainString patternString =                     -- the missing '=' here caused
    let hashPattern = hash patternString                 -- the "parse error on let"
        hashMain = hash (take (length patternString) mainString)
    in if hashPattern == hashMain                        -- a 'let' in an expression needs 'in'
        then True
        else rabinKarp (drop 1 mainString) patternString
http://www.dreamincode.net/forums/topic/304183-haskell-parse-error-on-let-help-please/
Python eggs examples draft

From FedoraProject

Add a section of examples of how to write the %files section to Packaging/Python/Eggs

How to Write %files

Writing the %files section can be a bit tricky due to the fact that distutils in Fedora < 9 did not generate egg-info files. This section lets you know what to look out for and gives some examples of how to work around the gotchas.

A python package can have up to three different names that show up in the %files section. For instance, in the python-sqlalchemy package we have:

- Package Name: python-sqlalchemy -- This is the name we give it according to the naming guidelines. This is used when a user installs the package via yum install python-sqlalchemy.
- Module Name: sqlalchemy -- This is what is used in python's import statement. The module is stored in a directory with this name: %{python_sitelib}/sqlalchemy
- Egg Name: SQLAlchemy -- This is the name given to the python module in the setup.py file. The egg info is stored in a normalized form of this (spaces and "-" turned into "_"). For instance: %{python_sitelib}/SQLAlchemy-0.3.10-py2.5.egg-info

Whether you have to worry about all these names depends on a number of factors:

1. Is setuptools being used (this includes setting this up in the spec file as in Providing Eggs for non-setuptools packages)? If so, the egg-info will exist on all versions of Fedora as a directory.
2. Is distutils being used? If so, the egg-info will be present on F9+ as a file and non-existent on F8 or less.
3. Is everything in %{buildroot}%{python_sitelib} to be included in your package? If so, it's easy to wildcard the files you need to install.
4. If you aren't including everything (for instance, you might have multiple subpackages that include different modules inside %{buildroot}%{python_sitelib}, or the module you're packaging is a plugin for another module) then you'll have to get more creative to include just what you want.
Let's take the simplest case first. If you are using setuptools or you want to include everything inside of %{python_sitelib} then you can use a very simple wildcard to get the desired effect:

%files
[...]
%{python_sitelib}/*

The wildcard catches both the module directory and the egg-info if it exists, so it will be the same for Fedora >= 9 and Fedora < 9.

Now let's suppose that you have a module created by distutils that installs into a plugin directory of another package. In this case we don't want to include the plugin directories, and we need to do something different for Fedora 9 than for Fedora 8:

%{python_sitelib}/bzrlib/plugins/*
%if 0%{?fedora} >= 9
%{python_sitelib}/*.egg-info
%endif

This includes only the plugin that resides in the plugins subdirectory and only looks for an egg-info file on Fedora 9 and above.

One more example that's hopefully more complex than anything you have to deal with: your package contains multiple modules with multiple egg-info files. The main package you generate is python-foobar-1.0 and the subpackage with a second module is python-foobar-baz-1.0. The module names are foobar and baz. The upstream decided to name the modules FooBar and FooBar-Baz in setup.py, therefore the egg-info names are FooBar-1.0-py[PYTHONVERSION].egg-info and FooBar_Baz-1.0-py[PYTHONVERSION].egg-info:

%files
%{python_sitelib}/foobar
%if 0%{?fedora} >= 9
%{python_sitelib}/FooBar-*.egg-info
%endif

%files baz
%{python_sitelib}/baz
%if 0%{?fedora} >= 9
%{python_sitelib}/FooBar_Baz-*.egg-info
%endif

In this case we have to be careful that we only pick up the egg-info files that we're interested in and not ones for other modules. However, we can still wildcard the version and python version to make it easier when we update the package.
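As an aside, the egg-name normalization described earlier (spaces and "-" turned into "_") can be approximated in a few lines of Python. This is a simplified stand-in for what setuptools' pkg_resources helpers do, not the real API:

```python
import re

def egg_info_name(project, version, pyver):
    """Approximate the egg-info filename setuptools generates.

    Simplified stand-in for pkg_resources.safe_name/to_filename:
    runs of characters other than letters, digits, and '.' become '-',
    then '-' becomes '_' in the filename.
    """
    safe = re.sub(r"[^A-Za-z0-9.]+", "-", project)
    return "%s-%s-py%s.egg-info" % (safe.replace("-", "_"),
                                    version.replace("-", "_"),
                                    pyver)

print(egg_info_name("SQLAlchemy", "0.3.10", "2.5"))
# SQLAlchemy-0.3.10-py2.5.egg-info
print(egg_info_name("FooBar-Baz", "1.0", "2.5"))
# FooBar_Baz-1.0-py2.5.egg-info
```

This is why the FooBar-Baz module above ends up with a FooBar_Baz-*.egg-info entry in %files.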
https://fedoraproject.org/w/index.php?title=Python_eggs_examples_draft&oldid=79254
Q: Many C++ users are aware that you are currently engaged with the ANSI/ISO standardisation of C++ and in particular, the enhancements. What do you feel are the most major enhancements that should be part of the standard? (regardless of whether or not they are an option)

A: We, that is the ANSI/ISO committee, have accepted templates, exception handling, run-time type information, and a couple of extensions I consider minor. That in itself is a lot for the community to absorb and for the implementors and educators to catch up with. However, I hope to see namespaces accepted at the Munich meeting in July. Namespaces will provide significant help in organizing programs and in particular in composing programs out of separately developed libraries without name clashes. Namespaces are also easy to implement and easy to learn. I had them implemented in five days and I have taught people to use them in 10 minutes. For example, two suppliers may each use their own namespace so that names will not clash:

namespace X {
    class String { ... };
    typedef int Bool;
    int f(const char*);
    void g();
}

namespace Y {
    class String { ... };
    enum Bool { false, true };
    int f(int);
    void h();
}

A user can then pick and choose among the names:

void my_funct()
{
    X::String s = "asdf";
    Y::f(2);
    X::g();
}

or state a preference for a particular namespace:

void your_func()
{
    using namespace X;
    String s = "asdf"; // X::String
    Y::f(2);
    g();               // X::g
    //...
}

There are more details, but these are the basics (and this is an interview, not a tutorial). I suspect that namespaces will be the last major extension in this round of work. We can of course have a nice little discussion about what 'major' means in this context, but we do need to get a draft standard ready for public review in September '94 and we have a lot of work to do before then.
I suspect that the extensions group will be busy with cleaning up the description of templates and exceptions and dealing with proposals for little extensions - most of which will also have to be rejected or postponed. Then we'll see what the response to the public review period is and work on based on that. I hope this doesn't sound too negative, but stability is essential for C++ users and we can't just add every feature that people like. Even if we added just the GOOD ideas the language would become unmanageable. We have to apply taste and judgement. A language has to be a practical tool and not just a grab bag of neat features and bright ideas.

The minor extension I think would be most important is a set of cast operators to provide an alternative to the old "do everything" cast. Unfortunately, there are still a few loose ends in that proposal, and if I can't resolve those I can't recommend its adoption and it won't make it into the standard.

The basic idea is that a cast like (T)v can do too many things. It may be producing a new arithmetic value from v, it may be producing a pointer to a different subobject, it may produce a pointer of a type unrelated to the object v pointed to, it may be producing a pointer from a non-pointer type, it may be producing an integer from a pointer. It may be constructing a new object of type T by calling a constructor. It all depends on the type of v and on what kind of type T is. A reader of (T)v cannot know what it does without looking at the context, and declarations in that context can quietly change the meaning of the cast. Because of these quiet changes, and because programmers frequently misunderstand what the casts they and their colleagues write actually do, I consider the old-style cast "slippery". It'll do something, but far too often it's not obvious what that is. We at Bell Labs and many others have found this a significant source of bugs and maintenance problems (we do measure such things).
If the writer of the cast could say what was really meant to be done, many of these problems would go away. The basic idea is to have three operators doing the three basic kinds of conversions currently done by (T)v:

static_cast<T>(v)      // for reasonably well-behaved casts
reinterpret_cast<T>(v) // for horrible casts
const_cast<T>(v)       // for casting away const and volatile

In addition we now have

dynamic_cast<T>(v)     // for run-time checked casts

Naturally, we can't just ban old-style casts, but with these new operators in place people could write better new code and eventually fade out the old casts. Currently, I'm stuck on problems related to const. Many people are VERY keen that constness doesn't quietly disappear. Therefore, my intent was that only const_cast should be able to remove constness. Unfortunately, people can ADD a const specifier in one operation and then go on to try to modify a const later, where it is not obvious what is going on. The kind of problem that's holding me up is:

void f(X* p);           // f modifies *p

void (*fp)(const X* p); // *fp doesn't modify *p

fp = (void (*)(const X*))&f; // forcing fp to point to f
                             // note 'const' ADDED

void g(const X* p) // g doesn't modify *p
{
    fp(p); // OOPS, thanks to the cast
           // above *p gets modified
}

This is a highly obscure effect. Fortunately, it doesn't actually bite people very often, but when it does it can be extraordinarily hard to track down. Please note that this is also a problem in ANSI C. I have suggested:

fp = static_cast<void (*)(const X*)>(&f); // error

This wouldn't work because the compiler would know that the cast was suspect with respect to const, and people would have to write

fp = const_cast<void (*)(const X*)>(&f); // ok

My worry is that many const problems are so obscure and subtle that people would decide that the compiler was wrong and prefer to use the old-style cast that would be seen as simpler.
This is the kind of problem where I have a hard time deciding whether the cure might be worse than the illness. I have a logically sound solution, but can it be successfully introduced into common C++ use?

Q: If you had the opportunity to turn the clock back to 1980 and start the development process again, what would you have done differently and why?

A: You can never bathe in the same river twice. There are things that I could do better now, but some of those things would have killed C++ if I had done them then. For example, I couldn't work without virtual functions, yet the introduction of virtual functions was postponed from 1980 to 1983 because I - with reason - doubted my ability to get people to accept them in 1980/81. More to a current point, many can now afford garbage collection, but in 1980 essentially all of my users could not. Had C with Classes or early C++ relied on GC, the language would have been stillborn. A language has to fit with its time and grow with the changing demands. The idea of a perfect language in the abstract is fundamentally wrong. A good language serves its users as they are, for the problems they have, on the platforms they work on. I'm not really sure Bjarne (vintage 1993) knows more about the vintage 1980 users than Bjarne (vintage 1980) did. Therefore, I don't really want to conjecture. I now know things about the vintage 1993 users that Bjarne (vintage 1980) didn't, and that knowledge I'm trying to put to good use in the standards group and elsewhere.

Q: When C++ is standardised, do you have plans to extend it further? (ANSI 2010 C++ Std perhaps)

A: My immediate reaction is "No way!" I have had enough of language work to last a lifetime. As I'm disengaging from language work I'm getting back into the use of language that started it all. I didn't really want to design a language, I just happened to have programming problems for which there was no sufficiently good language available at the time.
If - and only if - my future projects get me into that situation again will I consider new language features. At the HOPL-2 conference Dennis Ritchie observed that there seemed to be two kinds of languages: the ones designed to solve a problem and the ones designed to prove a point. Like C, C++ is of the former kind.

Q: What was the programming problem that started you on the C++ development track?

A: I was looking for a way to separate the UNIX kernel into parts that could run as a distributed system using a local area network. I needed to express the kernel as a set of interacting modules and I needed to model the network for analysis. In both cases, I needed the class concept to express my ideas. I never actually got back to those problems myself, though I have over the years helped on several projects using C++ to simulate networks and network traffic to help design networks and protocols.

Q: I often hear (mainly from Smalltalk programmers) the criticism that C++ is not a 'pure' OO language. Do you think that being a 'hybrid' language strengthens or weakens C++ as a commercial programming language?

A: Arm-chair philosophers also tend to make that criticism. I think that C++'s real strength comes from being a 'hybrid.' As I said above, C++ was designed to solve problems rather than (merely) to prove a point. C++ is a general-purpose programming language, a multi-paradigm programming language rather than (merely) an object-oriented language. Not all problems map well into any particular view of programming. In particular, not all problems map into a view of object-oriented programming as the design and use of deeply nested class hierarchies demonstrate. The programming problems we face and people's ways of thinking are much more varied than people would prefer to believe. Consequently, it is easy to design a smaller, simpler, and cleaner language than C++. I knew that all along.
What is needed, though, and what I built, was a language that was flexible enough, fast enough, and robust enough to cope with the unbelievable range of real challenges.

Q: Do you think that subjects such as Garbage Collection and Persistence should be dealt with as part of the language, or be implementation/third-party add-ons?

A: Persistence is many different things to different people. Some just want an object-I/O package as provided by many libraries, others want a seamless migration of objects from file to main memory and back, others want versioning and transaction logging, and others will settle for nothing less than a distributed system with proper concurrency control and full support for schema migration. For that reason, I think that persistence must be provided by special libraries, non-standard extensions, and/or "third-party" products. I see no hope of standardizing persistence. The ANSI/ISO standards committee's extensions group is looking into whether we can help with some of the simpler levels of this problem, either through language features or through standard library classes. The support for run-time type identification that we accepted in Portland in March contains a few "hooks" deemed useful by people dealing with persistence.

Optional garbage collection is, I think, the right approach for C++. Exactly how that can best be done is not yet known, but we are going to get the option in several forms over the next couple of years (whether we want to or not).

Why GC? It is the easiest for the user. In particular, it simplifies library building and use. It is more reliable than user-supplied memory management schemes for some applications.

Why not GC? GC carries a run-time overhead that is not affordable to many current C++ applications running on current hardware. Many GC techniques imply service interruptions that are not acceptable for important classes of applications (e.g. real-time, control, human interface on slow hardware, OS kernel).
Many GC techniques carry a large fixed overhead compared to non-GC techniques. Remember: not every program needs to run forever; memory leaks are quite acceptable in many applications; and many applications can manage their memory without GC and without relatively high-overhead GC-like techniques such as reference counting. Some such applications are high-performance applications where overhead from unneeded GC is unacceptable. Some applications do not have the hardware resources of a traditional general-purpose computer. Some GC schemes require banning several basic C operations (e.g. p+1, a[i], printf()). I know that you can find more reasons for and against, but no further reasons are needed. I do not think you can find sufficient arguments that EVERY application would be better done with GC without restricting the set of applications you consider. Similarly, I don't think you can find sufficient arguments that NO application would be better done with GC without restricting the set of applications you consider. My conclusion (as you can find in "The C++ Programming Language" (even the first edition) and also in the ARM) is that GC is desirable in principle and feasible, but for current users, current uses, and current hardware we can't afford to make the semantics of C++ and of its most basic standard libraries dependent on GC.

But mustn't GC be guaranteed in "The Standard" to be useful? We don't have a scheme that is anywhere near ready for standardization. If the experimental schemes are demonstrated to be good enough for a wide enough range of real applications (hard to do, but necessary) and don't have unavoidable drawbacks that would make C++ an unacceptable choice for significant applications, implementors will scramble to provide the best implementations. I expect that some of my programs will be using GC within a couple of years and that some of my programs will still not be using GC at the turn of the century.
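The reference counting mentioned above as a "GC-like technique" can be sketched in a few dozen lines. This is a deliberately minimal, illustrative implementation (the class and function names are my own, not from any real library); it shows where the per-copy overhead the answer refers to actually lives: every copy, assignment, and destruction of a handle pays a counter update.

```cpp
#include <cassert>
#include <cstddef>

// Base class holding the count; objects managed this way derive from it.
class RefCounted {
    std::size_t refs_ = 0;
public:
    void retain() { ++refs_; }
    // Returns true when the last owner goes away.
    bool release() { return --refs_ == 0; }
    std::size_t use_count() const { return refs_; }
protected:
    ~RefCounted() = default;   // only deleted through a derived class
};

// Smart-pointer handle: copying it is where the counting overhead lives.
template <class T>
class Ref {
    T* p_;
public:
    explicit Ref(T* p) : p_(p) { if (p_) p_->retain(); }
    Ref(const Ref& other) : p_(other.p_) { if (p_) p_->retain(); }
    Ref& operator=(const Ref& other) {
        if (other.p_) other.p_->retain();        // retain first: self-assignment safe
        if (p_ && p_->release()) delete p_;
        p_ = other.p_;
        return *this;
    }
    ~Ref() { if (p_ && p_->release()) delete p_; }
    T* operator->() const { return p_; }
};

struct Node : RefCounted {
    int value = 0;
};

// Demonstrates that copying a Ref bumps the shared count (and frees on exit).
std::size_t owners_after_copy() {
    Ref<Node> a(new Node);
    Ref<Node> b = a;           // one counter increment, no allocation
    return a->use_count();     // 2 while both handles are alive
}
```

Note the trade-off the answer describes: unlike a tracing collector, the cost here is spread over every pointer operation rather than concentrated in collection pauses, and cycles are never reclaimed.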
I am under no illusion that building an acceptable GC mechanism for C++ will be easy - I just don't think it is impossible. Consequently, given the number of people looking at the problem, several solutions will emerge and hopefully we'll settle on a common scheme at the end.

Q: What methodology do you use when designing C++ programs?

A: That depends on what kind of problem I'm trying to solve. For small programs I simply doodle a bit on the back of an envelope or something; for larger issues I get more formal, but my primary "tool" is a blackboard and a couple of friends to talk the problems and the possible solutions over with. Have a look at chapters 11 and 12 in "The C++ Programming Language (2nd Edition)" for a much more detailed explanation of my ideas (which naturally are based on experience from the various projects I have been involved in).

Q: Tools nearly always lag behind the development of a 'new' language. In what areas do you feel that the C++ development world is being deprived of suitable software tools?

A: Actually, I think that in the case of C++, tools are lacking less than education is. There are benefits in using C++ without making the effort to learn its new concepts, but most of the benefits require that effort. As for tools (says he, getting off his hobby horse :-), I think that what we are seeing today will look rather primitive in a few years' time. In particular, we need many more tools that actually understand C++ (both the syntax and the type system) and can use that knowledge. Currently, most tools know only a little bit about syntax or about the stream of executable instructions. Eventually, we'll have editors that can navigate through a program based on the logical structure of a program rather than the lexical layout, be able to click on a + and instantly be told what it resolves to under overload resolution, and have re-compilation be incremental with a small grain.
Such an environment would make what you can currently get for languages such as Lisp and Smalltalk look relatively primitive, by taking advantage of the wealth of information available in the structure of a C++ program. Let's not get greedy, though. C++ was designed to be useful in tool-poor environments, and even in a traditional Unix or DOS environment it is more than a match for many alternatives for many applications. Environments and tools are nice, and we'll eventually get great ones, but for much C++ programming at least they are not essential.

Q: With all the advantages of C++, do you think the use of ANSI C will decline?

A: In a sense, yes. You can't buy an ANSI C compiler for the PC market any more except as an option on a C++ compiler. I expect that over the years we'll see a gradual adoption of C++ features even by the most hard-core C fanatics. The C++ features are now available in the environments C programmers use, they work, and they are efficient. C programmers would be silly not to take advantage of the C++ features that are helpful to them. Note that I'm not preaching some OO religion. C++ is a pragmatic language and is best approached in a pragmatic manner: use the parts of it that are useful to you and leave the rest for later when it might come in handy. I strongly prefer sceptics to "true believers." Naturally, I recommend "The C++ Programming Language (2nd edition)" as the main help in understanding C++ and its associated techniques. It contains a lot of practical information and advice - on programming, on the language, and on design - and very little hype and preaching. I think too many C++ texts push a particular limited view of what C++ is, or aim at delivering only a shallow understanding of C++. To gain really major benefits from C++ you have to invest a certain amount of effort in learning the new concepts.
Writing C or Pascal with C++ syntax gives some benefits, but the greatest gains come from understanding the abstraction techniques and the language features that support them. Just being able to parrot the OO buzzwords doesn't do the trick either. The nice thing about C++ in this context is that you can learn it incrementally and can get benefits proportionally to your effort in learning it. You don't have to first learn everything and only then start reaping benefits.

Q: I feel that C++ should be (like C) "lean and mean" and some of the additions (such as RTTI) will be adding layers of "fat" to the language. Do these extensions impose a penalty on the C++ community even if no use is made of them?

A: C++ is lean and mean. The underlying principle is that you don't pay for what you don't use. RTTI and even exception handling can be implemented to follow this dictum - strictly. In the case of RTTI the simple and obvious implementation is to add two pointers to each virtual table (that is a fixed storage overhead of 8 bytes per class with virtual functions) and no further overhead unless you explicitly use some aspect of RTTI. In my UNIX implementation, those two words have actually been allocated in the vtbl "for future enhancements" since 1987! When you start using dynamic casts the implementation needs to allocate objects representing the classes. In my experimental implementation those were about 40 bytes per class with virtual functions, and you can do better yet in a production system. That doesn't strike me as much when you take into account that you only get the overhead if you explicitly use the facilities. Presumably, you'd have to fake the features if you wanted them and the language didn't support them, and that's more expensive in my experience. One reason for accepting RTTI was the observation that most of the major libraries did fake RTTI in incompatible and unnecessarily expensive ways.
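The RTTI feature discussed here was eventually standardized as `dynamic_cast` and `typeid`. A minimal sketch of the "pay only where you use it" idea (the `Shape`/`Circle` names are illustrative, not from the interview):

```cpp
#include <cassert>

// Classes with virtual functions carry the small fixed per-vtable cost;
// the run-time check happens only where a dynamic_cast is written.
struct Shape  { virtual ~Shape() = default; };
struct Circle : Shape { double radius = 1.0; };
struct Square : Shape { double side = 1.0; };

// Returns the radius if s really is a Circle, otherwise a negative sentinel.
double radius_of(const Shape& s) {
    if (const Circle* c = dynamic_cast<const Circle*>(&s))
        return c->radius;
    return -1.0;   // not a Circle: the pointer cast yields null
}
```

As the answer goes on to say, this is for the relatively few cases the static type system cannot handle; code that never writes such a cast pays essentially nothing for the feature's existence.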
The run-time overhead of an unoptimized dynamic cast is one function call per level of derivation between the base class known and the derived class you are looking for. One thing people really should remember is that a design that relies on static type checking is usually better (easier to understand, less error-prone, and more efficient) than one relying on dynamic type checking. RTTI is for the relatively few (but often important) cases where C++'s static type system isn't sufficient. If you start using RTTI to simulate Smalltalk or CLOS in C++ you probably haven't quite understood the problem or C++.

Q: A lot of programmers (and members of the press) envisage C++ as a language developed solely for the development of GUI products, and that it has no place in the "normal" (whatever that may be) programming arena due to its complexity. I, on the other hand, think that C++ is the best all-round programming language ever invented and should be used for every programming task. A middle ground obviously exists, but what tasks do you see C++ as being best suited for?

A: They are plain wrong. C++ was designed for applications that had to work under the most stringent constraints of run-time and space efficiency. That was the kind of application where C++ first thrived: operating system kernels, simulations, compilers, graphics, real-time control, etc. This was done in direct competition with C; current C++ implementations are a bit faster yet. Also, C++ appears much more complex to a language lawyer trying to understand every little detail than to a programmer looking for a tool to solve a problem. There are no prizes (except maybe booby prizes) for using the largest number of C++ features. The way to approach a problem with C++ is to decide which classes you need to represent the concepts of your application and then express them as simply and as straightforwardly as possible. Most often you need only relatively simple features used in simple ways.
Often, much of the complexity is hidden in the bowels of libraries.

Q: C++'s popularity seems to be accelerating at present. Do you think that other OO languages (such as Smalltalk, Actor and Eiffel) and other hybrids like OO-COBOL will make an impact on the growth of C++?

A: I don't think so. Compared to C++, they are niche languages. They all have their nice aspects, but none has C++'s breadth of applicability or C++'s efficiency over a wide range of application areas and programming styles. Smalltalk seems to have a safe ecological niche in prototyping and highly dynamic individual projects. It ought to thrive. If OO-COBOL takes off it also ought to have a ready-made user base.

Q: I see parallel processors becoming more widely available to programmers in the near future. How easy will it be to use C++ in the parallel programming environment?

A: Parallel processors are becoming more common, but so are amazingly fast single processors. This implies the need for at least two forms of concurrency: multithreading within a single processor, and multiprocessing with several processors. In addition, networking (both WAN and LAN) imposes its own demands. Because of this diversity I recommend parallelism be represented by libraries within C++ rather than as a general language feature. Such a feature, say something like Ada's tasks, would be inconvenient for almost all users. It is possible to design concurrency support libraries in C++ that approach built-in concurrency support in both convenience of use and efficiency. By relying on libraries, you can support a variety of concurrency models and thus serve the users that need those different models better than can be done by a single built-in concurrency model. I expect this will be the direction that will be taken by most people, and that the portability problems that will arise when several concurrency-support libraries are used within the C++ community can be dealt with by a thin layer of interface classes.
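History bore this prediction out: concurrency entered standard C++ as a library (`std::thread` and friends in C++11), not as an Ada-style language construct. A small sketch of the library-level style, with a hypothetical `parallel_sum` helper of my own invention:

```cpp
#include <cassert>
#include <numeric>
#include <thread>
#include <vector>

// Splits the work across two threads of control using only library calls --
// no concurrency keywords in the language itself.
long parallel_sum(const std::vector<long>& v) {
    long front = 0;
    std::size_t mid = v.size() / 2;
    std::thread worker([&] {                 // sum the first half concurrently
        front = std::accumulate(v.begin(), v.begin() + mid, 0L);
    });
    long back = std::accumulate(v.begin() + mid, v.end(), 0L);
    worker.join();                           // a library call, not a keyword
    return front + back;
}
```

Because the threading facility is an ordinary class, alternative models (thread pools, message passing, futures) can coexist as further libraries, which is exactly the flexibility the answer argues for.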
Many thanks for your time, Bjarne - I'm sure my readers will enjoy reading your comments.
https://accu.org/journals/overload/1/2/toms_1356/
"Supermicro X8DAi can't install 2nd cpu"

My backordered 2nd cpu came, and when I try to install it my 7046A-T with MB X8DAi shows no power LED and I get a flashing red LED indicating chassis fan failure. The monitor never lights up. I have been running WHS 2011 fine with one cpu for almost a week. The cpu's are Intel Xeon E5506 Quad Core 2.13GHZ LGA1366 4MB 4.8GT/SEC Nehalem Retail processors. The machine runs fine with one cpu, and I've even switched the cpu's and heatsinks around and each one will work, but not when I have both installed. I can't figure out what the problem is, anyone know?

Edit to add: I updated my bios with a beta from Supermicro and it didn't help. I've switched cpu's and both work fine independently whether in the cpu 1 or cpu 2 spot, so it's not bent pins. I'm at a loss here and hoping to hear back from Supermicro soon.

Edit: Supermicro has issued me an RMA. It's going to cost a bundle to ship this server back!

I have the same problem with a Supermicro X8DT6 motherboard and SC743 chassis. My server has run just fine since early this year with just one processor. Today I added a second processor and populated the extra memory slots. I have two XEON 5650 processors with 48GB 1333 ECC Registered Server Memory. The server comes on, and the monitor light stays orange and never turns blue. All the fans spin, and the hard drives boot, but nothing comes up on the screen. My red fan failure light is blinking as well. Please let me know if you have resolved this issue, or if anyone else has run into this problem. I was so excited to get my package today, and I was expecting a faster computer, not one that doesn't work. I have an aftermarket graphics card, so I might try switching back to the 8 MB VGA onboard video memory. Were you using onboard graphics, or an aftermarket card, and could that even matter?

I never did get the two cpu's to work together, and Supermicro ended up sending me another server and it works fine.
I also have an aftermarket graphics card (Nvidia), and my MB didn't come with onboard video. I'm using the same graphics card in the new server and both cpu's work. I spoke with a tech and installed a beta version bios and nothing worked on the first server. Supermicro was okay in sending me another server, although there were a few glitches; don't let FedEx bully you into paying any kind of duty/taxes/etc, and make Supermicro pay all costs for the ship to you. I paid about 70 bucks to ship the broken server back, and that was cheap because I had a family member mail it from the States (I live in Canada), but I still wasn't happy that I had to pay any shipping.

I spoke to tech support and tried everything you did, i.e. switching sockets, etc. Each CPU will work by itself but not together. I purchased all of the items separately, so all I should have to send back is the motherboard. The first tech was pretty cool and seemed to know what he was talking about, and the second one was OK, but a little bit of a know-it-all jackass. Both CPUs were identical, and had the same revision and stepping. He tried to convince me that the QPI link on one of the CPUs might not be communicating with the other. Yeah right. He was like, well if you don't believe me then just send it back for RMA, which is what I am going to do. I asked him how I was supposed to check a QPI link, and said that I didn't know of a way. He then just said send it back. Seeing as how this is a production server, I am going to try and get a new one sent out first. Do you think I should ask them to test two similar CPUs on the board before they send it out? What was the turnaround like for you? Seeing as how I live in the US, I seriously doubt there will be any of those types of fees, at least I hope not.

One requirement for dual CPU operation is that the CPU must support dual QPI. Some Xeons only have a single QPI link, and those will not work with dual socket motherboards.
After getting my RMA server I put in both processors and they worked beautifully, so it was definitely a problem with the MB and not the cpu's. I also bought all my items separately and just a barebones server, but I still didn't want to take out the MB, so I opted for a whole new server. I was able to keep my server while waiting for the new one because one of the techs said I should do a "cross ship"; he was quite concerned that I shouldn't be without my server, and that worked out well. It would've only taken 3 days to get it if FedEx hadn't tried to get money out of me, and that's pretty good considering it came out of California all the way up to BC, so in the end it was four days. I took a lot longer to ship it back, but still under the 30 days, waiting for my relative to come up from the States and get it across the border. One thing I was concerned about was that they charged my CC well over the price I originally paid for it, as I got it on sale at newegg.ca, and I figured with the few glitches I had they would stall on crediting my CC back. No worry tho': even after I kept two of the fans (my server only came with 2 instead of 4 behind the harddrives, and I figured after all the headache I went through I was keeping them), not only did they credit me back almost immediately, they also gave me an extra 35 dollars! I have no idea why (maybe they refunded part of the shipping??) but I wasn't going to b*tch. All in all Supermicro did a great job and I would buy their products with no hesitation.

I think I have that taken care of. Thanks, I am glad yours worked out. Maybe I should do that, because I want the two extra fans as well... So you were able to have them build you a server that was identical to the one you built yourself? So did you just have to transfer the data over to your new computer, or did you just switch drives? I did put in an RMA request, and they got back quickly with the form, and said that an RMA number would soon follow.
So it sounds like they charged your credit card immediately, and they kept the charge on there until they got the board back? My motherboard is a strange example. It was listed at Newegg as new, but it was $100 off the regular price. I think it was an open box return, and that since most everything was in the box, they would sell it as new but at a hefty discount. I paid $349.99 for it, and they are now back up to $449.00 for it. Maybe they knew it had a problem, who knows. I wonder what they will charge me. I am going to go with UPS because they are a better company, and the shipping is much cheaper.

Without looking at all of them, I would say all of the 5000 series and greater with QPI have 2 links for the most part, even though it does not list how many QPI links the 7000 series MP CPUs have, but I would imagine probably 4. The 3000 series, on the other hand, is designed for single socket motherboards, which I think is silly. Let's do a comparison, shall we? If you look, they are all functionally the same: TDP, clock, Max Turbo frequency. The Extreme 990X and the Xeon 3690 are basically the same, except the Xeon has slightly higher memory bandwidth, even though they can only address up to 24 GB of memory. The 3690 uses ECC, the 990X does not. They are fundamentally the same, and both are around $1,000. Now let's take a look at the 5690. It will address up to 288 GB of memory, and it has 2 QPI links. The 3690 and 990X only have one QPI, so therefore you can only put one on each board. With the 5690, you are paying 70% more ($1,700) just for that second QPI link and the ability to address substantially more memory. You are going to have to spend substantially more on a board that supports ECC memory, and the ECC memory itself is going to cost twice what the same non-ECC memory would cost. It just does not make any sense to me. 3000 series processors seem like a joke to me. There is no real need for them unless you need slightly higher memory bandwidth and ECC memory.
By the time you have error correction in the memory, it probably negates the higher bandwidth. I just do not understand why on earth anyone would pay more for a single CPU server solution when they can get better or equal performance for substantially less with the second generation processors, at least when you compare apples to apples. All in all, these are three Intel Extreme 990X-class processors with minor differences, mainly the second QPI link for an extra $700. And many of the newer processors are going back to DMI instead of QPI. Go figure. I could not wait for Sandy Bridge server chips to come out, but it will be interesting to see what they are all about.

I have right at five grand pumped into this machine now, so I am hoping it will be future-proof for a while with two 2.66 GHz processors, which run at 3.06 GHz with Turbo Boost. They have 256 KB x 12 L2 cache and 12 MB shared L3 Smart Cache (for a total of 24 MB L3 cache), a total of 12 cores and 24 threads {assuming I can get both processors to work together}, 48 GB 1333 ECC Registered Server Memory, and four 15,000 RPM 15K.7 SAS hard drives with three in a RAID-5 array and one as a hot spare. I got the most powerful video card which does not require an auxiliary power cable, the HIS IceQ Radeon 5670. As this serves as a home computer as well as a server, I wanted a little more than the 8 MB video that came with the board. The only things left are 4-8 or more hard drives. I am thinking of getting a SSD and getting the CacheCade 2.0 software, and perhaps a more powerful video card, but I doubt it as it runs fine for the racing sims that I do.

Actually, the server I bought from Newegg was a barebones one that included the MB and dual NIC; I had to add the video card, memory, cpu's, harddrives, cd/dvd, etc. I waited for the RMA server to come to me and just swapped everything including the drives. This meant buying some Arctic Silver, but I already had some from a different job so I was good with that.
Yes, they charged my CC right away and didn't credit it back until I sent them the broken server, but as I said before, they actually credited me more than they charged (and I was worried they would ding me for the two missing fans lol). If I had known, I would have been tempted to keep a few of the harddrive trays, but I didn't want to push my luck! You'll probably be charged the going rate they sell it for, and I doubt you'll get any discount, but it shouldn't matter because when you ship the old one back you'll get credited back.

How much reduction in temperature have you noticed in the system, CPU(s), graphics card and hard drives since you added the extra two fans? Is there a great deal more noise? I imagine because of the positive pressure, you would have less dust build-up over time inside the case. Mine came with two real fans and two dummy fans in the space between the backplane and the central compartment. I have looked for the part number for the drop-in fans and they are not common or cheap. Seems like they range in price from $15 to $30 or more for each unit, but I start questioning companies that are substantially cheaper than the competition. Most are not in stock and are special order. All I actually need, I think, is two of the SAN 80 fans, but I cannot find that same exact model anywhere I look. Must be a vendor thing. I am thinking of just getting a couple of 80x80x25 fans that use the same amperage and have the same speed, CFM, and dB ratings. I wrote to the manufacturer and they did not have the SKU in their database for the actual fan itself, but one very similar. Strange, seeing as how they made the fan. All of the specs seemed the same, and tech support said they did not see any reason that they would not work. I imagine I would just have to press the fan into the two housings and correctly position the 4 pins. Perhaps I will look around locally for similar fans.
Being that this is designed as a DP server chassis, and it is the SQ (Super Quiet) model, maybe I won't even need them. What was your overall impression of two versus the four fans? The CPU heatsinks have fans on the front of them blowing toward the rear, the power supply blows the hot air out the back, and then there is a 92mm exhaust fan on the rear of the case, so I wonder if I really need a fan behind it. On the other hand, with the other socket and the extra 6 memory modules filled, maybe I should, but the cheap part of me keeps thinking that they wouldn't have put only two fans in there if they were not adequate. I think I will just carry one of my dummy fan holders to Micro Center and see if I can find a fan that would work. A) Would you have paid for the fans if they charged you for them, and B) How much would you be willing to pay?

The system usually stays below 40°C, as does the processor (well, technically "Low"), and never gets above 60-62°C when running Prime95 with all 12 threads running at 100% in Task Manager. The hard drives stay in the 34-37°C range with low disk I/O, although they have gotten to 55°C (their stated limit) when we had an air conditioner quit working and it was 85°F+ in the room. I keep my case in immaculate condition <dust = the devil>. My graphics card stays low at idle, around 40°C as well. Basically, with the ambient air temp around 72-74°F (22°C), everything usually stays at or near 40°C, except the power supply, which tends to run a little warmer, say 50-53°C at idle, and in the mid to upper 50s at high load. I wish I could get that down a bit more, but with fans that run at 572-687 RPM, and a power supply with 865 watts, nothing but another fan right behind it will help. SpeedFan doesn't seem to be able to control my power supply fans. This is even with the fan setting in BIOS at the High Performance level. Maybe I have just answered my own question.
Percentage-wise, how much above the normal Newegg selling price did they charge you? As I said, I paid 350 for a board that normally sells for 450. Should I expect to have a $600 hold on my CC? I have the money, I just hate to have it tied up. I also have a hard drive that MegaRAID Storage Manager says might be going out, so I am going to have to RMA it the same way. I just don't want a combined $1,000 hold on my account. These are the errors and information that it gives me, and if anyone knows what those error messages mean, let me know.

2960 [Information, 0] 33 seconds from reboot Controller ID: 0 Unexpected sense: PD = --:--:2 - Failure prediction threshold exceeded, CDB = 0x03 0x00 0x00 0x00 0x40 0x00, Sense = 0x70 0x00 0x00 0x00 0x00 0x00 0x00 0x0a 0x00 0x00 0x00 0x00 0x5d 0x00 0x10 0x00 0x00 0x00

2959 [Warning, 1] 33 seconds from reboot Controller ID: 0 PD Predictive failure: --:--:2

Most things seem fine, but the read/write times are down from the glorious levels that they once were at; look at the comparison:

8 months ago, when brand new: Random Access 5.5ms, CPU utilization 1%, Average Read 347.8 MB/s, Burst Speed 855.5 MB/s. Not too bad for a rotating hard drive array consisting of three disks in a RAID-5 setup, if I do say so myself.

Today: Random Access 5.6ms, CPU utilization 2%, Average Read 313.4 MB/s, Burst Speed 473.6 MB/s.

Obviously I am most worried about the burst speed. It is down to 55% of what it was. Seeing as how the LSI 9260-4i MegaRAID controller card has 512 MB onboard cache, set to predictive read ahead, it is making me wonder about the card itself. No, I can try and fool myself all I want, but I should RMA that hard drive. Maybe I should see if there is a way to transfer the data from the supposedly failing disk to the hot spare, without having to remove the bad one and rebuild the array as if it had failed. Do you have any advice about this? I know I should have started a new topic for this, but I'm just too lazy.
With the hot spare in there, maybe I should just let it go until and if it fails. The array has three partitions, and the total combined usage is almost exactly 50%, but I keep my hard drives well cleaned of junk and defragged. I did not notice the slowdowns as much with my old first generation 36GB Raptors on my last PC, but I do not know if using 50% of the hard drive should slow down peak performance by half in SAS drives (which kill Raptors and Velociraptors, by the way). I wonder if frequent defragmenting of the hard drives could be causing the disk failure? My old Raptors are basically the same as when new 5-6 years ago.

While I am at it, I have an APC 950VA battery backup. The PowerChute software says that I am using about 170 watts at idle. With Prime95 running, Supero Doctor 3 reports CPU1 Medium, system at 44°C, power at 60°C. APC PowerChute goes from 170 watts at idle to 286 watts when Prime95 is running, and it says that it is not recommended to connect more equipment to your battery backup, even though the bar goes to 540 watts, which I assume is the most it can put out. I suppose it does not like it when you go over 50% of the available wattage. With another processor, that will be 286 + 95 = 381 watts. The software says that the Estimated Battery Time is 13 minutes, which in reality is probably less than 10, based on my experience. Various other programs read the sensors at up to 67°C on the CPU, and while not catastrophic, certainly more than I want. I guess I need the fans. I imagine with the extra RAM installed we might be looking at over 400 watts. When I RMA the bad drive, I am going to add the new one to the array, for a total of four 15K.7 SAS hard drives in a RAID-5 array. I wonder what size power unit I need. Will a 1500VA unit do it, or am I looking at 2,000+? I have always used APC. Do you have any input on the cheaper brands out there? Sorry for such a long post. Please feel free to answer any or all of my questions.
I have to say this is the quietest system I have ever had; my desktop computer that sits right beside it is noisier. If I had to pay for the fans I probably would have, and I probably would have paid about 20 dollars each, give or take. I feel it's better to be safe than sorry, and heat is an issue for sure. Where my computers are located I have two large windows, and on the ranch we have nothing but gravel roads, so dust is a big problem; I'm always opening cases and blowing air into my machines. As for actual temps, sorry, I haven't checked that out (see below). I run 9 internal harddrives and also have an esata/usb box with two more drives installed, and could add two more harddrives but have them empty (for now lol). Everything just seems to work, and work well together. I do have a raid card installed, but I currently don't have raid set up and mainly use the whole system as JBOD, but I run backup software; for now that works because of the amount of space I have. I also run an HP MediaSmart that backs up all my computers in the house every night (something I bought years ago but just recently installed WHS 2011 on); this too has a 4-harddrive esata box connected to it for a total of 8 drives (but right now I only have one drive installed). I have an Xbox 360 and a WDTV box that the server streams movies to, and other than a few glitches with the MyMovies software on occasion, streaming works great; with our hot summers (no air conditioning at all), heat seems to be okay. I too have everything (probably too much) connected to an APC UPS XS 1500 (PowerChute says I am using 432 watts of power and I can connect more), but I only get about 17 minutes when our power goes out. I immediately put my server and desktop in sleep mode and my time goes way up; I'll usually grab my netbook if I need to go online in the time it takes for our power to come back on.
If it's off for a long time (and in the area I live in that's quite common), I grab our gas generator, run an extension through our doggy door, and I'm good for hours. It's been awhile since I did the RMA, and if I remember correctly they charged me about 200 extra, but most companies charge more than any store I've seen. For instance, if you go to the Western Digital webpage you'd pay a lot more for any product than if you went to Futureshop, Costco, Newegg, etc. It's not often you get a "deal" from a manufacturer's website. As for your harddrive problem, have you gone to the manufacturer's website to see if they have software to check the drive? Most of them do, although I did come across a laptop with a Toshiba drive recently and it did not. You got me curious on temps, so I installed CPUID Hardware Monitor, but one thing I notice is it does not include all my harddrives (most likely the ones connected to my raid card). I snipped a pic and uploaded it to flickr for you to compare to your system. I remember checking the fan settings in the bios when I first installed the two cpu's and temps were a bit high, but I chalked it up to letting the Arctic Silver settle in, and from the numbers now things look good. Well, I hope I answered most of your questions as best I could.

Well, your system does seem to run cooler, at least the hard drives do. It is strange that the CPU's max TDP on the L5506 is 60 watts, while the 5650 is 95 watts and runs at a higher frequency, but the temperatures of your cores are higher than mine; then again, I don't know what kind of load you have on it. One thing is odd, and I can't say whether it is usual or not, but one of your CPUs seems to run about 5°C hotter than the other. Are you using the same heatsinks? Perhaps one CPU is closer to something hot in your system. When I first bought my system I had a hell of a time with my cooling. Being the first server processor that I had ordered, I did not know that the boxed CPUs did not come with heatsinks.
I understand why, because there are so many different configurations. I first bought a big heatsink, but it would not work because the X frame covered the holes, or really there just was not enough clearance between the edge of the metal bracket and a capacitor, so I ended up having to take that one back. I finally found a Dynatron cooler at Micro Center that would work because it screwed straight down, but it was the old type with the copper core and was more like a Pentium 4 type cooler, blowing the hot air upward instead of backwards toward the rear of the system.

When the price of the memory I was using dropped substantially (from $97.99 a stick to $38.99 a stick), I ordered enough 4GB modules to populate all 12 slots and also a couple of Supermicro heatpipe-type heatsinks (SNK-P0040AP4), which run at 2800 RPM at 24 dB and are really designed for workstations, but the reviews I read said they were very quiet and performed very well. The CPU overheat alarm would go off if I ran the Prime95 torture test with the Dynatron, but when I switched to the Supermicro cooler, it only went from low to warm, never overheating. It kept the CPU about 15°F cooler!

It seems like your hard drives run cooler, but that might be due to the rotational speed. I tried every disk tool from the manufacturer, but they could only read the array, seeing as how it is hardware RAID instead of software, on an add-on card, and is handled by the BIOS as a single drive, so there is no way to test the drives individually, unless I shut the system down and booted from floppy, but it probably would not recognize the drives either due to being on a controller.

This just in, here is what the jackasses are saying; I cut and pasted it:

• The Credit Card holder is authorizing $536.00 for 1pc _X8DT6 to be charged by Supermicro as a security deposit (sales tax may be applied for CA & UT residents).
• Warranties may be voided and/or full credit of $536.00 will not be refunded if inspection finds that the returned product has been abused or altered.

I don't know what to do now. I either pay hundreds of dollars in shipping, pay a non-refundable $120 service fee, or go for two weeks without my server running. I am a little pissed, to put it mildly. I have no idea what the original invoice date has to do with it. Well, I just looked at their warranty page and cross-ship is only 30 days. Lesson to be learned: do not order a dual processor motherboard from Supermicro with just one CPU with the idea of buying a second one later. If you do, you will lose an extra $120.00, which is F*cking ridiculous. Perhaps I should demand overnight RMA both ways, and they pay for shipping. I could live with that to a certain point, but UPS is 5 days for standard Ground, so that would mean that I would be out a server for half a month if I did it that way. I guess I need to call someone and complain. $656 (of which only $536 I will get back) for a product that I paid $350 for in the first place. F*cking outstanding. You did yours through phone support if I remember correctly. I am so pissed this might be the last Supermicro product that I ever purchase, depending on what the competition offers. They know that since this is a production server it can't be out for half a month. It's not MY fault that I discovered the defect later, just because I could not really afford both CPUs at the same time. There are still ASUS, Intel, TYAN and other reputable companies that make good server boards.

Fortunately my server came with two heatsinks; I just had to order the two CPUs. I too almost bought one CPU thinking I would buy the 2nd one later, but my brother talked me into it and we both came to the same conclusion: "what if there was a problem later on and I was out of warranty?" I'm now thankful I did, because I'd be sitting in your position now (sorry).
I didn't know it was only 30 days, and luckily my 2nd CPU that was back-ordered only took a couple of weeks, but I must have been awfully close to the 30-day limit. What I found frustrating was paying shipping charges when I so meticulously put together a server with the best prices I could find, and that included ordering from several different companies... the extra money spent on shipping the defective server back I could have used somewhere else. I checked into shipping from Canada and was told Canada Post wouldn't even take it because it was too heavy, and the gal at the courtesy drop for FedEx said I was probably looking at about 200 dollars from any shipping company! I waited for my relative to come up from the States and was glad she came up when she did.

If I were in your position I would still exchange it, because if you are anything like me it will bug the hell out of you. Complain loudly and you just might get away without paying the non-refundable fee; after all, it is their fault and not yours. I dealt with both phone and email (Jason, I believe, was the one guy that I got along with pretty well). Have you checked with Newegg to see if you can exchange it there with no extra fee? I remember checking newegg.ca and I would still have to pay return shipping, so I just went with Supermicro because I didn't want to start all over with Newegg. The picture I clipped was my server under no load. The CPUs are identical other than one being backordered, but I had to compare the boxes when I first spoke with Supermicro and everything was identical, so I don't know why one is running hotter than the other. Like you say, it could be closer to something (my RAID card?) than the other one. Do let me know how things turn out, I'm curious that way.

I called the RMA department, and they basically told me that since it was an online RMA, it would be better to reply to the email, especially since I didn't even have an RMA number yet.
They are waiting to see how I want to do it before they issue the number. I have gotten Newegg to work with me on returns and restocking fees in the past. I mentioned to them that I was going to do some upgrades in the future, and that I would likely spend another few thousand on this machine; that may have helped them decide. I have right at $5,000 in this machine, so I am not their biggest account, but at the same time that is a whole lot more than someone who builds a $250 PC. True to my word, I did purchase the rest of my memory, the other CPU, two heatsinks, and another SAS 15K.7 HD from them. As far as returning things to Newegg, you only have 30 days generally speaking, and I have had my MB since February, so I seriously doubt they would take it back. The memory I could return (with a re-stocking fee of course), but the policy for CPUs is replacement only, so unless it is defective they will not take it back. So I am stuck with a useless $1,000 CPU unless I get a new board. That is a pretty expensive mantelpiece.

The name of the document they sent me is xship_with_fee.doc, with a bullet point stating the fee. I wonder whether, if I renamed the document from xship_with_fee.doc to xship.doc and deleted that bullet point line, they would notice or remember. There is also a line in the email that reads "This is over cross-ship period and qualify for repair only. However, cross-ship is still available upon request with a non-refundable service." I guess I would have to delete that line as well. I wonder if they would catch on... Just kidding (or am I?) Wow, I really do have a criminal mind.

I really pleaded my case to them with the basic premise which I outlined earlier, and that is that I should not be punished because I could not afford the second CPU at the time, and that had I purchased both CPUs at the same time, I would have noticed on day 1. I told them that I had Newegg invoices with dates for the motherboard and CPUs to verify my claim.
I think this is an extenuating circumstance, and that they should waive the fee. Who knows what will happen, but I am praying and hoping that they will do the right thing. I also asked them to test it with two six-core Xeon 56xx CPUs, preferably the X5650 if they have it; that would be best. No sense in getting another defective board. I hope they don't just send it out without testing it, only for it not to work either, and then they claim it worked fine when they shipped it. Hopefully they are not that crooked. I will probably hear something back later today. Maybe I should ask them to do it as a repair instead, and ask that they pay shipping both ways for same-day service. I wonder, if I sent it out early in the morning, whether they could get it the same day, repair or replace it, and get it back to me the same day. They probably could, but it would most likely cost more in shipping than the MB is worth. I will keep you posted. I think if they refuse my first offer, I will ask for the second, just to see what they say.

As a side note, I have already had to send a motherboard back once, when I was building the machine. I ordered a SUPERMICRO MBD-X8DAH+-F-O Dual LGA 1366 Intel 5520 Enhanced Extended ATX Dual Intel Xeon 5500 and 5600 Series Server Motherboard. Well, when it got there, it would not fit in my SUPERMICRO CSE-743TQ-865B-SQ Black Pedestal Server Case which I had just ordered. I thought extended ATX was extended ATX, but I was wrong. It was an ENHANCED extended ATX board, which is 13.68" x 13". REGULAR extended ATX cases fit 12" x 13" boards. Since I liked the case, and figured the shipping would be much greater, I sent the first MB back, had to pay a restocking fee, and of course wait some more. That, along with the cooling fans, the potentially failing drive, and now this, has been more work than I ever imagined.
My first server that I built was a P4 3 GHz with 1M L2 cache and 800 MHz FSB, 2 GB of non-ECC DDR400, three 36 GB Raptor HDs in RAID0, and a Radeon 9200 in an ASUS P4P800-Deluxe MB with a cheap case, and Server 2003, Enterprise Edition. I will tell you that this has been a very steep learning curve with Server 2008 Enterprise Edition, along with "real" server hardware. The damn thing is going to be obsolete before I finish getting it built! Now I see servers that take up to 192GB of 1333 / 1066 / 800MHz DDR3 ECC Registered DIMM, but as a new feature can instead take up to 384GB of 1066MHz ECC LRDIMM, whatever that is. I guess this new RAM that I have never heard of is why the memory went down from $98 to $38 in 8 months. Now I have to figure out what LRDIMMs are, what makes them so special, and how expensive they are. Maybe I will have to take my memory back after all. At least I bought on the tock side of the Intel processor line, so it is the most modern of the server chips that are going to be phased out.

Just an off-the-wall question: even with the new Sandy Bridge stuff coming out, and there is the Extreme Edition 990X, will there be an Extreme 1000X, or will they go with the 2xxx series line? They went back down to dual channel memory instead of triple channel when they switched from socket 1366 to 1155. And why 1155 when they just had 1156? Just cut off the corner pin on your 2600K and it will fit. Just kidding, don't anyone try that. The rumor mill has it that they will have quad channel memory with the new socket.

Well, they said no to waiving the fee, and I replied that I wished that they would lower the price, but if not it is OK. I just need to get it here without any further delay, so I signed the agreement form and emailed it back to them. I hope it gets here by the end of the week, and I think it can if they send it out today. I hate having 24GB of memory and a $1003.99 processor as paperweights. I have been thinking about what you said.
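For what it's worth, the memory price drop mentioned earlier ($97.99 down to $38.99 a stick) adds up fast across a fully populated board. Quick arithmetic using only the figures quoted in the thread:

```python
# DDR3 price drop quoted earlier in the thread: $97.99 -> $38.99 per 4GB stick.
old_price, new_price, slots = 97.99, 38.99, 12

total_old = old_price * slots
total_new = new_price * slots

print(f"Full 12-slot population: ${total_old:.2f} old vs ${total_new:.2f} new")
print(f"Savings by waiting: ${total_old - total_new:.2f}")  # $708.00
```

Waiting 8 months on the RAM saved more than the motherboard cost in the first place.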
I guess I should get a bigger battery backup, as well as a battery backup for my RAID controller, but I have been thinking about buying a cheap generator just for that purpose. I am sure that with an 865 watt power supply and other items, I should be fine with a 1,000 watt generator. I don't have a lot to spend, and I was wondering what you thought about the quality of a cheap generator. I guess as long as I don't overload it and have as much gas as I need, it should keep on going. I specifically requested that they test it with two processors installed, all memory banks populated, and that they flash the motherboard BIOS to the latest version. Oh well, there goes $120 that I will never see again, and $536 that I will not have available until I get the new MB and ship the old one back, which could be up to two weeks. Oh well, you live and you learn. I am waiting to hear back from them with an RMA number. That is the latest for now.

Well, if they do the testing and flash the MB, at least you'll feel like the 120 dollars goes towards something, although I'd still be ticked. I really don't like the generator I have right now. It's about 10 years old with a manual pull start and sometimes it's a b*tch to start. I've had to drag it in the house and put it in front of the woodstove to warm it up so it will fire. I've been seriously looking at ones with a battery and no manual start, but they are a bit pricey; wheels would be a nice feature too, because the thing weighs a ton.

Contrast the service of Supermicro with that of a company with outstanding customer support, Seagate. My MegaRAID controller keeps predicting drive failure for a certain drive, and I communicated by email with Seagate about this, and they told me to RMA it. I chose to do an advance return, and for this they also send you a new or reconditioned drive, second day air. You then have, I think, 25 days to return it, or they will charge your card $240, not too far from the retail price.
And the $9.95 service fee INCLUDES return shipping! I will get the new drive first, and only be out of pocket around $11.04. The strange thing about it is that they included tax for a grand total of $10.55, but it is showing up on online banking as $11.04; no real big deal, just more of a curiosity thing. They don't bill you the $240 unless you don't return the product, instead of treating you like a criminal and assuming you are out to screw them. LSI is another good example. I think I read somewhere that they offer advance returns for the duration of their 3 year warranty. ASUS, Tyan and Intel have 3 year warranties on many of their server boards. I know Supermicro has the reputation of being reliable, but I wonder if they don't just market themselves that way. They only offer a 1 year parts warranty, and shitty service, or at the very least overpriced service. I will think long and hard before I buy another Supermicro product, if I ever do.

I am going to get myself that 800 rated watt / 900 max watt generator from Harbor Freight. Even though they are not known to have the highest quality products that money can buy, I read almost exclusively good ratings. It is rated at 4.5 out of 5 stars by a total of 139 reviews, and 89% said they would recommend it to a friend. I am sure with my APC it should keep the power good and clean. Have you noticed any drastic changes in voltage when running on generator, or does it stay pretty much within spec? It runs on 89 octane with a 50:1 oil mixture; I'm not too thrilled about that, as I would rather use straight gas and oil, but on sale it is $89.99, so I will give it a shot. It only has a 1.1 gallon gas tank, but is supposed to run for 5 hours at 50% load. I think if I start to offer web hosting on a larger scale, customers would love to hear that I have battery backups and generators to keep things going. Now if I could only rely on Comcast to be mission critical.
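The generator's quoted figures (1.1 gallon tank, 5 hours of runtime at 50% of the 900 W max) work out to a simple fuel burn rate, which makes it easy to plan how much gas to keep on hand for a long outage. A sketch using only the numbers above (assuming burn rate stays constant at that load, which is a simplification):

```python
# Quoted generator specs: 1.1 gal tank, 5 hours at 50% load (i.e. ~450 W).
tank_gal = 1.1
hours_at_half_load = 5.0
max_watts = 900

burn_rate = tank_gal / hours_at_half_load  # gallons per hour at ~450 W
print(f"Burn rate at 50% load: {burn_rate:.2f} gal/h")

# Fuel needed to ride out a 12-hour outage at the same load:
print(f"12-hour outage: {burn_rate * 12:.2f} gal")
```

At roughly a quarter gallon an hour, even a rural all-day outage is a couple of gas cans, not a logistics problem.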
I love their speeds, but there have been times when it goes down monthly or more often, and when I had DSL, I think I had 2 outages in three years, but the speed is terrible. I wish we could get Verizon FiOS, but that is not part of the monopoly agreement.

Yeah, I got my replacement motherboard today, and both CPUs work! You know, the one I paid $120 for cross shipment? Well, now the MEMORY doesn't work right. All 48GB worked fine (24GB per CPU) on my old motherboard, but now I have had the BIOS register up to 28GB total, and when I add more, the total goes down. If I install all the memory, it usually goes down to 16 GB, but ranges between 16 and 28. There are 12 slots, 6 per CPU, and each identical module is a 4 GB ECC Registered Server Memory stick from the SUPERMICRO approved memory list. Sometimes during POST I get the error message: Un-correctable DRAM ECC Error detected at CPU01/DIMM1A. Press F1 to continue. The funny thing is it does it with different modules, you know the ones, the ones that all worked on the other board. Is SUPERMICRO trying to drive me insane in addition to ripping me off? Do I now have to decide if I want a motherboard that supports two CPUs, or a motherboard that works with the RAM in it? I have been trying different things for the last 12 hours, and quite frankly am at my wits' end. Is it too much to ask that a motherboard allow two identical CPUs to work simultaneously, and allow all of the RAM to be used, especially since the modules are all the same size, manufacturer and specifications? Am I asking too much? Am I being unrealistic? Am I setting my goals too high?
If technical support cannot fix it over the phone tomorrow, is it not reasonable to expect SUPERMICRO to pay for next day air return of their defective motherboard, and upon receipt, send out a brand new one that has had two X5650 CPUs attached to it with all 12 memory slots filled with Kingston ValueRAM 4 GB memory modules, and which ran the Prime95 torture test on all 24 threads for 24 hours? What the hell is it going to take? An act of Congress? An act of God? A lemon law? At this point, if they cannot fix this within the next three business days, I just want a full refund of the original purchase price, a full refund of the cross-shipment fee, and the return of my money they are holding hostage. It takes a good hour or so to shut down a server, completely disassemble it, exchange the motherboard, and then hook everything back up. I am seriously wondering if they didn't do this on purpose. Maybe because I didn't put "test with memory" on the original work order (I only put "test with 2 CPUs"), but after that I told them on every correspondence that I wanted it tested with memory. I can think of a lot of things I would like to do to these worthless communist Chinese a$$holes, but all of them would end with me in the electric chair. Besides, I don't want to drive all the way to California. All kidding aside, is this going to be two more weeks, and then they say I didn't get my original motherboard back in time so we are keeping your $536? I am not writing this company off yet, for the sole reason that I had heard that they produced reliable products. I am at a loss as to what to do next. They know that only about $900 of the parts in my $5,000 computer came from them, so really, why should they give a damn? I am sure there are plenty of corporate customers that buy tens of thousands of dollars of their products. I am running out of options. I am not bragging when I say my computer cost $5,000, I am just putting things into perspective.
Neither of our family vehicles is worth as much as this computer, and they are 11 and 15 years old. I have a broken down $5,000 pile of junk, and all of it works except the SUPERMICRO part. I am beginning to wish I had never bought that second processor, but then why did I waste all of this money?

Well, that's some good news and bad! Good you got the board, but crappy you're having problems with it. I googled your error and found a guy was having similar problems with a Supermicro board and documented everything he did. He ended up updating the BIOS and that worked to fix it. You can read his notes here: [...] MM00_Issue I know you've probably already updated the BIOS, but I thought I'd send the info of what he did. I'd be feeling frustrated at this point, because that's exactly how I felt when I went through the 2nd CPU issue; hopefully things will work out and this will be an unpleasant memory in the near future.

Well, I already had that BIOS, but just to be sure I downloaded it as you suggested and re-flashed the BIOS. That caused the beep code to go from four beeps to three beeps during POST, but it really varies. Four beeps is not listed in the Supermicro documentation. According to this:

3 short (Base 64K memory failure): A memory failure has occurred in the first 64K of RAM. The RAM IC is probably bad.
4 short (System timer failure): The system clock/timer IC has failed, or there is a memory error in the first bank of memory.

It has to be the board. The memory worked in the other computer. Maybe I will take the motherboard out and set it on the wooden desk and see if that helps, but I doubt it would. I guess I need to call Supermicro.

This has got to be pretty darn frustrating, and I don't think I'd be able to hold my temper while speaking with Supermicro! I hope they pay for all costs involving the recent MB... any word yet?

No, I have been busy with other stuff, including a Kingston RMA as well.
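The beep-code meanings quoted above lend themselves to a small lookup table. A caveat: POST beep codes differ between BIOS vendors and even between boards, so the sketch below only contains the two codes cited in this thread; always check your own board's manual:

```python
# POST beep codes as cited in the thread above. Beep codes are NOT universal:
# different BIOS vendors (AMI, Award, Phoenix) and boards use different tables,
# so treat this as thread-specific, not authoritative.
BEEP_CODES = {
    3: "Base 64K memory failure: memory failure in the first 64K of RAM; "
       "the RAM IC is probably bad",
    4: "System timer failure: clock/timer IC failed, or memory error in "
       "the first bank of memory",
}

def describe(short_beeps):
    """Return the meaning of a short-beep count, if we know it."""
    return BEEP_CODES.get(
        short_beeps, "Unknown code: consult the motherboard manual"
    )

print(describe(3))
print(describe(4))
print(describe(7))  # not in the thread's table
```

Notably, the four-beep code points at the first bank of memory too, which is consistent with the "Un-correctable DRAM ECC Error detected at CPU01/DIMM1A" POST message reported earlier.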
It seems as though Supermicro's motherboard fried three of my memory modules. The strange thing is that with two of the sticks nothing happens, it does not POST (on the new board, as I wasn't going to risk ruining the known good memory I had left in there), and with the other one I get the 5 beeps and then the long beep, indicating no memory. I need to get this taken care of ASAP so I don't get charged for it. I don't know whether to write or call, seeing as how it was an online cross-shipment return. I don't know how it happened, because all the memory worked before. I have 9 out of 12 good sticks remaining, but I have to wait on my RMA from Kingston to get the other three replacement modules (even though that is not critical at the moment), and I also need another motherboard before the memory will do me any good. It seems like I am having all kinds of trouble from my enterprise class equipment. I never had nearly the same amount of problems on consumer hardware.

And to top it all off, I had to replace a hard drive that my LSI 9260-4i MegaRAID controller card predicted was failing. Well, after getting a replacement and doing a rebuild, I moved the old drive to the onboard LSI SAS2008 controller, ran Seagate SeaTools on it with all the tests, and it found no errors. The MegaRAID Manager shows the PredFail Count at 0 on the new controller. What the hell. Now I guess I wipe the drive, sanitize it good, and send it back to Seagate. Then they can test it, find nothing wrong with it, take off the label, replace it with one that says re-certified, and send it out to the next customer. What gives? Sorry for the rant, I just do not understand what is going on. Was the original adapter right, or is the onboard adapter along with SeaTools right? Who knows. There were a lot of extra codes when I checked the SMART status of the supposedly bad drive with Hard Disk Sentinel, and those codes were not in the other drives, so maybe something is awry.
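One more way to sanity-check a drive outside the RAID stack is smartmontools: `smartctl -H` prints an overall health verdict, and for disks behind a MegaRAID controller it can address individual members with `-d megaraid,N`. A hypothetical parsing sketch; the sample text below is illustrative, not real output from this system (SAS drives report a "SMART Health Status" line, ATA drives a "SMART overall-health self-assessment test result" line):

```python
# Hypothetical sketch: pull the overall health verdict out of `smartctl -H`
# output. On a MegaRAID setup you would run something like
#   smartctl -H -d megaraid,0 /dev/sda
# to reach a single member drive. SAMPLE_OUTPUT below is made up for testing.

SAMPLE_OUTPUT = """\
=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
"""

def health_status(smartctl_text):
    """Return the health verdict string from smartctl -H output, or None."""
    for line in smartctl_text.splitlines():
        # SAS/SCSI drives print the first form, ATA drives the second.
        for prefix in ("SMART Health Status:",
                       "SMART overall-health self-assessment test result:"):
            if line.startswith(prefix):
                return line.split(":", 1)[1].strip()
    return None

print(health_status(SAMPLE_OUTPUT))  # OK
```

If the controller keeps incrementing PredFail while `smartctl` and SeaTools both report the drive healthy, that at least narrows the disagreement down to the adapter rather than the disk.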
That PredFail Count got up to something like 9, but kept resetting after reboots. This is all one big mess that I hope to get straightened out soon. I will try a pre-boot (non-Windows) version of the tool and see what comes up, and then install my floppy drive and run the SCSIMax tool as well, even though I am not sure if it works on SAS drives. I guess I will try Western Digital's tools too. I hate to send a good drive back.

Well, they are finally shipping me a replacement board. It seems as though they have thoroughly tested it, and everything seems OK. I will keep my fingers crossed and pray that it makes it here fine and still works OK. It is scheduled to be delivered by 3 PM tomorrow. Maybe the motherboard will help with some of the other problems. My LSI 9260-4i MegaRAID controller card with 512 MB DDR2 onboard cache has an alarm present, but I have disabled it. In MegaRAID Manager, it says that the Virtual Disk State is Optimal, and that all four drives in the RAID5 array are Online. There are no Media Error Counts or Predicted Failure Counts on any of them. The MRM does not list any errors. Any idea what could be going on? Is it safe to ignore the alarm (which is set to disabled by default, I do believe), or should I be worried? I will keep everyone updated.

Wow, you have really been around the block with this, haven't you? On the plus side, you will know the system inside and out. But what a hassle for sure. No idea why an alarm would be present, especially if no error is listed; google might help? Firmware update maybe? It would bug me knowing there's something there.

I did in fact flash the MegaRAID controller with the newest BIOS. It is getting worse and worse. The maximum read speeds from the RAID5 array have gone from around 850 MB/s to around 400, the sustained read speeds have gone from around 350 MB/s to 275, and I have been having system reboots with fatal cache errors.
The event log said that the file system on a certain partition had become corrupt and unusable. I ran chkdsk and that seemingly fixed the problem, but I have been having unexplained reboots. I am going to RMA the controller. As far as good news goes, my replacement motherboard is working just fine with both physical CPUs and all 24 threads. I did some more testing on that bank of memory, and it was only one stick that was bad, so I am RMAing it too. I have the other two 4GB modules, but because this machine uses triple channel memory, I will wait until I get the replacement before I add it and the other two. I am hoping that the failing LSI MegaRAID card is the cause of the random reboots. Other than the reboots, the MB has been working fine, and I think they may have shipped me a brand new one, but I am not certain.

There is another strange occurrence. The BIOS, CPU-Z, System Information, and other applications all register 36GB. Task Manager registers 36GB under Total Physical Memory, but the Commit Charge is 10/35, for example, depending on how much memory I am using, not 10/36. No big deal, but I still wonder why. I can't wait to add the other three modules for a total of 48GB.

I made another purchase recently, due to the fact that I do racing sims and listen to music occasionally (I usually listen to talk radio, so that doesn't really justify it, but even that is notably better). The main thing I couldn't stand was that the wires always needed to be adjusted to get sound, or to get rid of static. It was a cheap model, and I replaced it with the Bose Companion 3, Series 2. I bought it mainly because my old speakers sucked, and I wanted decent audio quality. I have always wanted to try Bose. The deciding factor was that Sam's Club had it on their discount rack for $135.79, instead of the $194.74 regular price. The only reason for the markdown was that it was open box. I cut it open and looked inside, and it was in pristine condition; it didn't even look like it had been used.
For a savings of over $60 (with tax), I couldn't go wrong. They usually sell closer to $200 online. I know it is not the absolute best sound system in the world, but it is excellent and more than met my expectations. The sound is truly amazing for the size and price. I would recommend this system to anyone. The desktop speakers take up a very small amount of space, the Acoustimass module hides away, and the effect is amazing. It really spreads out the sound and makes it come to life. If you close your eyes, you can visualize where all of the musicians are, and so forth. If you don't mind paying full price, go ahead and find one online, or even better, check for deals.

I have several rules for deciding on online purchases. If it is a well-known retailer such as Newegg or Amazon, I look for a high customer service rating, or how many eggs; I prefer at least 98% satisfaction for sellers and around 80% of reviews at 5 and 4 eggs combined. For example, out of 100 reviews, I would prefer at least 70% with 5 eggs and 10% or more with 4 eggs. For unknown sellers or companies, I first use NetCraft to determine how long they have had a website up and such. If a deal seems too good to be true, it probably is. Then I check RipoffReport, the BBB, and ConsumerReports.org (I have a subscription, and I renewed for something like $19.00 for one year) to start with. I think the normal price is $26 a year. It is well worth it; it pays for itself in one purchase in many cases. My parents chose not to use it, or maybe I did not offer it first. They purchased an inexpensive dryer, and out of about 90 models, they picked one which only had three models worse than it. That means about 85 models were better than this one, and a Consumer Reports Best Buy model which was way up on the scale only cost around $100 extra. How stupid on their part. Oh well, it is their dryer and their money. OK, enough of my rants. I am starting to sound like a salesman.
Hopefully when I get my controller and memory module replaced, I will have the perfect system (until I can afford an 8-way server). I have over $5,750 in the whole system, including everything, so I just want everything to work properly. I wonder if the motherboard has not been responsible for some of the other problems. Oh well, I see light at the end of the tunnel. Hopefully, with the new stick of RAM and controller, I will be golden. Next is an 800 watt Honeywell inverter-type generator, which is designed for sensitive electronics, and an onboard battery backup unit for the LSI MegaRAID card; then I can turn off Windows write-cache flushing to disk altogether. An SSD for my pagefile and the CacheCade software are also on the wanted list. I have room for 4 more 3.5" hard drives in the backplane, but I am not certain whether I can use the onboard LSI 2008 SAS controller for the other 4 drives. Supermicro got their board back last Thursday, and I am waiting on my $536.00 refund. Won't that be nice?

Everything seems to be working great, except I get random reboots from time to time. Seeing as how I have already RMA'd two motherboards, and this one works with both physical CPUs (all 12 cores and all 24 threads) and all 48 GB of ECC Registered Server Memory, I think I will keep it and hope to find a solution elsewhere. I got my replacement memory module and RAID card. The memory works fine, but I will get into the controller card later. I have had 10 reboots since 10/1/2011, and they do not seem to be related, but I am coming up with a theory. Yesterday it crashed while playing F1 2010 (a racing sim), and today, during a GPU stress test, the screen turned white a little after an hour. The GPU never got over 60C, and it is a Radeon IceQ 5670, the only piece of consumer grade hardware that I have in my server. I could not do anything. It locked up, and Ctrl+Alt+Del did nothing, nor did Ctrl+Shift+Esc. The strange thing is the Num Lock worked, but not Scroll Lock or Caps Lock.
I had to manually press the power button, and I got the exact same event in the event log as the other nine: Event 41. Then, during or after POST, I get this message from the LSI controller: "Cache data was lost due to an unexpected power-off or reboot during a write operation, but the adapter has recovered. This could be due to memory problems, bad battery, or you may not have a battery installed. Press any key to continue, or 'C' to load the configuration utility."

Well, I don't yet have a BBU for the controller, so the battery is not the problem. I am going to get one soon, hopefully this week. The system memory is just fine, but I assume they may be talking about the memory on the card, and if the RMA replacement is having memory problems, then it needs to go back too. Could the controller card work with bad memory, with all the speed just coming from the four disk RAID5 array? It has a sticker on it that does not instill a great deal of confidence. It reads, "Serviceable Used Part". That means to me it was ready for the trash and someone said no, it still works. The funny thing is that it is slower than the one I was planning on RMAing. I moved it from the lowest slot, closest to the bottom of the case, where there was less than half an inch between the heat spreader and the case, to the PCI-Express slot above it, and the speeds went up dramatically. My guess is that it gets better airflow there, or for some reason that slot performs better, although not as well as my card did when new. I thought they were all x8. It makes no sense to me.

I have run Windows Memory Diagnostics several times, including the option where you press F1 and can run all of the more advanced memory tests, which takes several hours. It came back fine with no problems. The other day I ran the AIDA64 System Stability Test [trial version] for over one and a half hours, with the options checked to stress the CPU, FPU, cache and system memory.
None of the cores really got much hotter than 65C, maybe 68C for a second, and there were no problems. I ran Prime95 with 24 execution threads for three hours and 16 minutes, in which time it completed 78 tests with 0 errors and 0 warnings. I monitor the voltages in Supermicro SD3, and they are always well within tolerance (although I have never been staring at that screen when a crash happens). Even though I have told the computer not to automatically restart on system crashes, there still is no blue screen, which makes me all warm and fuzzy inside but keeps me from figuring out what the problem is.

Do power switches ever go bad? I know they may quit working, but have you ever known one to shut down the computer by itself? I am just trying to think of anything. I want to offer a web-hosting service, but I must have a rock-solid machine that does not crash in order to do so.

Gag me with a spoon, I could shut the machine down, change the jumper, take out my 1GB 5670 Radeon card, switch back from HDMI to VGA, and hook it onto the motherboard's onboard 8MB video. If it ran fine for a week or so, I think I could safely say the video card is the culprit. I don't want to do that because it would suck. My buddy said I should put it in a colo and VPN in when I need to, but just let it sit there and run. I know this is a server, but I built it as a dual-purpose machine. With this much money in it, I want to play with it too!

Here is the simple error that I get: "The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly."
Here is the more technical log (in which I have put an X in the place of some numbers or letters):

<Event xmlns="">
  <System>
    <Provider Name="Microsoft-Windows-Kernel-Power" Guid="{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}" />
    <EventID>41</EventID>
    <Version>2</Version>
    <Level>1</Level>
    <Task>63</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8000000000000002</Keywords>
    <TimeCreated SystemTime="2011-11-14T01:34:35.938850000Z" />
    <EventRecordID>278807</EventRecordID>
    <Correlation />
    <Execution ProcessID="4" ThreadID="8" />
    <Channel>System</Channel>
    <Computer>XXXX</Computer>
    <Security UserID="X-X-X-XX" />
  </System>
</Event>

Here is the friendly version:

- System
  - Provider
    [ Name] Microsoft-Windows-Kernel-Power
    [ Guid] {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
  - EventID 41
  - Version 2
  - Level 1
  - Task 63
  - Opcode 0
  - Keywords 0x8000000000000002
  - TimeCreated
    [ SystemTime] 2011-11-04T04:34:56.531651000Z
  - EventRecordID 267944
  - Correlation
  - Execution
    [ ProcessID] 4
    [ ThreadID] 8
  - Channel System
  - Computer XXXX
  - Security
    [ UserID] X-X-X-XX
- EventData
  - BugcheckCode 0
  - BugcheckParameter1 0x0
  - BugcheckParameter2 0x0
  - BugcheckParameter3 0x0
  - BugcheckParameter4 0x0
  - SleepInProgress false
  - PowerButtonTimestamp 0

This is driving me nuts. The main reason I spent this kind of money on this machine was to make it bulletproof, but my old P4, which I spent about a third the price on five years ago when parts were expensive, has proven much more reliable in some cases. I know I should have started a new thread, but someone please help! I am at my wits' end on this. It is not right to have over $5,000 in an enterprise server and have all these problems. I will be up to $6,250 with a generator and a battery backup unit for my LSI 9260-4i controller, for everything. Let's not even start thinking about adding four more hard drives, because then we will be topping the $7K mark! What have I gotten myself into? I'm not bragging, but I am seriously starting to wonder why I put so much money into this system.
I could enable hibernation at 100% and use a full-sized pagefile. I wonder if that would help. That would only waste 98 gigabytes of hard drive space, which is 2GB less than my boot, pagefile, and crash dump partition (where Windows is located). My System Reserved partition is a measly 100MB. I don't suppose I can put the hibernation and pagefile files on a different partition and expect it to work properly. I am of the impression that they have to be on the root partition.

Sorry, I'm at a loss as to what it could be. How long is the system on for before you get the error? Could it be a heat issue? Have you considered installing more fans?

It did it yesterday as I was replying to this message. I think it might have to do with simultaneous access of the SMBus, but I am not sure how. SpeedFan's Exotics page killed it. I had a game paused and was writing an email when it happened today. Whatever the cause, I am getting sick of it. Intel lists the TCASE as 81.3°C, although I am not exactly certain what that means. I probably could benefit from two more fans and will most likely get them, but nothing in my system has ever gotten anywhere near that temperature. Under the most severe CPU testing, with all 24 threads running at 100%, the cores rarely ever go above 65C, and at rest they are usually less than the system temperature. That is why Intel started using the reading of "Low": they said that below around 45 or 50, I think, the measurements of the cores' temperatures are not that accurate. Most everything in my system stays at or below 45C during general usage, with the power supply closer to 50 (it is the super-quiet 875-watt model). Just Error 41, "your system did not cleanly restart." I think I might have another bad LSI adapter. What are the odds of that? Who knows. It took three motherboards from Supermicro to get it right.
It seems like CPUID's Hardware Monitor and CPU-Z work fine, but their PC Wizard 2010 makes it crash (I think, so I don't want to try). How about this for good measure: I have seen this exact thing posted only one other time, so maybe I need to leave off some information. There was no resolution, although it eventually came to light that he had things in the 120C range in there. Gee, I wonder why increased CPU usage caused his computer to slow down. The guy asked for answers for weeks before that was discovered. There was not another post after that. Either his computer bit the dust, or he wised up and purchased some canned air.

I found this in Device Manager:

General tab, Device status: No drivers are installed for this device.
Driver tab lists the below-named device as installed: Intel(R) 7500/5520/5500/X58 I/O Hub I/OxAPIC Interrupt Controller - 342D
Resource settings: Memory Range 00000000FEC8A000 - 00000000FEC8AFFF
Conflicting device list: Memory Range 00000000FEC8A000 - 00000000FEC8AFFF used by: ACPI x64-based PC, System board

Under the Details tab, with Power Data selected, it lists:

Current power state: D3
Power capabilities: 00000099
  PDCAP_D0_SUPPORTED
  PDCAP_D3_SUPPORTED
  PDCAP_WAKE_FROM_D0_SUPPORTED
  PDCAP_WAKE_FROM_D3_SUPPORTED
Power state mappings:
  S0 -> D0
  S1 -> D3
  S2 -> Unspecified
  S3 -> Unspecified
  S4 -> D3
  S5 -> D3

I do not know if I am looking squarely at the problem, or if this is pretty much standard. I am generally of the opinion that conflicting devices are not a good thing, although I am not certain that anything can be done in this case. I also don't know about the S2 and S3 states. This might be worth a further look, although I do not know how I could fix such a thing. Is the computer blaming itself?
In System Info, the following conflicts/sharing are listed:

I/O Port 0x00000000-0x0000000F : Direct memory access controller
I/O Port 0x00000000-0x0000000F : PCI bus
I/O Port 0x000003C0-0x000003DF : ATI Radeon HD 5600 Series
I/O Port 0x000003C0-0x000003DF : Intel(R) 7500/5520/X58 I/O Hub PCI Express Root Port 5 - 340C
IRQ 10 : Intel(R) Chipset QuickData Technology device - 3431
IRQ 10 : Intel(R) Chipset QuickData Technology device - 342A
IRQ 11 : Intel(R) Chipset QuickData Technology device - 3429
IRQ 11 : Intel(R) Chipset QuickData Technology device - 3430
IRQ 23 : Intel(R) ICH10 Family USB Enhanced Host Controller - 3A3A
IRQ 23 : Intel(R) ICH10 Family USB Universal Host Controller - 3A34
IRQ 14 : Intel(R) Chipset QuickData Technology device - 3432
IRQ 14 : Intel(R) Chipset QuickData Technology device - 342B
IRQ 14 : Intel(R) ICH10 Family SMBus Controller - 3A30
IRQ 15 : Intel(R) Chipset QuickData Technology device - 3433
IRQ 15 : Intel(R) Chipset QuickData Technology device - 342C
IRQ 16 : Intel(R) ICH10 Family USB Universal Host Controller - 3A37
IRQ 16 : Intel(R) ICH10 Family PCI Express Root Port 6 - 3A4A
IRQ 17 : Intel(R) ICH10 Family PCI Express Root Port 1 - 3A40
IRQ 17 : Intel(R) ICH10 Family PCI Express Root Port 5 - 3A48
Memory Address 0xD0000000-0xDFFFFFFF : ATI Radeon HD 5600 Series
Memory Address 0xD0000000-0xDFFFFFFF : Intel(R) 7500/5520/X58 I/O Hub PCI Express Root Port 5 - 340C
IRQ 18 : Intel(R) ICH10 Family USB Enhanced Host Controller - 3A3C
IRQ 18 : Intel(R) ICH10 Family USB Universal Host Controller - 3A36
IRQ 19 : Intel(R) ICH10 Family 4 port Serial ATA Storage Controller 1 - 3A20
IRQ 19 : Intel(R) ICH10 Family USB Universal Host Controller - 3A39
IRQ 19 : Intel(R) ICH10 Family 2 port Serial ATA Storage Controller 2 - 3A26
IRQ 19 : Intel(R) ICH10 Family USB Universal Host Controller - 3A35
Memory Address 0xA0000-0xBFFFF : ATI Radeon HD 5600 Series
Memory Address 0xA0000-0xBFFFF : PCI bus
Memory Address 0xA0000-0xBFFFF : Intel(R) 7500/5520/X58 I/O Hub PCI Express Root Port 5 - 340C
I/O Port 0x000003B0-0x000003BB : ATI Radeon HD 5600 Series
I/O Port 0x000003B0-0x000003BB : Intel(R) 7500/5520/X58 I/O Hub PCI Express Root Port 5 - 340C
Memory Address 0xFEC8A000-0xFEC8AFFF : Intel(R) 7500/5520/5500/X58 I/O Hub I/OxAPIC Interrupt Controller - 342D
Memory Address 0xFEC8A000-0xFEC8AFFF : System board
I/O Port 0x0000D000-0x0000D0FF : ATI Radeon HD 5600 Series
I/O Port 0x0000D000-0x0000D0FF : Intel(R) 7500/5520/X58 I/O Hub PCI Express Root Port 5 - 340C

Does that look bad? Is there an easy fix? I am not sure how to assign an IRQ, or if I should just let the system handle it. It looks to my untrained eye like the graphics card might be wreaking havoc, but I am no expert. What do you think?

Casually looking over the MegaRAID SAS 9260-4i RAID Controllers Quick Installation Guide, on page 2 of 4, in the right-hand column on the bottom half of the page, I found this: "Step 4: Insert the RAID controller in a PCI Express slot on the motherboard, as shown in Figure 2... Note: This is a PCI Express x8 card and it can operate in x8 or x16 slots."

Well, silly old me somehow got the mistaken idea that all of my PCI Express slots were x8. According to the manual, there are four x8 slots and two x4 slots, but in reality, my motherboard has three x4 slots, for a grand total of five. I had moved the card originally because it was sandwiched between the CPU heatsink and the double-width graphics card, with about half an inch on either side. The performance has improved drastically, and I have not had any problems since. I think I will give it a few more days and then decide whether to keep it or send back the RMA replacement part. I cannot believe I did not think of that. That may also explain why my video card (or system) has hung during gameplay. I know you are wondering what that has to do with anything, but here is my answer: the graphics card is an x16 card in an x8 slot, which has an opening in the back, and the RAID controller should be installed in an x8 or x16 slot. That is my theory.
The graphics adapter takes to that slot better, but not the LSI MegaRAID card. I am praying that this will answer my prayers. As I said before, I have an APC 950 VA UPS, but it does nothing when the computer just shuts down with Error 41, with no record in the event log of what may have caused it. I am considering purchasing an LSI00161 MegaRAID LSIiBBU07 Battery Backup Unit for my array.

My question is twofold. First, is there a need for both, or is the APC suitable? The BBU can store cache data for up to 72 hours in the event of an extended power outage, so that should answer my question. I have had the crashes, and the controller says you either have bad memory, a bad battery, or no battery installed, and that the cache data has been lost, but the adapter can recover. If it could not recover, I might be in trouble. What I am saying is, apart from the problems I have experienced, this machine runs well, and I do not want to re-do anything if I can avoid it.

So here is the second and probably most important question. Say the machine reboots all on its own again, and I had the LSI battery backup unit attached to the MegaRAID card: would that save the data during an unclean shutdown where the UPS doesn't even help, and the machine just powers off on its own? I am seriously considering purchasing it, but if it is not going to do anything better than my UPS, then why waste the money?

As a calculated risk, even though I do not yet have the LSI BBU installed, I have already set the write cache policy to Always Write Back on my server. There are three selections in WebBIOS or MegaRAID Storage Manager: Write Through, Always Write Back, and Write Back with BBU. I have heard folks say that it does go faster with the BBU installed and set to Write Back with BBU. Would there really be any difference between that and Always Write Back, or are they just trying to sell their batteries?
http://www.tomshardware.com/forum/308946-28-supermicro-x8dai-install
SP 2010: Programmatically work with External Lists (BCS) in SharePoint 2010
Posted by Tobias Zimmergren (@zimmergren)

Introduction

Article 2 in the small BCS series:
1. SP 2010: Getting started with the Business Connectivity Services (BCS)
2. SP 2010: Programmatically work with External Lists (BCS) in SharePoint 2010
3. SP 2010: Programmatically work with External Lists (BCS) using the Client Object Model

In my previous article I talked about how you can set up a simple BCS configuration to fetch and work with external data in SharePoint 2010. In this article I will talk about how you can utilize the SharePoint 2010 object model to work with that external data, directly from the SharePoint API. It's all really simple!

Working with data in an External List using the SharePoint Object Model

The code in this sample doesn't really differ from the way you fetch information from any other list in SharePoint (2007 or 2010). This – of course – is very welcome news, as we do not need to learn any new frameworks or tools to work with the data in our external lists. It simply works like any other SPList, basically.

Retrieving external data, made simple:

When fetching items from an external list, you can do so by utilizing the good old SPList object. We do not need to work with any other types of namespaces or frameworks in order to do this. In my SQL Server I've got a table called "ProductList", which is filled with the following data:

Fetching some items from the external list, and displaying them in a console app:

// Product List is my external list, working with data in the SQL Server!
SPList list = web.Lists["Product List"];
SPQuery q = new SPQuery();
q.Query = "<Where><IsNotNull><FieldRef Name='ProductID' /></IsNotNull></Where>";
q.RowLimit = 100;
SPListItemCollection col = list.GetItems(q);
foreach (SPListItem item in col)
    Console.WriteLine(item["Name"].ToString());

This will render the following result (fetched from the database). The things you see in the console window are fetched straight from the SQL Server (using a BCS connection through the External List).

Writing data to the External List (hence, writing to the SQL Server)

Seriously, this is way too easy as well...

// Get the external list
SPList list = web.Lists["Product List"];
// Use the traditional approach to create SPListItems and hook them up with the list
SPListItem item = list.Items.Add();
item["Name"] = "Sample Product Wohoo";
item["Description"] = "Sample Description Wohoo";
item.Update();

Upon running this code in your SharePoint application, it will create the SPListItem object and add a Name and Description. When you call .Update() it will push this data through the data source connection to your SQL Server. Here's what the updated data looks like:

We're running a Beta product!

As you can imagine, there are a ton of new cool things to work with in SharePoint 2010 – where the BCS is one of them. This article discusses the very basics of how you can retrieve information from these lists using the normal API approach. At the time of this writing (during the public beta) there aren't any measurements of performance or of what impact this has on the server in comparison to alternative ways to fetch and work with the data. As time goes on, there will probably be some new information on this – I'll keep you posted when I know more.

Summary

As you can see, working with external data from the SharePoint API isn't very hard to do.
What you need to make sure of is that you have an external list set up somewhere (see this article for how you can do that), and then you can simply use the normal SPList object from the SharePoint object model to work with the external list and its external data from the SQL Server (in my case). So if you haven't already: get on the SharePoint 2010 wagon and enjoy the ride!
http://zimmergren.net/technical/sp-2010-programmatically-work-with-external-lists-bcs-in-sharepoint-2010
Magical::Hooker::Decorate - Decorate an SV using magic hooks

SYNOPSIS

  # this object serves as a namespace, you can only get values that were set
  # by it, so you probably want to have a single instance for your module in
  # some global variable
  my $hooker = Magical::Hooker::Decorate->new;

  # associate an SV like this
  $hooker->set(\$var, $decoration);

  # get the associated value like this:
  my $decoration = $hooker->get(\$var);

  /* the C API */
  magical_hooker_decoration_set(target_sv, decoration_sv, (void *)self);
  decoration_sv = magical_hooker_decoration_get(target_sv, (void *)self);

DESCRIPTION

This module provides a C API and a thin Perl wrapper that lets you associate a value with any SV, much like Hash::Util::FieldHash does. The decoration will be reference counted, so DESTROY will be called when the target disappears. This lets you do things like:

  $hooker->set($object, Scope::Guard->new(sub { warn "object just died" }));

and of course also access the value of the decoration. The code was used to associate code references created with newXS with their associated objects in Moose's experimental XS branch.

new: Takes no arguments, and returns a handle. All the association methods use storage that is private to the handle. Note that $target is dereferenced before casting magic.

get: Returns the value.

remove: Removes the value.

The C set function: Creates a new MAGIC entry on sv and stores obj in the mg_obj. mg_ptr is set to ptr, which allows for namespacing. In the OO API, sv is the dereferenced target, and ptr is the dereferenced $self. ptr can be NULL, but then you're limited to one decoration per SV.

The C get function: Gets the mg_obj.

The C remove function: Removes the MAGIC and returns the mg_obj (after mortalizing it).

The C find function: Gets the MAGIC entry in which the decoration is stored.

ACKNOWLEDGEMENTS

Shawn M Moore (he knows why)

AUTHOR

Yuval Kogman

COPYRIGHT

Copyright (c) 2008, 2009 Yuval Kogman. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~nuffin/Magical-Hooker-Decorate-0.03/lib/Magical/Hooker/Decorate.pm
I blather on and on about Windsor and how I adore it, much to the annoyance of the other devs that are forced to work with me. My last few uses of the Windsor container used Binsor as the configuration language, and some of my chums that are new to the Windsor/DI landscape have had to go from not using Windsor to using it with Binsor. There are not very many good Binsor tutorials out there that I could find, so I figured I would take some of the best Windsor tutorials and redo them using Binsor. These tutorials are, of course, the Bitter Coder tutorials on Windsor, and they are wonderful. I figured someone would have done this by now, but I couldn't find it, so if this is a duplication of someone else's effort, I duly apologize.

First off, a brief definition of Binsor. Binsor is a Domain Specific Language (DSL) written in Boo with the specific purpose of configuring the Windsor container. The "default" configuration option for Windsor (and most .NET based configuration approaches) is XML, which is fine. However, XML can quickly get unwieldy, has no ability to perform any real logic, and (let's face it) is just not sexy. So, off we go, to the first tutorial, found here.

Simple Configuration

So, before we get to the code, in order to get Binsor to work, you'll need the following references:

From Rhino Tools:
- Boo.Lang
- Boo.Lang.Compiler
- Boo.Lang.Extensions
- Boo.Lang.Parser
- Rhino.Commons.Binsor
- Rhino.Commons.Clr
- Rhino.DSL

From Castle:
- Castle.Core
- Castle.DynamicProxy2
- Castle.MicroKernel
- Castle.Windsor

My first solution has 2 projects, one class library and one console app. The above references all go on the console app only.
Our tax calculator looks just like the Bitter Coder's:

namespace BitterCoder.Tutorials.Binsor.Core
{
    public class TaxCalculator
    {
        private decimal _rate = 0.125m;

        public decimal Rate
        {
            set { _rate = value; }
            get { return _rate; }
        }

        public decimal CalculateTax(decimal gross)
        {
            return Math.Round(_rate * gross, 2);
        }
    }
}

Here's where we take a different path from the XML configuration. Setting up the container looks for the Binsor script named here ("Windsor.boo"). When using Binsor, make sure you set the right build properties for the .boo file. In this case, we want it to be copied to the output directory. If, however, you were working in a web environment, you could just mark the file as content and it'll be copied to the root of your directory.

Now, let's look at the console app and the setting up of the container:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Castle.Core;
using Castle.Windsor;
using Rhino.Commons.Binsor;
using BitterCoder.Tutorials.Binsor.Core;

namespace BitterCoder.Tutorials.Binsor.ConsoleTester
{
    class Program
    {
        static void Main(string[] args)
        {
            IWindsorContainer container = new WindsorContainer().Install(BinsorScript.FromFile("windsor.boo"));
            TaxCalculator calculator = container.Resolve<TaxCalculator>();
            decimal gross = 100;
            decimal tax = calculator.CalculateTax(gross);
            Console.WriteLine("Gross: {0}, Tax: {1}", gross, tax);
            Console.Read();
        }
    }
}

So, still just one line... groovy. We use the Rhino.Commons.Binsor.BinsorScript class to read windsor.boo and convert it to the parameters Windsor needs. Let's look at the windsor.boo file now:

import System
import System.Reflection
import BitterCoder.Tutorials.Binsor.Core

component "tax.calculator", TaxCalculator:
    Rate = Convert.ToDecimal(0.25)

Five lines! That's it! And now you see one of the reasons why people like to use Binsor. It's concise, readable, and can contain logic.
As you can see, I have access to any class in .NET that I wish to import, shown by my Convert.ToDecimal call to convert the rate to a decimal. If you run the console app, you'll see:

Gross: 100, Tax: 25.00

just like in the other tutorial. Go ahead, mess with the rate and run it again. It works. So, that's the first one down. Next time we look at configuring arrays...

July 11th, 2008 at 10:54 pm
Yes!!! This is great stuff!

May 31st, 2010 at 11:00 pm
Wow, this works well, but it gave me a couple of problems. The first is that I needed to add a binding redirect for Boo.Lang.Compiler into my app.config to get this to run at all. The second is that the Boo syntax above has had extended characters added by a word processor and needs re-typing. You end up with all sorts of funny errors, including complaints about line endings (RPAREN in Boo compiler speak).

June 1st, 2010 at 5:29 pm
Hey Adam,
Yeah, this project likely could use a porting to the latest Windsor 2.0 code. I haven't looked at it in a while. Thanks for finding the issues.
Glenn
https://ruprict.wordpress.com/2008/07/11/the-bitter-coder-tutorials-binsor-style/
jdbc odbc connection
I need a program in Java which uses a JDBC-ODBC connection.
Hi, you can create an ODBC data source on your Windows computer and then access it in your Java program. Read the JDBC-ODBC example. Hope that it will be helpful for you. Thanks.

Connection to Database
Hello, I have a website with more than 50 ... database tables. Do I need to open one connection for each section? Thank you ... and viewing hundreds of pages, I won't get any connection error, and the truth ...

connection with xslx - JDBC
Hai to all, I am not able to connect to xlsx (2007 Excel file) using JDBC. Please help me.
Hi Friend, please ... /insertExcelFileData.html. Hope that it will be helpful for you.

JDBC connection and SQL Query
Hi, I'm reading all files one ... variables. I'm trying to execute the following command. Though I use executeQuery ... In short, I read values into an array of temp, and I need to insert into the DB using those.

Server DB connection - JDBC
Hello guys, I want to connect two databases ... One database is on my localhost and the other one is on a server. I can't get it to work. If you have any sample code for getting the connection, please ...

java database connection - JDBC
Sir, I want to join my project with an MS Access database. I am making my project in NetBeans. Please tell me the coding to do so.
Hi Friend, we are providing you a code that will insert
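The answers above all lean on the same two-step recipe: load a driver class, then ask DriverManager for a Connection. A minimal sketch of that recipe with both failure modes made visible; the MySQL driver class name is taken from elsewhere in these snippets, and on a machine without the driver jar on the classpath the call deliberately lands in the ClassNotFoundException branch, which is exactly the first thing to check when a connection fails:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class JdbcSteps {
    // Classic two-step JDBC connect: load the driver class, then ask
    // DriverManager for a Connection. Returns a status string so each
    // failure mode is visible instead of crashing the program.
    static String tryConnect(String driverClass, String url) {
        try {
            Class.forName(driverClass);                  // step 1: register the driver
            try (Connection con = DriverManager.getConnection(url)) {  // step 2: connect
                return "connected to " + con.getMetaData().getDatabaseProductName();
            }
        } catch (ClassNotFoundException e) {
            return "driver not on classpath: " + driverClass;
        } catch (SQLException e) {
            return "SQLException: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Without the MySQL jar installed, this prints the
        // ClassNotFoundException branch rather than connecting.
        System.out.println(tryConnect("com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost:3306/test"));
    }
}
```

With the driver jar on the classpath and a reachable database, the same method would instead report the product name from the connection's metadata.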
Hi Friend, We are providing you a code that will insert SQLException caught: No data found SQLException caught: No data found i m using ms office 2010, n when i run my source code it is giving error SQLException caught:No data found, plz..."); Connection con=DriverManager.getConnection("jdbc:odbc:evati1 JDBC Training, Learn JDBC yourself want to describe you a code that helps you in understanding a JDBC Connection... you brief description of JDBC Steps for making connection with the database... that helps you to understand JDBC Connection Url.For this we have a class JDBC Connection Pool application will give you improved performance if you use the JDBC Connection... database connection in resource pool. You can make your own code for JDBC...JDBC Connection Pool In this section we will learn about JDBC Connection Pool Understanding JDBC warning Methods ();// closing a connection } } When you run this application it will display...; } JDBC getWarnings() And clearWarnings() In JDBC getWarnings() method is used get the warnings reported by the call of Connection, ResultSet, and Statement JDBC connection JDBC connection ![alt text][1]I got exception in Connecting to a MySQL Database in Java. The exception is ClassNotFoundException:com.mysql.jdbc.Driver wat is the problem JDBC - JDBC JDBC JDBC driver class not found:com.mysql.jdbc.Driver..... Am getting an error like this...... i have added the jar files for mysql inside... path. For read more information on JDBC visit to : http Chapter 4. Demonstrate understanding of database connectivity and messaging within IBM WebShpere Application Server connection calls. To get a connection in JDBC 1.0, you would need to call... JDBC 2.0 Standard Extension specification, data sources allow you to manage a pool of connections to a database. 
Using connection pools provides you ...

Understanding Data Source
The JDBC API provides the DataSource interface as an alternative to the DriverManager for establishing a connection.

Thank U - Java Beginners
Thank U very much Sir, it's very very useful for me. From SUSHANT.

Jdbc Program for building Student Information Database - JDBC
import ... l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13, l14, l15; GridLayout gl; Connection con; Statement st ... ("sun.jdbc.odbc.JdbcOdbcDriver"); con = DriverManager.getConnection("jdbc

Could not establish the connection to oracle - JDBC
Hi Friends, I am ... If you are using the Oracle OCI driver, you have to use: Connection conn = DriverManager.getConnection("jdbc:oracle:oci8 ... If you are using the Oracle thin driver, you have to use: Connection conn = DriverManager.getConnection("jdbc:oracle:thin

connection with database - JSP-Servlet
I tried the DSN and connection ... and the connection with the database using JSP code. I get exceptions that I have mailed you. I have wasted more than 15 days just in finding the solution.
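The Oracle answer above distinguishes the OCI and thin driver URL formats, but both snippets are cut off. A small runnable sketch that only prints the two common URL shapes; the host, port, and SID values here are illustrative placeholders, not values from the original post:

```java
public class OracleUrls {
    // OCI driver URL: relies on a local Oracle client install / TNS alias.
    static final String OCI = "jdbc:oracle:oci8:@orcl";
    // Thin driver URL: pure Java, with host:port:SID spelled out explicitly.
    static final String THIN = "jdbc:oracle:thin:@localhost:1521:orcl";

    public static void main(String[] args) {
        System.out.println("oci8: " + OCI);
        System.out.println("thin: " + THIN);
        // Either URL would then be passed to
        // DriverManager.getConnection(url, user, password).
    }
}
```

The practical difference is deployment: the thin URL works with only the driver jar on the classpath, while the OCI URL additionally requires the native Oracle client libraries on the machine.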
dealloc {[list release]; [super dealloc]; } // whereever you set it up ( init jdbc - JDBC ("com.mysql.jdbc.Driver"); Connection con=DriverManager.getConnection("jdbc:mysql://localhost...); } } } in this program, if suppose i have two rows with the value of rs10.getStrign(3) in my table... an error. cannot perform any operations after closing resultset obj jdbc mysql - JDBC for connection u geeting or not. thanks rajanikant Hi, I am sending... for more information. Thanks. Amardeep...jdbc mysql import java.sql.*; public class AllTableName jdbc - JDBC Deletion Example"); Connection con = null; String url = "jdbc:mysql://localhost... friend, What you means this line. ((i want to drop table and delete some...jdbc jdbc Expert:Ramakrishna Statement st1=con.createStatement database connection database connection i wanted to no how to connect sqlite database through netbeans? is it posible to connect it to a database that is on a remote pc? thank you Connection pooling Connection pooling Sir, In my project i want to implement connection pooling with ms-sql server 2005. i wrote the code in JSP like this... <% Connection con=DbCon.getConnection(); Statement stmt JDBC Class Not Found Exception ; } JDBC ClassNotFoundException And SQLException ClassNotFoundException In JDBC When we try to load a database driver which is not in the driver manager class or wrong driver then it throws ClassNotFoundException. You should Java Jdbc connection Java Jdbc connection What are the steps involved for making a connection with a database or how do you connect to a database JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources . You may find this information to be useful for other operations as well... JavaScript through a book or from the Internet, my guess is that you found... are: HTML. You should be able to put together HTML pages. Java JDBC JDBC save a data in the database I need a code to save a data in the database can anyone help? The given code set up the connection... 
information, visit the following link: JDBC Tutorials Tutorial, JDBC API Tutorials Java Database Connectivity(JDBC) Tutorial This tutorial on JDBC explains you... to use JDBC API effectively to develop database driven applications in Java. You.... The JDBC API is very important API and if you are planning to develop database JDBC Connection Example ; } JDBC Connection Example JDBC Connection is an interface of java.sql.*; ... a information about a connection Connection.getMetaData() method. This method.... To create a connection with database you need to call a method JDBC-SERVLET JDBC-SERVLET *while doing connectivity of jdbc with servlet I m... that datasource name in url like jdbc:odbc:msdsn i tried the program.... Thank u for you solution JDBC Architecture the Drivers found in JDBC environment, load the most appropriate driver... the data. JDBC-ODBC is native code not written in java.The connection occurs.... The connection occurs as follows--Client -> JDBC Driver -> Middleware-Net jdbc connection issues jdbc connection issues Hello. kindly pls help in this issue...i have created 11 jsp form wit some attributs in it also created 11 tables... tables with only one jdbc connection program...if possiable pls post a sample code Know About Outsourcing, More About Outsourcing, Useful Information Outsourcing Everything you need to Know about Outsourcing Introduction Let us start at the beginning with a definition of what is outsourcing. Outsourcing can... of IT outsourcing, you can either outsource the entire management of all your connection pooling - JDBC . In JDBC connection pool, a pool of Connection objects is created...++) { gObjPool.addObject(); } Connection connection = java.sql.DriverManager.getConnection("jdbc...connection pooling how to manage connection pooling? 
Hi java - JDBC (); } } ---------------------------------------------------- in this program using jdbc driver, if you...java how to store and retrive images from oracle 10g using jdbc Hi friend, i am sending insert and retrive image code pooling - JDBC Connection pooling what is meant by connectio pooling and how it is achieved using bea weblogic server. can u explain with an example. while i am trying i am getting some exceptions in console and command prompt.   problem in jdbc connection problem in jdbc connection when i am trying to insert into apache derby databse using java in netbeans an exceprion is thrown at run time like this:- java.lang.ClassNotFoundException: org.apache.derby.jdbc.ClientDriver....what jdbc - JDBC jdbc Hi, Could you please tell me ,How can we connect to Sql server through JDBC. Which driver i need to download. Thank You Hi Friend, Please visit the following code: java - JDBC java i want to create a database entering student name and roll...(Color.red)); panel2.setBorder(BorderFactory.createTitledBorder("Enter Your Information...."); lblmsg.setForeground(Color.magenta); } else{ try{ Connection con = null jdbc - JDBC management so i need how i can connect the pgm to database by using jdbc...? if u replyed its very useful for me... Hi, Please read JDBC tutorial at Thanks Hi, You... Tomcat/5.5.26 logs. Apache Tomcat/5.5.26 Hi Friend, Have you installed Jdbc Mysql Connection String JDBC Mysql Connection String  ... you to understand JDBC MysqlConnection String. The code include a class ... public final String connection = "jdbc:mysql://localhost:3306/komal" Displaying Database information on the browser. in the database. In this tutorial, I will show you how to insert the information... the width of the text boxes or anything whatever you need. I just show you the simple...PHP DATABASE Information Part-5(b): Displaying Data (with the help JDBC - JDBC ()), here i want to know Connection and Statement Interfaces methods implementing class. 
Hi friend, Example of JDBC Connection with Statement... database table!"); Connection con = null; String url = "jdbc:mysql jdbc mysqll - JDBC ("com.mysql.jdbc.Driver"); Connection con=DriverManager.getConnection("jdbc:mysql...(); } } } --------------------------------------------------- Visit for more information . mysqll import java.sql.*; public class AllTableName{ public jdbc - JDBC information.... databasemetadata&resultset.(i create a databaseconnection class and servlet class iam getting the connection from databaseconnection class through dbconnection method SQL connection error in android SQL connection error in android hi, i am android developer...:sql:Exception : BUFFERDIR connection property invalid. if you have any answer please send me. Thank you jsp-jdbc - JDBC jsp frequently. I am getting error through request.getParameter(). can you please...;br> <br>Dear Rajanikanth, thank you very much for posting...jsp-jdbc Hi! html- jsp-jdbc program from the html form where JDBC - JDBC JDBC connection to database and show results Check if the database... name varchar(20) slno varchar(5) and go thru this code,i think your... Exception{ Connection conn=null; try{ Class.forName Connections with MicroSoft SQL - JDBC ?? Actually i used below code i got SQLException and class not found... import... with this ... Thank You Iranagouda... static void main(String[] args){ Connection con = null JDBC - JDBC JDBC how can i do jdbc through oracle.. pls if u can send me d... are using oracle oci driver,you have to use: Connection conn... are using oracle thin driver,you have to use: Connection conn jdbc - JDBC information on JDBC-Mysql visit to : I am designing an application to insert table in database... in JSP to create a table. 2)how desc can be written in JDBC concepts   JDBC Insert Preparedstatement want to describe you a code that helps you in understanding JDBC Insert Prepared... and database. prepareStatement ( ) : This method is useful when you want to execute... 
JDBC Insert Preparedstatement   Socket connection - MobileApplications that it will be helpful for you. Thanks I am thank Deepak123 very much!!! you have given me that I need, thank a lot!! PS: If i use Tomcat...Socket connection Hi everyone!! I am a student in Viet Nam jdbc - JDBC jdbc Hi.. i am running the servlet program with jdbc connections in this porgram i used two 'esultset' objects.. in this wat ever coding.... now in this same program i am want to do same thing using second resultset connection connection how to make multiple database connection using jdbc jdbc connection to java program code - JDBC jdbc connection to java program code i want a simple java program that which communicates with oracle database like creating table,insert values... table structure as an example so that i am able to understand logic jdbc - JDBC main(String[]args){ try{ Connection con = null; String url = "jdbc:mysql...(); Connection con = DriverManager.getConnection( "jdbc:mysql://localhost:3306/test... the database... Hi Friend, It seems that you haven't inserted any how can i create a mysql database to connect to this code - JDBC "); Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/register...("com.mysql.jdbc.Driver"); Connection con = DriverManager.getConnection("jdbc..."); Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/register jdbc - JDBC Example!"); Connection con = null; String url = "jdbc:mysql://localhost...(); } } } --------------------------------------------- Visit for more information:...); System.out.println("Columns Name: "); for (int i = 1; i <= col; i Understanding Struts Controller Understanding Struts Controller In this section I will describe you the Controller part of the Struts Framework. I will show you how to configure the struts Connection pool in Tomcat 6 - JDBC Connection pool in Tomcat 6 Hi All, Any one please tell me how to implement connection pooling in Tomcat 6 and MySQL 5.0.1b. Thanks... 
for you. Thanks MS Access - JDBC MS Access hello Mr.Kaleeswaran, thank you very much for giving information about MS access database...but still i am having doubt in that topic... when i execute the code what you have given is giving some errors... the error Understanding the JDBC Architecture Understanding the JDBC Architecture JDBC is an API specification developed by Sun... databases. JDBC is a core part of the Java platform and is included in the standard JDK mysql tables - JDBC _id,emp_name,emp_vertical,emp_supervisor. i need a JDBC program with driver... insert or update or select or what? I gave a sample program you have.... This link will help you. Please visit for more information. http JDBC-Odbc Connection in explaining JDBC Odbc Connection in Java. The code explains you a how creation... JDBC-ODBC Connection JDBC-ODBC Connection is a JDBC driver that translates the operation SQLException:Column not found? (help me:( "; String url="jdbc:odbc:miniprojek"; Statement stmt; Connection con...SQLException:Column not found? (help me:( import javax.swing.*; import java.sql.*; import java.awt.*; import java.awt.event.*; public class Mini2 How is LBS useful? How is LBS useful? LBS is designed to provide valuable information to the users based... when one is traveling such that where am I? What is around me? Where JTable - JDBC "); Connection con; con=DriverManager.getConnection("jdbc:odbc:Mycon1...JTable Hello..... I have Jtable with four rows and columns and i have also entered some records in MsSql database. i want to increase Jtable's Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/11944
CC-MAIN-2015-22
refinedweb
2,633
50.23
Recently, I've been getting more involved in front-end development. The more I do, the more my mind and my soul get lost in its chaotic world. Even a simple to-do-list app can easily require a bunch of tools (ESLint, Babel, Webpack, to name a few) and packages just to get started.

Fortunately, there are many starter kits out there, so we don't have to do everything from the ground up. With them, everything is already set up and we can start writing the first line of code right away. That saves time on repetitive, boring tasks, which can be great for experienced developers. However, this benefit comes with a price for beginners. Since everything works out of the box, it seems like magic, and they might not know what's really happening under the hood, which is important to understand at some level.

Although the learning curve is not as steep as others (try to compare with some tools you've been learning and using, and you'll get what I mean), in this chaotic world we need a survival guide for the journey. This series will cover fundamental tools of front-end development and the essentials we need to know about them. This will allow us to rule the tools instead of being controlled by them. In it, we'll focus on the developer experience of each of these tools. So the goal of this series is to act as a survival guide and to give a high-level overview of each tool, not to serve as documentation.

What will be included:

- ESLint <- We are here
- Babel
- Webpack
- Flow
- TypeScript
- Jest

Enough of a preface, let's get started with the first tool: ESLint.

What is ESLint and why should we use it?

ESLint is, as the name implies, a linter for ECMAScript. And the definition of a linter is: a machine for removing the short fibers from cotton seeds after ginning. Code and cotton seeds don't have much in common, but in either case a linter helps make things cleaner and more consistent.
We don't want to see code like this:

```javascript
const count = 1;
const message = "Hello, ESLint"

count += 1
```

It both looks ugly and has a mistake. Here's where ESLint steps in to help. Instead of letting the error be dumped out to the browser console when we run the code, ESLint will catch it as we're typing (well, not really: we'll need editor or IDE extensions for this, which will be covered later).

Of course, this error isn't difficult to figure out, but wouldn't it be nicer to have an assistant reminding us every time we're about to make a mistake, and perhaps auto-correcting it for us? Although ESLint can't catch all kinds of errors, it at least spares us some effort so we can spend time on other things that matter and need human attention.

How does ESLint work?

Now that we know what ESLint is and why we need it, let's go a bit deeper and check out how it works. In essence, we can break it down into three big steps.

Parser

The code that we write is nothing more than a sequence of characters. However, this sequence isn't just random characters: it needs to follow a set of rules, or conventions, that is the grammar forming a language. For a human, going from reading text or code to understanding it conceptually takes little effort. For a computer, this is much more difficult to accomplish. For example:

```javascript
const tool = 'ESLint' // 1
const tool = "ESLint" // 2
```

As we read the two lines above, we immediately know that they are identical and can be read as "there's a constant named tool with the value of ESLint". For a computer, which doesn't understand the meaning, these two lines look quite different. As a result, if we feed raw code to ESLint, it's nearly impossible to do anything. When things get complicated and hard to communicate (think of how we can get a computer to understand what we do), abstraction can be an escape. By abstracting a thing, we hide all unnecessary details, reduce noise, and keep everyone on the same page, which eases communication.
In the above example, one space or two spaces don't matter, and neither do single quotes or double quotes. In other words, that's what a parser does. It converts raw code to an Abstract Syntax Tree (AST), and this AST is used as the medium for lint rules to work on. There are still many steps a parser needs to do in order to create an AST. If you're interested in learning more about how an AST is generated, this tutorial has a good overview.

Rules

The next step in the process is to run the AST through a list of rules. A rule is the logic for figuring out potential issues in the code from the AST. Issues here aren't necessarily syntactic or semantic errors, but might be stylistic ones as well. The output given out from a rule will include some useful information for later use, like lines of code, positions, and informative messages about the issue.

In addition to catching issues, a rule can even auto-correct code if possible. For example, when no-multi-spaces is applied to the code below, it will trim all redundant spaces, which makes the code look clean and consistent.

```javascript
const tool =    "ESLint" // 2
// becomes
const tool = "ESLint" // 2
```

In different scenarios, a rule can be used at different levels (opted out, warning only, or strict error) and have various options, which gives us control over how to use the rule.

Result

Here comes the end of the process. With the output from a rule, it's just a matter of how we display it in a human-friendly manner, thanks to all the useful information we mentioned earlier. From the result, we can quickly point out the issue, see where it is, and make a fix, or maybe not.

Integration

ESLint can be used as a standalone tool with its robust CLI, but that's a bare-bones way to use it. We don't want to type in a command every time we want to lint code, especially in a development environment.
The solution for this is to integrate ESLint into our development environment so we can write code and see issues caught by ESLint all in one place. This kind of integration comes from extensions specific to IDEs or editors. These extensions require ESLint to work since they run ESLint behind the scenes. No wonder we still need to install ESLint along with them; they are nothing without ESLint. This principle applies to other IDE or editor extensions we use daily.

Remember the output from a rule we talked about above? An extension will use it for display in the IDE or editor. How exactly the output is displayed depends on how the extension is implemented and how open the IDE or editor is to its extensions. Some extensions also take advantage of the issue-correction abilities of rules to change code on save, if we enable that.

Configuration

Configuration is the main power that gives versatility to a tool. ESLint is no different, except that it has one of the most comprehensive configurations among these tools. In general, we need a file or a place to put the configuration, and there are a couple of options for us. All of them boil down to two main ways: either we have a separate configuration file for each tool, or we bundle them all in package.json. .eslintrc.js is one of the files that ESLint will look in for its configuration, and also the one with the highest priority.

The next thing we need to know about configuration is its hierarchy and cascading behavior. Thanks to these features, we don't need a configuration file in every single folder of the project. If a configuration file doesn't exist in a folder, ESLint simply looks in the folder's parent for one, and so on until it can't find one. Then it'll fall back to the user-wide default configuration in ~/.eslintrc. Otherwise, the configuration file will add to or override the ones at upper levels. There is, however, a special tweak on this.
If we specify root: true in a configuration file, the lookup will stop at that file instead of going up as before. Besides, ESLint will use that configuration file as the root configuration, and all child configurations will be based on this one. This is not limited to ESLint; these things are common across other tools. Now let's talk about ESLint-specific configuration.

Parser

The role of the parser in ESLint has been discussed above. By default, ESLint uses Espree as its parser. We can change this parser to another compatible one, like babel-eslint or @typescript-eslint/parser, if we use Babel or TypeScript, respectively. To configure the parser, we use parserOptions. Among the options supported by Espree, here are some we often use and need to pay attention to:

ecmaVersion

We need to specify the ECMA version appropriate to the features we want to use. For example, with ecmaVersion: 5, the code below will give some errors.

```javascript
let a = [1, 2, 3, 4] // error due to `let` keyword
var b = [...a, 5]    // error due to spread syntax
```

The parser can't parse the code because both the let keyword and spread syntax were only introduced in ES6. Changing ecmaVersion to 6 or above will resolve the errors.

sourceType

Nowadays, we mostly write everything in modules, then bundle them together. So this option should be module most of the time. The other value we can use, and the default, is script. The difference is whether we can use JS modules or not, i.e., the import and export keywords. The next time we get the error message Parsing error: 'import' and 'export' may appear only with 'sourceType: module', we know where to look.

ecmaFeatures.jsx

There might be additional features we want to use, for example JSX syntax. We use ecmaFeatures.jsx: true to enable this feature. Note that JSX support in Espree isn't the same as JSX in React. If we want React-specific JSX, we should use eslint-plugin-react for better results.
If we use another parser, these options are more or less the same. Some might have fewer options and others might have more, but they're all defined under parserOptions.

Environment

Depending on where the code is running, there are different predefined global variables. We have window and document in the browser, for example. It would be irritating if the no-undef rule were enabled and ESLint kept telling us window or document is not defined. The env option is here to help. By specifying a list of environments, ESLint will know about the global variables in those environments and let us use them without complaint. There's a special environment to note, es6. It implicitly sets parserOptions.ecmaVersion to 6 and enables all ES6 features except modules, for which we still need to set parserOptions.sourceType: "module" separately.

Plugins and Shareable Configs

Having the same configuration for rules over and over again across different projects can be tiresome. Luckily, we can reuse a configuration and only override rules as needed with extends. We call this type of config a shareable config, and ESLint already has two for us: eslint:recommended and eslint:all. Conventionally, ESLint's shareable configs have the eslint-config prefix, so we can easily find them on NPM with the eslint-config keyword. Among hundreds of results, there are some popular ones, like eslint-config-airbnb or eslint-config-google, you name it.

Out of the box, ESLint has a bunch of rules serving different purposes, from possible errors, best practices, and ES6 to stylistic issues. Moreover, to supercharge its abilities, ESLint has a great number of third-party rules provided by almost a thousand plugins. Similar to shareable configs, ESLint's plugins are prefixed with eslint-plugin and are available on NPM with the eslint-plugin keyword. A plugin defines a set of new rules, and in most cases it exposes its own handy configs.
For example, eslint-plugin-react gives us two shareable configs, eslint-plugin-react:recommended and eslint-plugin-react:all, just like eslint:recommended and eslint:all. To use one of them, we first need to define the plugin name, and second, extend the config.

```javascript
{
  plugins: ["react"],
  extends: "plugin:react/recommended"
}
// Note that we need to prefix the config with `plugin:react`
```

One common question to ask is which plugins or configs to use. While it largely depends on our needs, we can use Awesome ESLint as a reference to find useful plugins as well as configs.

Prettier

We're almost there; we've almost gotten to the end. Last but not least, we'll discuss a popular companion of ESLint: Prettier. In short, Prettier is an opinionated code formatter. Though Prettier can be used alone, integrating it into ESLint enhances the experience a lot, and eslint-plugin-prettier does this job. The difference between using Prettier alone and using Prettier with ESLint can be summarized as treating code formatting as lint issues. Instead of reporting format issues separately, running Prettier through ESLint treats format issues just like other issues. However, these issues are always fixable, and fixing them is equivalent to formatting the code.

That's how eslint-plugin-prettier works. It runs Prettier as a rule behind the scenes and compares the code before and after being run through Prettier. Finally, it reports the differences as individual ESLint issues. To fix these issues, the plugin simply uses the formatted code from Prettier. To have this integration, we need to install both prettier and eslint-plugin-prettier. eslint-plugin-prettier also comes with the eslint-plugin-prettier:recommended config, which extends eslint-config-prettier. Therefore we also need to install eslint-config-prettier to use it.
```javascript
{
  "plugins": ["prettier"],
  "extends": "plugin:prettier/recommended"
}
```

Conclusion

Code linters or formatters have become the de facto standard in software development in general, and ESLint, specifically, in front-end development. Its benefits go far beyond what it does technically, as it helps developers focus on more important matters. Thanks to delegating code styling to a machine, we can avoid opinionated styles on code review, and use that time instead for more meaningful code review. Code quality also benefits, and we get more consistent and less error-prone code.

This article was originally posted at my blog.
Microcontroller Programming » Master/Slave I2C

Here is an example of master/slave communication between the nerdkit and another ATmega using the I2C interface. Similar to the master/slave SPI, in this example the master also sends a number to the slave, the slave increments the number and sends the result back to the master. Using TWI.c and TWI.h, the I2C interface is implemented using interrupts, so they must be enabled in both the master and slave for things to work. Slave processing actually occurs in an interrupt callback function when a data packet is received from the master. The slave modifies the buffer and then the master reads back the buffer. Version V1.2 of TWI.c and TWI.h can be found in the Access Serial EEPROM using I2C thread near the bottom.

//
// master_I2C.c - load on nerdkit with LCD
// note: I2C and ATmel TWI (Two Wire Interface) are
// the same thing.
//
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <util/delay.h>
#include <stdbool.h>
#include <stdlib.h>
#include <ctype.h>

#include "../libnerdkits/lcd.h"
#include "TWI.h"

// define ports, pins
// wire master SCL (PC5) to slave SCL (PC5)
// wire master SDA (PC4) to slave SDA (PC4)
// put a 4.7k external pull-up resistor on SCL and another on SDA
// wire master PC1 to slave reset (PC6)
#define SLAVE_RESET C,1

// LCD stream file - enable printf in functions outside of main()
FILE lcd_stream;

// settings for I2C
uint8_t I2C_buffer[sizeof(int)];
#define I2C_SLAVE_ADDRESS 0x10
void handle_I2C_error(volatile uint8_t TWI_match_addr, uint8_t status);

// --------------------------------------------------------------------------------------------------------
int main() {
    // initialize LCD display
    lcd_init();
    fdev_setup_stream(&lcd_stream, lcd_putchar, 0, _FDEV_SETUP_WRITE);
    lcd_clear_and_home();
    fprintf_P(&lcd_stream, PSTR("master_I2C"));

    // Specify startup parameters for the TWI/I2C driver
    TWI_init(
        F_CPU,              // clock frequency
        300000L,            // desired TWI/I2C bitrate
        I2C_buffer,         // pointer to comm buffer
        sizeof(I2C_buffer), // size of comm buffer
        0                   // optional pointer to callback function
    );

    // Enable interrupts
    sei();

    // reset the slave for a clean start
    OUTPUT(SLAVE_RESET);
    CLEAR(SLAVE_RESET);
    _delay_ms(500);
    SET(SLAVE_RESET);
    _delay_ms(500);

    // send 100 test bytes
    int i;
    for(i=1;i<100;i++){
        // set value into buffer
        *(int*)I2C_buffer=i;

        // transmit
        TWI_master_start_write_then_read(
            I2C_SLAVE_ADDRESS,  // slave device address
            sizeof(I2C_buffer), // number of bytes to write
            sizeof(I2C_buffer)  // number of bytes to read
        );

        // wait for completion
        while(TWI_busy);

        // if error, notify and quit
        if(TWI_error){
            lcd_goto_position(2,0);
            fprintf_P(&lcd_stream, PSTR("TWI error at %d"), i);
            break;
        }

        // check result
        lcd_goto_position(1,0);
        if(*(int*)I2C_buffer==(i+1))
            fprintf_P(&lcd_stream, PSTR("%d OK"),i);
        else {
            fprintf_P(&lcd_stream, PSTR("Error at byte %d"), i);
            lcd_goto_position(2,0);
            fprintf_P(&lcd_stream, PSTR("expected %d, got %d"), i, *(int*)I2C_buffer);
            break;
        }
    }

    // done
    while(true);
}
// --------------------------------------------------------------------------------------------------------

// ===============================================================================
//
// slave_I2C.c
//
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <util/delay.h>
#include <stdbool.h>
#include <stdlib.h>
#include <ctype.h>

#include "TWI.h"

// settings for I2C
uint8_t I2C_buffer[25];
#define I2C_SLAVE_ADDRESS 0x10
void handle_I2C_interrupt(volatile uint8_t TWI_match_addr, uint8_t status);

// --------------------------------------------------------------------------------------------------------
int main() {
    // Initialize I2C
    TWI_init(
        F_CPU,                // clock frequency
        100000L,              // desired TWI/I2C bitrate
        I2C_buffer,           // pointer to comm buffer
        sizeof(I2C_buffer),   // size of comm buffer
        &handle_I2C_interrupt // pointer to callback function
    );

    // Enable interrupts
    sei();

    // give our slave address and enable I2C
    TWI_enable_slave_mode(
        I2C_SLAVE_ADDRESS, // device address of slave
        0,                 // slave address mask
        0                  // enable general call
    );

    // received data is processed in the callback
    // nothing else to do here
    while(true){
    }
}
// --------------------------------------------------------------------------------------------------------
//
void handle_I2C_interrupt(volatile uint8_t TWI_match_addr, uint8_t status){
    if(status==TWI_success){
        // increment the integer in the buffer
        // and it will be returned during the read cycle
        (*(int*)I2C_buffer)++;
    }
}

So hey Paul, have you updated this code or made any changes? Ralph

I don't remember making any updates. Haven't looked at these demo programs for a couple of years but I still use the TWI code. Lately I've been working on a project that uses a DS3232 RTC to time stamp measurement data as it is logged to eeprom. Both the DS3232 and eeprom use I2C and both are working fine with the TWI library code. Why do you ask? Have you been using the TWI code?

I have a project coming up that I am picturing using the TWI Master/Slave. I will be running a temperature process (filament extruding) with a stepper motor. I am thinking of running the stepper off the slave, freeing the master to monitor temperature and control. I sure appreciate the work you put into the I2C projects; it really helped me.

I'm glad you found it helpful. I like master/slave projects but it is extra work, and so far I've been doing ok with interrupt-driven programs that are able to use all available cycles. I don't use delay.h functions (don't even include delay.h) but use the built-in timers instead. I also like to run 18.432MHz because it's the fastest crystal for the mega168/328 that is still baud rate friendly.

ha, I actually understand I2C better than interrupts. It would be interesting to do a stepper driver using interrupts.
Sounds like you have an excellent opportunity to work more with interrupts. It will get easier with practice just like I2C did. How easy or difficult is it to implement this if the master is a RaspberryPi using Python? I'm working on developing a flight computer for research and need to be able to read PWM signals from an RC receiver, manipulate them, and then send them to the servos. I've posted on the Adafruit forum about the RaspberryPi and PWM and it appears that it is not able to read or really produce a PWM signal (at least not at the speeds I need). A user suggested using one micro controller to read the PWM and another micro controller to send PWM. I immediately thought of the nerd kit. The code posted here appears to be exactly what I need, but since my programming is rather limited, I was wondering how to implement this if the master is using Python (or should I stick with C?). Would it be possible to just add the PWM code into the above slave code and then read accordingly from the RaspberryPi? Thanks! Yes, it should be about that easy. The slave doesn't care where or what the master is as long as the signals on the wire are correct. I've working with the slave code, starting at a basic level, and am having some problems. I used the code as is, but added a I2C_buffer[0]=3; inside the while loop. In Python I have value=bus.read_byte(addr) and it gives back a value of 2. If I change the 3 to a 5, it gives back zero. What is the structure of the I2C_buffer/how should data be stored to it? Is the simple read_byte enough or should read_byte_data be used along with the position in the array? lsoltmann- In Noter's code, he declared the buffer thus: uint8_t I2C_buffer[25]; That means it's a byte (unsigned char) buffer, 25 characters in size (consecutive). Therefore, the following simply puts 0x03 in the first byte of that array space: BM I guess my question would be how do I read from the I2C_buffer? 
I set I2C_buffer[0]=3 so I know the value and to see if I could read that value in Python on a different device. I tried setting a few more of the array locations to known values and tried reading them in Python using bus.read_byte(addr) and bus.read_byte_data(addr,reg), where reg is the location in the I2C_buffer array. Both don't give the expected value back.

My first questions would be:

1 - Where did you get the Python program to run I2C?
2 - Have you used this Python program to successfully communicate with 'other' I2C devices?

For me that would be the first part. Try to validate that the Python program is working with other devices. We already know that "Noter's Program" is good. It does seem like you are more comfortable with a "Computer" though, so that is why I would start there. Get your "Master I2C Implementation" to work and then proceed to communicating with the AVR microcontroller. Without that you basically have a "2 Term Equation with 2 unknowns". ALWAYS, much harder to solve.

The Python script I am using to read the MCU is one that I wrote for a calibration of some I2C sensors, so I know it works. I'm running the script on a Raspberry Pi and I've gutted it to just the necessities:

import smbus
import time

bus = smbus.SMBus(1)
address = 0x27
reg = 2

while True:
    number1 = bus.read_byte(address)
    number2 = bus.read_byte_data(address, reg)
    print number1
    print number2
    time.sleep(1)

I went back into the I2C_slave program and, inside the while(true) loop, added

I2C_buffer[0]=12;
I2C_buffer[1]=34;
I2C_buffer[2]=56;
I2C_buffer[3]=78;
I2C_buffer[4]=90;

to see if I could read back any of the values. read_byte appears to just read back whatever was read previously, and read_byte_data gives back a value equal to reg+1 (I tried multiple different values for reg). Are the array locations in I2C_buffer equivalent to registers?

lsoltmann-- Generally speaking, when we talk about buffers in any programming language, we are talking about a block of memory.
Usually a consecutive block of elements of a given size. The address is normally the physical memory location of any one of the elements. A single-dimension array is nothing more than a consecutive block of RAM of the specified type. For example, each byte/char/uint8_t is exactly 8 bits in length. String those together consecutively and you have a buffer that has an address at the start and for each byte location in RAM. I2C_buffer looks like this in RAM. I'm just using location 0x00 as a starting point, but it would be wherever the linker locates it based on the makefile setup and architecture.

0x0000  00 00 00 00 00 00 00 00
0x0008  00 00 00 00 00 00 00 00
0x0010  00 00 00 00 00 00 00 00
0x0018  00

When you say:

I2C_buffer[0] = 12;

what you're saying is in reality:

0x0000  0C 00 00 00 00 00 00 00
0x0008  00 00 00 00 00 00 00 00
0x0010  00 00 00 00 00 00 00 00
0x0018  00

See? You stuffed a 12 (0x0C) into location 0, at address 0x0000. The variable named I2C_buffer contains a single value: an address, 0x0000 in my example above. That makes it a pointer. So when you access an array, you are indirectly accessing an offset index added to that base address. In my example, if you asked for the address of the 20th location in the I2C buffer:

addr = &I2C_buffer[20]; // 0x0000 + 0x14

'addr' would be a variable with a value of 0x0014, which would be the location marked with asterisks, below:

0x0000  00 00 00 00 00 00 00 00
0x0008  00 00 00 00 00 00 00 00
0x0010  00 00 00 00 ** 00 00 00
0x0018  00

Hope that clarifies it. Sorry for typos, just banged this out on my way out the door.

BM - I think there is a problem with your last post. C arrays/buffers are always referenced relative to "0". So the 1st location of an array is an index of "0", which is currently set to "0x0C". The "20th location" would be "0x13", which is currently set to "0x00". The "21st location" would be "0x14", which is the location marked with asterisks.
JimFrederickson-- You are correct-- when I counted it, I started with zero... It didn't look quite right, but as I said I was rushing out the door.... :: hand to face :: :) Good catch.

BobaMosfet, thanks for the enlightenment on the buffers. What you said makes a lot of sense and I'm trying to implement it into the code. The ultimate goal of what I'm trying to do is measure a pulse length (which is proportional to RPM) and send it to a Raspberry Pi over I2C. The pulse length code I have working (thanks to Icarus,). Now I'm trying to send it over I2C to the Pi. The pulse length values range from 5000 usec to 200 usec. Let's say the pulse length is 1226 usec (defined as a uint32_t); since Noter's code has the buffer as a uint8_t, the 1226 is divided up into two bytes, 202 and 4, and stored in the buffer. For simplicity, I hard-coded

I2C_buffer[0]=202;
I2C_buffer[1]=4;

which, based on what you said, would look like this in memory:

0x0000  CA 04 00 00 00 00 00 00
0x0008  00 00 00 00 00 00 00 00
0x0010  00 00 00 00 00 00 00 00
0x0018  00

On the Pi I can see the MCU at an address of 0x27, which is what I set it at. Using the wiringPi I2C library (), shouldn't the commands

int setup = wiringPiI2CSetup(0x27);
wiringPiI2CReadReg8(setup, 0);
wiringPiI2CReadReg8(setup, 1);

return the values located at I2C_buffer[0] and I2C_buffer[1] respectively? Based on some I2C tutorials () the "register" value is the internal address, which I thought was 0 and 1 (at least for the places I stored data). Also, where should the "data manipulation" take place, inside the while(true) loop in the main function or inside the handle_I2C_interrupt function? Thanks for all the help so far!

Isoltmann- If your buffer is a uint32_t, that is 4 bytes in length (32 bits), not 2.
Would look more like this: 1226 = 0x000004CA. If the MSB (most significant byte) is on the left in RAM it would appear thus:

0x0000  00 00 04 CA 00 00 00 00
0x0008  00 00 00 00 00 00 00 00
0x0010  00 00 00 00 00 00 00 00
0x0018  00

Because we are dealing with a memory block, you can tell the compiler to address it in any format you wish. Noter's code specifies it as an array of bytes-- that nomenclature is FOR THE COMPILER ONLY. Not for the CPU/MCU. All that does is tell the linker what size the offsets are for the base pointer of the memory block. You could address his memory block as an array of unsigned longs (which is what a uint32 is), simply by adjusting how you address it by using type coercion (now called 'casting'). Casting is an incorrect word developed by a newer generation who didn't understand what was actually being done. Coercion is accurate because you are coercing or forcing a type to behave differently than what it was defined-- because you know what you are doing.

uint8_t I2C_buffer[25];
uint32_t *myLongPtr;
uint32_t aULong;

myLongPtr = (uint32_t*)I2C_buffer;
aULong = myLongPtr[0]; // will return 0x000004CA

Now regarding the I2C communications, the Raspberry Pi link you sent makes me believe it wants to be the master, and your MCU would be the slave device. The 0x27 is supposed to be returned by the system as a unix file handle where it 'sees' an I2C device, and then you hand that value to the 'WiringPiI2C' library. In which case, you need to initialize the MCU as a slave device for your I2C protocol, give it an address (like '1') and go from there. Any location in RAM can be called a 'register' if you regularly use that location to handle a specific value. A register can be a byte, 2 bytes, 4, etc. When you call 'wiringPiI2CReadReg8(setup,0);' the Raspberry Pi expects the underlying I2C code to be valid on both ends, and the wiring correct.
It will try to issue a command to the specified slave device, and will expect that slave device to return a value to it from that specified 'register' within the slave device. Raspberry Pi neither knows nor cares about anything on the MCU. All it expects is that the device understands I2C and responds accordingly. I would study the datasheet for the ATMEGA chip you are using; they have an I2C section which (usually) provides an example and a flow example also, so you can get a better understanding of the required interaction between the two devices. Raspberry Pi obfuscates all this stuff under the hood that you now have to learn in order to make it work on the embedded platform.

Casting is conversion of type only, underlying data not converted. Coercion is conversion of type as well as data by the compiler. Look it up, they are not synonymous.

Noter- Welcome back :) Actually, if you look at the underlying disassembly (which is what we're really talking about), they are the same. Casting/Coercion is nothing more than telling the linker how to address the object in question in a way other than that which it may have been typedef'd. In binary, there is no concept of 'type'-- just an address and an offset.

By definition they are not the same. Most people are confused by the two because their syntax is the same and both are commonly referenced as casting. If you are looking at a cast of integers and structures then only simple type conversion is required and can lead one to believe it is all the same. However if you look at the assembly when casting (converting) to/from a floating point variable in a math operation you will see the compiler actually performs coercion for you. It's about the compiler and how it produces the 'binary' rather than the 'binary' itself.
Noter brings up a good point, but although he and I like to reduce things to black and white in our own worlds, many people find a gray area here, and I think some of that is because there are other issues involved. It may help a newcomer to consider the context of implicit vs. explicit. For example, an implicit conversion is normally where the compiler performs extra work because the programmer didn't explicitly define the conversion desired, as in the case of using a char interchangeably with an int, as many people do. I can reference publications where it's described that a programmer can "...use a cast to coerce the type of a given expression into another type." So, perhaps it's more accurate to say coercion is performed by casting? It isn't different so much as it is the act of performing an explicit coercion. Perhaps this school of thought is the source of why people use the terms interchangeably. For me personally, I was taught it was 'coercion' when I began coding in C in the 70's. Before that I was doing a lot of assembly language, so I was already thinking about how things worked below the compiler level.

As Noter mentioned the float, I wanted to be sure we addressed it. A 'float' is a special case, because it has a representation factor that relates to scale and magnitude, unlike any other type. Floats require extra steps by the compiler to alter that representation, showing in the disassembly (below). Here is an example with me explicitly casting/coercing, and then without-- the compiler made the right choice and the end assembly is identical either way, even though 'aULong' is declared as an 'unsigned long' and 'floatNum' is declared as a 'float'.
You can see the extra 2 calls into the float library code for the representation conversion and then the equality testing. Note that 'floatNum' is declared as a float and then initialized in main() to this value:

floatNum = 3.1415926536;

Now for the disassembly. The two statements compared were:

if(aULong == floatNum)
if((float)aULong == floatNum)

[the listing with the two float-library calls did not survive the forum formatting]

I use AVR Studio for an ATMEGA168, as that was convenient, and familiar to most everyone here. I also did a bad cast, trying to misuse the float, and you can see that the compiler takes my word for it and simply does a subtract, compare and appropriate branch:

if(aULong == (unsigned long)floatNum)
  bc: 03 97        sbiw r24, 0x03  ; 3
  be: a1 05        cpc  r26, r1
  c0: b1 05        cpc  r27, r1
  c2: 29 f4        brne .+10

And here, people can see what happens with an implicit conversion between an unsigned long taking the value of a float (according to the C standard it truncates):

aULong = floatNum;
  a8: 83 e0        ldi  r24, 0x03  ; 3
  aa: 90 e0        ldi  r25, 0x00  ; 0
  ac: a0 e0        ldi  r26, 0x00  ; 0
  ae: b0 e0        ldi  r27, 0x00  ; 0

Hope this helps everyone. At the end of the day, controlling the binary is what you're after, and the compiler is nothing more than a tool, or layer between you and the binary. It's no different than talking to someone else-- you master your larynx and the language to allow you to communicate efficiently and effectively. The float is a good example of that above. You can successfully coerce a float to an unsigned long, without allowing the compiler to do anything with the representation. But why would this be useful? Well, just as a real-world example, you might find it worthwhile to be able to store a float in 4 bytes (its raw representation) in an EEPROM, for retrieval later. Such as an in-the-field sensor device recording relative humidity. If you store it as an ASCII representation, it might take 12 bytes, but the raw data only requires 4 to store the same precision.
Knowing how things really work, and what you can really do with the tools and objects you work with, is what allows you to become not just a great developer, but one of the few who know it as a black art and can craft things in ways the majority of developers will never achieve. Let your imagination and knowledge guide you.

In your example above the AVR cross compiler is failing to convert the float to long, which is a bug in the compiler. Not that it matters much in the world of AVR, but try the same test with the gcc compiler and you will see coercion performed every time as expected.

float f=3.14;
unsigned long l=3;
void main(){
    if(l==f) l=0;
    if((float)l==f) l=0;
    if(l==(long)f) l=0;
    if(f==l) l=0;
    if((long)f==l) l=0;
    if(f==(float)l) l=0;
    f=l;
    f=(float)l;
    l=f;
    l=(unsigned long)f;
}

Compiled with "gcc -c -g -O0 -Wa,-a,-ad x.c > x.lst".

GAS LISTING /tmp/ccXJV0e3.s page 1

  1            .file "x.c"
  2            .text
  3            .Ltext0:
  4            .globl f
  5            .data
  6            .align 4
  9            f:
 10 0000 C3F54840   .long 1078523331
 11            .globl l
 12 0004 00000000   .align 8
 15            l:
 16 0008 03000000   .quad 3
 16      00000000
 17            .text
 18            .globl main
 20            main:
 21            .LFB0:
 22            .file 1 "x.c"
  1:x.c **** float f=3.14;
  2:x.c **** unsigned long l=3;
  3:x.c **** void main(){
 23            .loc 1 3 0
 24            .cfi_startproc
 25 0000 55         pushq %rbp
 26            .cfi_def_cfa_offset 16
 27            .cfi_offset 6, -16
 28 0001 4889E5     movq %rsp, %rbp
 29            .cfi_def_cfa_register 6
  4:x.c **** if(l==f) l=0;
 30            .loc 1 4 0
 31 0004 488B0500   movq l(%rip), %rax
 31      000000
 32 000b 4885C0     testq %rax, %rax
 33 000e 7807       js .L2
 34 0010 F3480F2A   cvtsi2ssq %rax, %xmm0
 34      C0
 35 0015 EB15       jmp .L3
 36            .L2:
 37 0017 4889C2     movq %rax, %rdx
 38 001a 48D1EA     shrq %rdx
 39 001d 83E001     andl $1, %eax
 40 0020 4809C2     orq %rax, %rdx
 41 0023 F3480F2A   cvtsi2ssq %rdx, %xmm0
 41      C2
 42 0028 F30F58C0   addss %xmm0, %xmm0
 43            .L3:
 44 002c F30F100D   movss f(%rip), %xmm1
 44      00000000
 45 0034 0F2EC1     ucomiss %xmm1, %xmm0
 46 0037 7A10       jp .L4
 47 0039 0F2EC1     ucomiss %xmm1, %xmm0
 48 003c 750B       jne .L4
 49            .loc 1 4 0 is_stmt 0 discriminator 1
 50 003e
48C70500 movq $0, l(%rip) 50 00000000 50 000000 51 .L4: GAS LISTING /tmp/ccXJV0e3.s page 2 5:x.c **** if((float)l==f) l=0; 52 .loc 1 5 0 is_stmt 1 53 0049 488B0500 movq l(%rip), %rax 53 000000 54 0050 4885C0 testq %rax, %rax 55 0053 7807 js .L6 56 0055 F3480F2A cvtsi2ssq %rax, %xmm0 56 C0 57 005a EB15 jmp .L7 58 .L6: 59 005c 4889C2 movq %rax, %rdx 60 005f 48D1EA shrq %rdx 61 0062 83E001 andl $1, %eax 62 0065 4809C2 orq %rax, %rdx 63 0068 F3480F2A cvtsi2ssq %rdx, %xmm0 63 C2 64 006d F30F58C0 addss %xmm0, %xmm0 65 .L7: 66 0071 F30F100D movss f(%rip), %xmm1 66 00000000 67 0079 0F2EC1 ucomiss %xmm1, %xmm0 68 007c 7A10 jp .L8 69 007e 0F2EC1 ucomiss %xmm1, %xmm0 70 0081 750B jne .L8 71 .loc 1 5 0 is_stmt 0 discriminator 1 72 0083 48C70500 movq $0, l(%rip) 72 00000000 72 000000 73 .L8: 6:x.c **** if(l==(long)f) l=0; 74 .loc 1 6 0 is_stmt 1 75 008e F30F1005 movss f(%rip), %xmm0 75 00000000 76 0096 F3480F2C cvttss2siq %xmm0, %rax 76 C0 77 009b 4889C2 movq %rax, %rdx 78 009e 488B0500 movq l(%rip), %rax 78 000000 79 00a5 4839C2 cmpq %rax, %rdx 80 00a8 750B jne .L10 81 .loc 1 6 0 is_stmt 0 discriminator 1 82 00aa 48C70500 movq $0, l(%rip) 82 00000000 82 000000 83 .L10: 7:x.c **** if(f==l) l=0; 84 .loc 1 7 0 is_stmt 1 85 00b5 488B0500 movq l(%rip), %rax 85 000000 86 00bc 4885C0 testq %rax, %rax 87 00bf 7807 js .L11 88 00c1 F3480F2A cvtsi2ssq %rax, %xmm0 88 C0 89 00c6 EB15 jmp .L12 90 .L11: 91 00c8 4889C2 movq %rax, %rdx 92 00cb 48D1EA shrq %rdx GAS LISTING /tmp/ccXJV0e3.s page 3 93 00ce 83E001 andl $1, %eax 94 00d1 4809C2 orq %rax, %rdx 95 00d4 F3480F2A cvtsi2ssq %rdx, %xmm0 95 C2 96 00d9 F30F58C0 addss %xmm0, %xmm0 97 .L12: 98 00dd F30F100D movss f(%rip), %xmm1 98 00000000 99 00e5 0F2EC1 ucomiss %xmm1, %xmm0 100 00e8 7A10 jp .L13 101 00ea 0F2EC1 ucomiss %xmm1, %xmm0 102 00ed 750B jne .L13 103 .loc 1 7 0 is_stmt 0 discriminator 1 104 00ef 48C70500 movq $0, l(%rip) 104 00000000 104 000000 105 .L13: 8:x.c **** if((long)f==l) l=0; 106 .loc 1 8 0 is_stmt 1 107 00fa F30F1005 movss 
f(%rip), %xmm0 107 00000000 108 0102 F3480F2C cvttss2siq %xmm0, %rax 108 C0 109 0107 4889C2 movq %rax, %rdx 110 010a 488B0500 movq l(%rip), %rax 110 000000 111 0111 4839C2 cmpq %rax, %rdx 112 0114 750B jne .L15 113 .loc 1 8 0 is_stmt 0 discriminator 1 114 0116 48C70500 movq $0, l(%rip) 114 00000000 114 000000 115 .L15: 9:x.c **** if(f==(float)l) l=0; 116 .loc 1 9 0 is_stmt 1 117 0121 488B0500 movq l(%rip), %rax 117 000000 118 0128 4885C0 testq %rax, %rax 119 012b 7807 js .L16 120 012d F3480F2A cvtsi2ssq %rax, %xmm0 120 C0 121 0132 EB15 jmp .L17 122 .L16: 123 0134 4889C2 movq %rax, %rdx 124 0137 48D1EA shrq %rdx 125 013a 83E001 andl $1, %eax 126 013d 4809C2 orq %rax, %rdx 127 0140 F3480F2A cvtsi2ssq %rdx, %xmm0 127 C2 128 0145 F30F58C0 addss %xmm0, %xmm0 129 .L17: 130 0149 F30F100D movss f(%rip), %xmm1 130 00000000 131 0151 0F2EC1 ucomiss %xmm1, %xmm0 132 0154 7A10 jp .L18 133 0156 0F2EC1 ucomiss %xmm1, %xmm0 134 0159 750B jne .L18 GAS LISTING /tmp/ccXJV0e3.s page 4 135 .loc 1 9 0 is_stmt 0 discriminator 1 136 015b 48C70500 movq $0, l(%rip) 136 00000000 136 000000 137 .L18: 10:x.c **** f=l; 138 .loc 1 10 0 is_stmt 1 139 0166 488B0500 movq l(%rip), %rax 139 000000 140 016d 4885C0 testq %rax, %rax 141 0170 7807 js .L20 142 0172 F3480F2A cvtsi2ssq %rax, %xmm0 142 C0 143 0177 EB15 jmp .L21 144 .L20: 145 0179 4889C2 movq %rax, %rdx 146 017c 48D1EA shrq %rdx 147 017f 83E001 andl $1, %eax 148 0182 4809C2 orq %rax, %rdx 149 0185 F3480F2A cvtsi2ssq %rdx, %xmm0 149 C2 150 018a F30F58C0 addss %xmm0, %xmm0 151 .L21: 152 018e F30F1105 movss %xmm0, f(%rip) 152 00000000 11:x.c **** f=(float)l; 153 .loc 1 11 0 154 0196 488B0500 movq l(%rip), %rax 154 000000 155 019d 4885C0 testq %rax, %rax 156 01a0 7807 js .L22 157 01a2 F3480F2A cvtsi2ssq %rax, %xmm0 157 C0 158 01a7 EB15 jmp .L23 159 .L22: 160 01a9 4889C2 movq %rax, %rdx 161 01ac 48D1EA shrq %rdx 162 01af 83E001 andl $1, %eax 163 01b2 4809C2 orq %rax, %rdx 164 01b5 F3480F2A cvtsi2ssq %rdx, %xmm0 164 C2 165 01ba F30F58C0 addss 
%xmm0, %xmm0 166 .L23: 167 01be F30F1105 movss %xmm0, f(%rip) 167 00000000 12:x.c **** l=f; 168 .loc 1 12 0 169 01c6 F30F1005 movss f(%rip), %xmm0 169 00000000 170 01ce 0F2E0500 ucomiss .LC0(%rip), %xmm0 170 000000 171 01d5 7307 jae .L24 172 01d7 F3480F2C cvttss2siq %xmm0, %rax 172 C0 173 01dc EB1E jmp .L25 174 .L24: 175 01de F30F100D movss .LC0(%rip), %xmm1 GAS LISTING /tmp/ccXJV0e3.s page 5 175 00000000 176 01e6 F30F5CC1 subss %xmm1, %xmm0 177 01ea F3480F2C cvttss2siq %xmm0, %rax 177 C0 178 01ef 48BA0000 movabsq $-9223372036854775808, %rdx 178 00000000 178 0080 179 01f9 4831D0 xorq %rdx, %rax 180 .L25: 181 01fc 48890500 movq %rax, l(%rip) 181 000000 13:x.c **** l=(unsigned long)f; 182 .loc 1 13 0 183 0203 F30F1005 movss f(%rip), %xmm0 183 00000000 184 020b 0F2E0500 ucomiss .LC0(%rip), %xmm0 184 000000 185 0212 7307 jae .L26 186 0214 F3480F2C cvttss2siq %xmm0, %rax 186 C0 187 0219 EB1E jmp .L27 188 .L26: 189 021b F30F100D movss .LC0(%rip), %xmm1 189 00000000 190 0223 F30F5CC1 subss %xmm1, %xmm0 191 0227 F3480F2C cvttss2siq %xmm0, %rax 191 C0 192 022c 48BA0000 movabsq $-9223372036854775808, %rdx 192 00000000 192 0080 193 0236 4831D0 xorq %rdx, %rax 194 .L27: 195 0239 48890500 movq %rax, l(%rip) 195 000000 14:x.c **** } 196 .loc 1 14 0 197 0240 5D popq %rbp 198 .cfi_def_cfa 7, 8 199 0241 C3 ret 200 .cfi_endproc 201 .LFE0: 203 .section .rodata 204 .align 4 205 .LC0: 206 0000 0000005F .long 1593835520 207 .text 208 .Letext0: GAS LISTING /tmp/ccXJV0e3.s page 6 DEFINED SYMBOLS *ABS*:0000000000000000 x.c /tmp/ccXJV0e3.s:9 .data:0000000000000000 f /tmp/ccXJV0e3.s:15 .data:0000000000000008 l /tmp/ccXJV0e3.s:20 .text:0000000000000000 main NO UNDEFINED SYMBOLS It occured to me this morning that maybe you encountered a bug because like most nerdkit users, you have an old version of avrgcc. I ran the same test again with avrgcc v3.4.3 and coercion was performed every time, the float was converted to unsigned long. It's a good idea to stay current with all your compilers. 
Noter,

In the example above, I'm using GNU GCC with the WinAVR stub. The stub doesn't interfere with how the GCC compiler deals with coercion. It's not a bug; it's compiler 'implementation dependent', as this is not specified in the standard. This is one of the reasons it's important to understand your tools and what they generate. All you're showing is that your standard float library, that came with that compiler for the processor you're showing code for, is enforcing a type coercion in the examples you've given. And even so, I could still force that compiler to not do it with careful crafting of C. I don't allow a compiler to limit my capabilities-- it does what I want it to do because _I_ am the architect of my logic, not it. It cannot, nor will it ever, understand what I am trying to achieve. As the ATMEGA code is what is pertinent to most folks here, I'm sticking with that in this forum.

Thanks Paul and BM, I sure love your "discussions". It is interesting that most of what BM says is how I learnt C. I spent a lot of time in the old C developer newsgroups and even had some of my questions answered by Ritchie himself. Of course I never became a programmer but have had some rich exposure.

Ralph,

Thanks. I think it's important for people to bear in mind that they are discussions. Noter and I view things differently, and have different experiences and knowledge, but his knowledge is no less valuable or competent than mine. We have both done many, many things over the years and it works for each of us. The original thread gets derailed a bit, but at the same time, by expounding upon our differing views and exploring them, people in the forums who know less get exposed to more useful information. The truth is, most of the really valuable information about programming is no longer available to most people wanting to learn. Computer sections in bookstores are so small now-- they used to be shelves and shelves. Most of the books in them have little useful information.
Manufacturers used to give away reference manuals for their processors just to get people to consider their chip. There were dozens of magazines with all sorts of things (Byte, Dr. Dobb's Journal, etc...).

BM,

Interesting that the AVR Studio compiler truncates rather than converting from float to long like the other gcc compilers. I won't be testing that one since I haven't a single windows box left after moving all of them over to linux. Anyway, I hope we can agree that casting and coercion are not synonymous. Casting is conversion of type only, while coercion is conversion of type and underlying data. May not match what you learned back in the stone age but now's always a good time to catch up. lol - I should know, I'm probably way older than you anyway.

If you want to be a programmer then all you have to do is program, program a lot. It takes lots of practice. Stick with a language until you get it by writing at least a handful of reasonably complex programs. I've been writing a gui program in python this week that controls an arduino over serial usb. First time I've really gotten into python and I like it. I've always liked developing with interpreters compared to compiled languages, but no matter the language, it's always rewarding and fun to get a program working.

Reviving this old thread to see if I can get some guidance. I am playing with 2 nerdkits to understand TWI/I2C. I uploaded Noter's files intact; the only difference in the master is that since I don't have an LCD at the moment, I commented out all LCD lines. Instead I am using the UART to display the messages to my laptop. I am getting "Error at byte 1 expected 1, got 1". This makes no sense to me. If it is expecting 1 and the slave sends 1, isn't it supposed to request the next number instead of breaking?

escartiz- TWI/I2C (hereafter just I2C) is relatively straightforward to use.
All you have to do is the initial calculations to get the speed right (set TWBR and so on, based on appropriate calculations for your MCU speed). Once that's done, you need to know the address of the slave; usually there is an initial default value (like 0xA0, for example; yours may differ). You need a buffer, like 60 bytes (depends on the size of the largest xmit/rcv data you might handle). Beyond that, you have 13 TWSR responses to deal with, 9 TWI commands, and then monitoring/setting a few flags and/or registers and the Hi/Lo relationship between SDA and SCL. Pay particular attention to the state each of these is in, because usually if the slave doesn't understand something, or didn't get an ACK from the MASTER, it will hold SCL low (to let the master know it's still processing). Even without a scope, you can check this with a logic probe.

I recommend, initially, you simply work with I2C in the simple form, just a master and a slave, until you have that mastered, then add the rest of the logic to handle multiple, simultaneous I2C conversations. The datasheet has a bare-metal working example you can base your code off of. I wouldn't waste time jumping into someone else's code until after you've read the datasheet and have your mind around I2C to some extent-- then if you look at other code, you'll have a better idea what you're looking at when registers, lines, and flags are referenced, and the logic in that code will be more evident-- but ultimately learn the way that works best for you. Hopefully, you have a fast enough oscilloscope-- I know that I usually have my scope set to the 5us to 10us range so I can see the pulse train for I2C.
http://www.nerdkits.com/forum/thread/1554/
Hi, I've been having trouble getting a piece of code I've made to find whether a set is a proper subset of another. The code I'm using is the following:

Code:
bool contain(bool arrayA[letter], bool arrayB[letter])
{
    int i, counter, counter2;
    counter = 0;
    counter2 = 0;
    for (i = 0; i < letter; i++)
    {
        if (arrayA[i] == arrayB[i])
            counter++;
        if (counter == 26)
            return false;
        else
            break;
    }
    for (i = 0; i < letter; i++)
    {
        if (arrayB[i] == true)
        {
            counter++;
            if (arrayA[i] == true)
                counter2++;
        }
    }
    if ((counter == counter2) /*&&(counter!=0)*/)
        return true;
    else
        return false;
}

The problem is, when I run the program, the output always comes out False. Any help is appreciated.
http://cboard.cprogramming.com/c-programming/70201-finding-proper-subset.html
Date: May 2, 2008 11:31 PM
Author: Kirby Urner
Subject: Free geometry tools...

For those of you lucky enough to have one flatscreen and keyboard per child, this might be especially interesting, but also if you share computers and/or just project:

Python + VPython, both free and open source projects, from python.org and vpython.org respectively, will let your students deliver moving pictures to the screen based on very few instructions involving the concepts we're supposed to be teaching (e.g. sine, cosine, radians, degrees), and in a concise computer language with job market value, so it is easy to prove relevance (as in "what will we use this for??").

Example: the code below is sufficient to produce a lissajous curve, with just a few lines omitted:

def lissa():  # you could make a, b arguments if you want
    """
    Lissajous curves: Other ideas of a, b
    """
    a = 3.0
    b = math.pi/2
    for t in range(360 * 4):  # 0-360 degrees * cycles
        x = math.sin(math.radians(t))
        y = math.sin(a * math.radians(t) + b)
        ball = sphere(pos=(x, y, 0), radius=0.1, color=color.orange)
        rate(100)

That might look intimidating at first, and it is, somewhat, so if you're looking for topics for in-service training, the gov wants you to have more of it anyway, so why not enjoy the ride? Python + VPython is a fun dynamic duo, and will cost your school nothing, except for the time and effort of downloading and installing. Apprenticing students might help with that, as we do in our flagships.

YouTube of VPython:

Re: 2D plotting capabilities:

Kirby 4D
http://mathforum.org/kb/plaintext.jspa?messageID=6202109
Hi all, I am trying to use the following Python code to read data from two .csv files which have data in the following format, e.g.

1, 23.27, 23.21, 83.22
2, 34.25, 65.31, 34.75
... etc

I seem to be getting an error during execution of the function. I would really appreciate some help in fixing this problem.

import csv
from collections import OrderedDict  # OrderedDict must be imported for make_dict below

Or = csv.reader(open('Or.csv'))
Ne = csv.reader(open('Ne.csv'))

def make_dict(data):
    return OrderedDict((d.split(',', 1)[0], d) for d in data if d)

updated = make_dict(Or)
updated.update(make_dict(Ne))

# Write new data to file.csv

Thank-you
https://www.daniweb.com/programming/software-development/threads/443614/fixing-function-problem
Hi Guys, welcome to Proto Coders Point. So you are here to know the best flutter packages/plugins; that means you might have a great and awesome cross-platform project idea in your mind using flutter. Am I right? Then you are in the right place to research the best plugins for flutter app development. Don't miss these simple yet easy-to-use and awesome flutter packages to include in your project that make app development much faster and easier.

5 Best flutter packages/plugins

- GetX
- Equatable
- Flutter Secure Storage
- Path Provider
- Flutter Share

1. GetX Flutter

The GetX package in flutter is a very lightweight plugin & a powerful package that will help flutter developers to build apps much faster. Using GetX we can easily switch between screens, show snackbars, and show dialogs and bottomsheets, all without passing any context. It also combines high-performance state management, intelligent dependency injection & route management, all done quickly.

Features provided in the Flutter GetX library

- GetX State Management
- GetX Route Management
- Dependency Management
- Validation & much more
- GetX Storage

Snippet GetX Code

//Easy Navigation to Flutter Pages
Get.to(Page2());

// Display Snackbar messages
Get.snackbar("Hi", "message");

//Easy to display a dialog pop-up
Get.defaultDialog(title: "THIS IS DIALOG BOX USING FLUTTER GET X ", middleText: "GET x made Easy");

Much more, like these events, is easily possible using Flutter GetX.

2. Flutter Equatable

Using Equatable you can compare objects, and thus save lots of time in writing boilerplate code. Equatable easily overrides == & hashCode; with flutter equatable there is no need to generate code, so you can keep your focus on writing an amazing flutter application.
To learn more visit the official site here

Snippet Code Flutter Equatable Example

import 'package:equatable/equatable.dart';

class Fruits extends Equatable {
  final String FruitName;
  Fruits(this.FruitName);

  @override
  List<Object> get props => [FruitName];
}

// inside main or class
Fruits fruits1 = Fruits("Apple");
Fruits fruits2 = Fruits("Banana");
print(fruits1 == fruits2); //returns false

Fruits fruits3 = Fruits("Banana");
print(fruits2 == fruits3); //returns true

3. Flutter Secure Storage

This Flutter storage package works the same as SHARED PREFERENCES but in a secured form; this flutter storage plugin will store data in a secured way with AES encryption.

A Flutter plugin to store data in secure storage:

- Keychain is used for iOS
- AES encryption is used for Android. The AES secret key is encrypted with RSA and the RSA key is stored in the KeyStore.

Note: This package library will work only on Android 4.3 (API level 18) and above.

Snippet code how to use flutter secure storage

import 'package:flutter_secure_storage/flutter_secure_storage.dart';

//);

4. Flutter Path Provider

A plugin in flutter that helps you find the most commonly used directory locations on a mobile file system. A very useful package library to read any kind of document in a device path such as TemporaryDirectory or any document.

Snippet code example on how to use flutter path provider

Directory tempDir = await getTemporaryDirectory();
String tempPath = tempDir.path;
Directory appDocDir = await getApplicationDocumentsDirectory();
String appDocPath = appDocDir.path;

5. Flutter Share Plugin

So, once your flutter project is ready to be distributed publicly, it's time to add an awesome piece of functionality, such as share, where your app user can easily share the flutter app URL with his friends & family to download your awesome flutter application.

What is the Flutter Share plugin?
The flutter share plugin is very useful when the user wants to share content from the flutter app with any of his friends via the platform share dialog box. This plugin is wrapped with an ACTION_VIEW intent on Android and UIActivityViewController on iOS devices. Whenever the flutter app user wants to share any content he can just click on the share button, which simply pops up a share dialog through which he/she can easily share content.

Snippet code flutter share

//on share button pressed
onPressed: () {
  Share.share('check out my website');
},

Here is an article on the flutter share plugin with an example.
https://protocoderspoint.com/5-best-flutter-packages-plugins/
When I try to execute my ansible playbook, I get the following error message:

Traceback (most recent call last):
  File "/usr/bin/ansible", line 137, in <module>
    from ansible import context
ImportError: cannot import name context

Hey Loki, Looks like your python library ...READ MORE

Check your python version. It should be ...READ MORE
https://www.edureka.co/community/54544/ansible-ansible-import-context-importerror-import-context
Python Modules

In this section we will learn to import and create modules in Python, and we will discuss how to use built-in modules in the Python programming language.

In Python a module can be defined as a file that contains Python statements and definitions. A file that holds Python modules has the extension .py; for example abc.py is a Python module that has Python statements and definitions. Modules are used to break down large programs into smaller ones. Modules help to modify programs or code easily. Modules increase the readability of programs. The most frequently used functions can be placed in modules instead of copying the function into the program repeatedly.

Consider the following example, in which we created a module and saved it with the .py extension:

CODE

>>> def subtract(a, b):
        c = a - b
        return c

We saved this code with the .py extension. This file is the Python module that subtracts two numbers and returns the result after subtraction by storing it in the variable c. The function subtract() is defined inside the module Subtraction.py.

How to Import modules in Python

Modules can be imported inside another module in Python. The import keyword is used to import modules in Python. To import the module defined previously, named Subtraction, the following statement will be used:

import Subtraction

In Python the functions that are defined in the modules are not directly called. They are called by using the name of the module. The functions in the modules can be accessed by using the dot (.) operator. Consider the following example in which we have used the subtract function that was defined in the Subtraction module using the dot (.) operator:

CODE

>>> Subtraction.subtract(6, 2)

OUTPUT

4

The built-in modules can also be imported into the program just as user-defined modules are imported.
The following are some of the ways to import a module:

Python import statement

A built-in module can be imported by using the import keyword, and the functions inside the imported module can be accessed using the dot (.) operator. Consider the following example, in which we import the math module:

CODE

>>> import math
>>> print("The value of PI is:", math.pi)

OUTPUT

The value of PI is: 3.141592653589793

Import with renaming

In Python, a module can be renamed while importing it by using the as keyword. This is illustrated below:

CODE

>>> import math as x
>>> print("The value of PI is:", x.pi)

OUTPUT

The value of PI is: 3.141592653589793

In the above example, math is renamed to x using the as keyword, which can save typing. Note that math is then no longer in scope; only x is recognized as the module name. If we write math.pi now, an error message will appear.

Python from... import statement

In Python we can import specific names from a module by using the from keyword together with the import keyword. This way we do not need to import the whole module. Consider the following example, in which pi is imported from the math module:

CODE

>>> from math import pi
>>> print("The value of PI is", pi)

OUTPUT

The value of PI is 3.141592653589793

In the above example, we imported only the single attribute pi from the module math, so the dot (.) operator is not needed. Consider the following example, in which we import multiple attributes with the from keyword:

CODE

>>> from math import pi, e
>>> print("The value of PI is", pi)
>>> print("The value of e is", e)

OUTPUT

The value of PI is 3.141592653589793
The value of e is 2.718281828459045

In the above example, multiple attributes, separated by commas, are imported from the module math, and their values are then printed using print statements.
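The two forms above can also be combined: a name imported with from can itself be renamed using as. A small runnable sketch (the alias name circle_ratio is just an illustration):

```python
from math import pi as circle_ratio

# The imported attribute is bound under the new name only;
# neither math nor pi is in scope after this import.
print(circle_ratio)  # 3.141592653589793
```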
Import all names

All names or definitions can be imported from a module by using the from ... import syntax with an asterisk (*). Consider the following example, in which we import all names (definitions) from the module:

CODE

>>> from math import *
>>> print("The value of PI is", pi)
>>> print("The value of e is", e)

OUTPUT

The value of PI is 3.141592653589793
The value of e is 2.718281828459045

As the example shows, all the names or definitions are imported from the module math using *, so we can use every attribute in math directly, as we did when printing the values of pi and e.

Python Module search path

When the interpreter is instructed to import a module, it first looks among the built-in modules; if the module is not found there, the interpreter searches the list of directories defined in sys.path. The search is made in the following order:

- Current directory
- PYTHONPATH
- Installation-dependent default directory

Reloading a Module

The Python interpreter imports a module only once during a session. Let's suppose that we have the following code in our module named myModule:

print("MODULE")

Consider the following example, in which the import keyword is used several times to import the same module:

CODE

>>> import myModule
MODULE
>>> import myModule
>>> import myModule

From the above example we can conclude that the code is executed only once, that is, the module is imported only once. The module would be reloaded if we restarted the interpreter, but restarting the interpreter is not a good idea. There is a much better and neater way to do this: we can use the reload() function to reload a module.
Consider the following example, in which the reload() function from the imp module is used to reload the module:

CODE

>>> import imp
>>> import myModule
MODULE
>>> import myModule
>>> imp.reload(myModule)
MODULE
<module 'myModule' from '.\\myModule.py'>

The dir() built-in function

The built-in dir() function is used to find the names that are defined inside a module. Consider the following example, in which the dir() function is used on the module Subtraction, which contains the function subtract():

CODE

>>> dir(Subtraction)

This results in a sorted list of names that includes subtract along with names such as __name__ and __file__. The names that begin and end with double underscores are default Python attributes associated with every module; note that these attributes are not defined by us.
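Both the search path and the dir() inspection described above can be tried on a built-in module; a runnable sketch:

```python
import math
import sys

# The module search path is an ordinary list that can be inspected
# (and, ad hoc, extended with sys.path.append at runtime).
print(isinstance(sys.path, list))  # True

# dir() returns the sorted list of names defined in a module, including
# default attributes such as __name__ that we did not define ourselves.
names = dir(math)
print('pi' in names)        # True
print('__name__' in names)  # True
print(math.__name__)        # math
```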
User Details - User Since - Jun 14 2019, 3:51 PM (9 w, 19 h) Yesterday - Use ExprEvalContext and remove mangling context code Thu, Aug 15 As far as I can tell this case was just overlooked. The original commit adding this change only allows chars to int and chars to chars. Another commit ignores typing of chars. I did not see anything related to this particular case in previous commits. Tue, Aug 13 - Add comment in test Tue, Aug 6 Mon, Aug 5 - Remove 'maybe', remove boolean and fix other call to ParseSimpleDeclaration - Remove changes from accidentally formatted files - Allow decl-specifier source location to propagate to decl parsing Fri, Aug 2 Thu, Aug 1 - Support python2 Mon, Jul 29 - Fix test case spacing - Fix test, formatting and conditional check Fri, Jul 26 - Rework attribute parsing - Formatting fixes - Order swap and elif - Add in-tree as possible default location I agree that parsing according to attribute name/type is not a good solution. - Make filepath optionally user specified Wed, Jul 24 - Add comment Tue, Jul 23 Mon, Jul 22 - Disable self reference checking for C - Style fixes - Style fixes - Add tracking of declaration of initializers in Sema. Thu, Jul 18. Jul 17 2019 The main problem that we have is that the __attribute__ token always causes the parser to read the line as a declaration. Then the declaration parser handles reading the attributes list. Jul 16 2019 - Fixed formatting Revival of and. Jul 15 2019 - Change cast type Jul 12 2019 - Add warning-free test cases - Fixed style - Adding fix - Forgot to fix list.rst - Added script for generation of docs based off Checkers.td - Updated to match newly standardized anchor urls - Add auto redirect and remove alpha checkers - Add auto redirect and remove alpha checkers - Add test Jul 11 2019 - Ran git-clang-format - Fix style on comments in script - Added script for generation of docs based off Checkers.td - Updated to match newly standardized anchor urls Jul 10 2019 - Fix .m and i.. 
- Fix file periods being converted to dashes - Fixed periods in sentences being dashes - Added script for generation of docs based off Checkers.td Jul 9 2019 The contents of each check page are identical other than the check name. Jul 8 2019 Jul 3 2019 - Stylistic fixes of function names and removal of namespace prefixes - Small functional and formatting changes Jul 1 2019 Jun 27 2019 - Minor style change on assert - Add assertion message and simplify test case Jun 19 2019 - [analyzer] Revise test case and add TODO
OpenMediation Test Suite

With the OpenMediation mobile ad aggregation test suite, you can test whether apps and ad units are correctly configured so that they can display ads from third-party ad networks through aggregation. This guide briefly describes how to integrate the OpenMediation mobile ad aggregation test suite into your iOS app.

Preconditions

1. Xcode 12 or higher.
2. iOS 9.0 or higher.
3. iOS OpenMediation SDK V2.0.2 or higher.
4. An OpenMediation account with the app registered.

Note: Use CocoaPods to download OpenMediationTestSuite V1.4.0+; it will download the MBProgressHUD SDK automatically.

Install

The Test Suite needs to be installed using CocoaPods. Please add the following line to your Podfile:

pod 'OpenMediationTestSuite', '2.0.0'

Manual Download

SDK Download: iOS OpenMediationTestSuite V2.0.0

Start the aggregation test suite

Objective-C:

#import <OpenMediationTestSuite/OMTestSuite.h>

[OMTestSuite presentWithAppKey:@"YOUR_OpenMediation_APP_KEY" onViewController:YOUR_CONTROLLER];

Swift:

import OMTestSuite

OMTestSuite.present(withAppKey: "YOUR_OpenMediation_APP_KEY", on: YOUR_CONTROLLER)

Test Suite: HomePage

A disclaimer is displayed every time the test suite is launched, reminding you to turn on test mode for aggregated ad sources. Check the box and click "GOT IT" to continue.

The homepage then displays the Ad Network integration status of your current app. Each Ad Network has three configuration checks: SDK, Adapter, and Configure.
For each Ad Network, this screen will display a warning if any of the following conditions occur:

- The ad source SDK is not installed
- The ad source adapter is not installed
- The backend configuration information is incorrect

If all three configuration checks pass for an Ad Network, the homepage will display "Good", as shown in the figure below.

If you have integrated the Facebook/Mintegral/MoPub/ChartboostBid SDKs in your application (their SDK version numbers cannot be obtained) and the three configuration checks pass for every Ad Network, the homepage will display "All Ad Networks Done" along with the number of networks whose version number could not be obtained, as shown below.

If the verification of an Ad Network instance fails, the homepage will display the number of failed instances, and a yellow prompt will appear on the right of the corresponding Ad Network unit column.

Exit

Click the "More" button in the upper right corner of the homepage. In the pop-up drop-down box, click "Device ID" to view your current device ID, or click "Exit" to leave the test suite.

Ad Network Details Page

The details page displays detailed information about an Ad Network, including its integration STATUS, a TEST ID list (if included), and an INSTANCE list.

STATUS

For the given ad source, the screen indicates:

- Whether the SDK is installed and, if available, the SDK version (some Ad Network SDK versions cannot be obtained, such as Facebook, MoPub, Mintegral, and Helium).
- Whether the adapter is installed and, if available, the adapter version.

On the Instance list page, click the filter button (funnel shape) to open the filter page, where you can filter the instance results by instance status and ad type.

Load And Show Ad

On the TEST ID and INSTANCE list pages, click the "LOAD AD" button to send an ad request to the ad source's SDK.
After the ad request completes, an update prompt is displayed, stating whether the request succeeded or failed.

Ads loaded successfully page

If the request succeeds, the "LOAD AD" button changes to "SHOW AD" and a "Success" prompt appears on the page.

- Banner ads and native ads are displayed directly on the current page.
- Interstitial ads and rewarded video ads are displayed on a newly popped-up page.

Ad display page

The ad opens on the current page, for example:

Data Reporting

After clicking the upload button, your Ad Network integration results and instance verification results are uploaded to the OpenMediation dashboard under SDK Testing - Test Suite Result.
import "database/sql"

Package sql provides a generic interface around SQL (or SQL-like) databases. The sql package must be used in conjunction with a database driver; see the Go wiki for a list of drivers and for more usage examples.

ErrNoRows is returned by Scan when QueryRow doesn't return a row. In such a case, QueryRow returns a placeholder *Row value that defers this error until a Scan.

Drivers returns a sorted list of the names of the registered drivers.

Register makes a database driver available by the provided name. If Register is called twice with the same name, or if driver is nil, it panics.

Begin starts a transaction. The isolation level is dependent on the driver.

Close closes the database, releasing any open resources. It is rare to Close a DB, as the DB handle is meant to be long-lived and shared between many goroutines.

Driver returns the database's underlying driver.

Exec executes a query without returning any rows. The args are for any placeholder parameters in the query.

Ping verifies a connection to the database is still alive, establishing a connection if necessary.

Prepare creates a prepared statement for later queries or executions. Multiple queries or executions may be run concurrently from the returned statement.

Query executes a query that returns rows, typically a SELECT. The args are for any placeholder parameters in the query.

QueryRow executes a query that is expected to return at most one row. QueryRow always returns a non-nil value. Errors are deferred until Row's Scan method is called.

NullBool represents a bool that may be null. NullBool implements the Scanner interface so it can be used as a scan destination, similar to NullString. Scan implements the Scanner interface. Value implements the driver Valuer interface.

NullInt64 represents an int64 that may be null. NullInt64 implements the Scanner interface so it can be used as a scan destination, similar to NullString. Scan implements the Scanner interface.
Value implements the driver Valuer interface.

Row is the result of calling QueryRow to select a single row. Scan copies the columns from the matched row into the values pointed at by dest. If more than one row matches the query, Scan uses the first row and discards the rest. If no row matches the query, Scan returns ErrNoRows.

Close closes the Rows, preventing further enumeration. If Next returns false, the Rows are closed automatically and it will suffice to check the result of Err. Close is idempotent and does not affect the result of Err.

Columns returns the column names. Columns returns an error if the rows are closed, or if the rows are from QueryRow and there was a deferred error.

Err returns the error, if any, that was encountered during iteration. Err may be called after an explicit or implicit Close.

Scan copies the columns in the current row into the values pointed at by dest. If the value is of type []byte, a copy is made and the caller owns the result.

type Scanner interface {
    // Scan assigns a value from a database driver.
    //
    // The src value will be of one of the following restricted
    // set of types:
    //
    //    int64
    //    float64
    //    bool
    //    []byte
    //    string
    //    time.Time
    //    nil - for NULL values
    //
    // An error should be returned if the value can not be stored
    // without loss of information.
    Scan(src interface{}) error
}

Scanner is an interface used by Scan.

Stmt is a prepared statement. Stmt is safe for concurrent use by multiple goroutines.

Close closes the statement.

Exec executes a prepared statement with the given arguments and returns a Result summarizing the effect of the statement.

Query executes a prepared query statement with the given arguments and returns the query results as a *Rows.

Tx is an in-progress database transaction. A transaction must end with a call to Commit or Rollback. After a call to Commit or Rollback, all operations on the transaction fail with ErrTxDone.

Commit commits the transaction.
Exec executes a query that doesn't return rows, for example an INSERT or UPDATE.

Prepare creates a prepared statement for use within a transaction. The returned statement operates within the transaction and can no longer be used once the transaction has been committed or rolled back. To use an existing prepared statement on this transaction, see Tx.Stmt.

Query executes a query that returns rows, typically a SELECT.

QueryRow executes a query that is expected to return at most one row. QueryRow always returns a non-nil value. Errors are deferred until Row's Scan method is called.

Rollback aborts the transaction.

Stmt returns a transaction-specific prepared statement from an existing statement. Example:

updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?")
...
tx, err := db.Begin()
...
res, err := tx.Stmt(updateMoney).Exec(123.45, 98293203)

Package sql imports 9 packages and is imported by 3698 packages. Updated 2015-06-09.
On 16 April 2011 00:18, Andreas Dilger <adilger@dilger.ca> wrote:
> On 2011-04-15, at 6:29 AM, Miklos Szeredi wrote:
>> Apparently tmpfs does not support generic xattr. I understand why
>> tmpfs is an attractive choice for an upper filesystem, so this should
>> be addressed.
>>
>> I see two options here:
>>
>> 1) implement generic xattr in tmpfs
>
> There was a patch posted recently to add xattr support to tmpfs, so that
> it can use security labels:
>
> From: Eric Paris <eparis@redhat.com>
> Subject: [PATCH] tmpfs: implement xattr support for the entire security namespace
> Date: March 29, 2011 12:56:49 PM MDT
>
> Cheers, Andreas

Applying this patch is not sufficient. Apparently more xattrs are needed, but adding them on top of this patch should be easy.

The ones mentioned in the overlayfs doc are:

trusted.overlay.whiteout
trusted.overlay.opaque

The patch implements security.*

Thanks
Michal
Possible Duplicate: Terminating a Python script

Is it possible to stop execution of a Python script at any line with a command?

some code
quit()  # quit at this point
some more code (that's not executed)

sys.exit() will do exactly what you want:

import sys
sys.exit("Error message")

You could raise SystemExit(0) instead of going to all the trouble to import sys; sys.exit(0).

You want sys.exit(). From Python's docs:

>>> import sys
>>> print sys.exit.__doc__

So, basically, you'll do something like this:

from sys import exit

# Code!

exit(0)  # Successful exit

The exit() and quit() built-in functions do just what you want; no import of sys is needed. Alternatively, you can raise SystemExit, but you need to be careful not to catch it anywhere (which shouldn't happen as long as you specify the type of exception in all your try/except blocks).
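The caveat about catching SystemExit can be demonstrated directly: SystemExit derives from BaseException rather than Exception, so a typed except Exception handler lets it propagate, while the exit status stays inspectable. A minimal sketch:

```python
import sys


def run():
    try:
        sys.exit(3)          # raises SystemExit(3)
    except Exception:
        return "swallowed"   # never reached: SystemExit is not an Exception


try:
    run()
except SystemExit as exc:
    print(exc.code)          # 3 — the status passed to sys.exit()

print(issubclass(SystemExit, Exception))  # False
```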
Abstract

.NET binaries can be examined with various cracking tools, such as CFF Explorer and ILDASM, but Reflector eases the task of disassembling or decompiling .NET binaries, and it has proven to be a useful reverse engineering tool among security professionals.

Prerequisites

A security researcher must have thorough knowledge of the .NET-supported programming languages. In this article we confront C# and VB.NET source code, in addition to IL and hexadecimal code. Playing with source code is relatively easy when compared to byte code or opcodes.

A security researcher's machine must be configured with the following tools:

- Red Gate Reflector
- .NET Framework 2.x version
- VC 2010 IDE
- ILDASM.exe (optional)

Reflector

Reflector, the disassembling tool, is developed by Red Gate. Although it used to be available free of cost, the vendor has now commercialized it. Reflector is used to decompile or disassemble .NET binaries, whether executables or dynamic link libraries. The tool can only open binaries compiled for the CLR; it cannot decompile native unmanaged assemblies. This limitation is actually an advantage, as it lets us easily determine the development framework of a DLL or EXE: if the binary opens in the Reflector IDE, it is a .NET assembly.

Reflector has several advantages for reverse engineering professionals. It provides an IL code disassembling facility, so opening ILDASM.exe separately isn't necessary. It can be integrated with Visual Studio 2010 or 2012 as an add-on. It can convert decompiled code into other .NET-supported languages, such as VB.NET or Delphi. We can run the Visual Studio command prompt directly from Reflector, and we can execute or test an executable from Reflector after decompiling its assembly. Finally, we can view decompiled code under any .NET framework version from 1.0 to 4.5.
After downloading and installing the software, the IDE will display something like this:

NOTE: Despite its advantages and features, I would like to address a common misconception: Reflector is not able to edit .NET source code or opcodes in any manner. We can't save modified assemblies as we can with other disassemblers, such as CFF Explorer, IDA Pro, or OllyDbg; Reflector just decompiles .NET assemblies. When it comes to editing .NET binaries, we need to import an open source add-on into the Reflector IDE. Such add-ons provide other useful features, such as editing hex code and opcodes, and let us save our modifications to another file with a different name.

Target Software

We're not relying on any ready-made software or on third-party cracking files in order to demonstrate disassembling assembly source code. Instead, we have developed custom software to crack. It's called "Champu," named after my son's pet. The program can be launched when a vendor initiates a trial, and it works for a fifteen-day period, after which it expires automatically. That's its first security feature. Other security features include a form to authenticate usernames and passwords, and a license key code to guard against piracy. The initial trial version window looks like this:

In the meantime, the user can launch the evaluation version by clicking the Continue button and must then create a username and password. If the user wants to register a fully licensed version of the program, they must go to the vendor's website to get a license key, which is then entered in the following window.

Is there another way to get the full version without paying? Yes: reverse engineering with Reflector is the ultimate way to crack a proprietary program.

Disassembling the Target

As you can see, the Continue button is disabled and a trial expiration alert message is showing in a dialog box.
We're not provided with the source code of the program, so we cannot simply modify the code to extend or eliminate the trial, and we have no way to operate the program except by buying a product key. Now it's time for action. All we have is champu.exe, which is enough. Just launch the Reflector IDE and open champu.exe with it. When it's opened, you'll see the following graphic.

We know it's assembled with the .NET framework, because we were able to open it. The executable is automatically decompiled, or disintegrated, into its corresponding source code. Reflector shows all the rudimentary information associated with this .NET assembly, such as the application type, version, cryptographic key, copyright, GUID, and target framework. We can also obtain extra information about the external .NET assembly references in this executable, along with image resources. In short, we can obtain a lot of information about the target quite easily.

Next, just right click on champu.exe, and the IDE will show you other options such as the analyzer, ILDASM, command prompt, and run. Optionally, you can add or remove other .NET native assemblies, such as System, in Reflector.

Now expand Champu in the left pane, and you can find the namespace definition referred to as Champu in the source code. Select it and it will wrap everything up in the right pane. You can see that champu.exe has a namespace referred to as Champu, which contains the C_Trial, Login, and Register classes, along with two static classes, Program and gData. We can presume that these classes contain the actual functionality and are responsible for operating the program.

The assembly could contain numerous classes, each containing different logic, so it is complicated to figure out where to start tracing. We have to identify the entry point of the executable. That's why you should right click on champu.exe so that Reflector offers the entry point option.
In the right pane, the Main() function of the assembly is displayed, and the C_Trail class loads. So far, we've discovered that the entry point is the C_Trail class, which is where the trial duration logic is implemented. Select that class in the left pane after expanding the Champu namespace. The class gives us an overview of the controls integrated in the trial class user interface. It contains two methods, RegisteredUser() and TrailCheck().

Go to the TrailCheck() method body by selecting it. There we find the entire trial duration logic, just as we would see it in Visual Studio 2010. We can figure out that the program will work for fifteen days from 10/5/2013, and we can also find the trial expiration message in the source code. The full source code of the TrailCheck() method appears in the decompiled view.

We're not yet sure about the calling or triggering condition of the method. Select the TrailCheck() method in the left pane and click the Analyze option. Reflector displays useful information such as what uses the method (the triggering condition is On_Load()) and what its dependencies are.

Reflector shows .NET source code in C# by default, but some programmers aren't proficient in the C# language, so we can instantly change the displayed source code into another .NET-supported programming language, such as VB.NET. If you'd like to see the MSIL opcode instructions behind the source code, we can switch to the corresponding IL code without opening the ILDASM tool. Reflector also provides a utility for importing and exporting assembly source code into an XML file.

Sometimes we need to look at the global assembly cache (GAC), the repository of all .NET assemblies. We can open the GAC in Reflector as seen below.

Summary

This article demonstrates disassembling assembly code using Reflector. As explained earlier, it is not a tool for modifying the instruction sets behind source code; it just decompiles source code so that we can analyze its logic.
This article explores numerous features of Reflector, such as importing and exporting assemblies, the GAC, the analyzer, and code conversion, all of which are useful when disassembling. We've analyzed the C_Trail class implementation to gather information. In the next article, we'll go through the remaining class implementations in order to crack the application.
FORKED for 4dn-dcic. Use to package and deploy lambda functions.

Project description

Python-lambda is a toolset for developing and deploying serverless Python code in AWS Lambda.

Important

This is a FORK of the original Python-lambda package by Nick Ficano. It will NOT be updated regularly and is frozen per our project's needs.

This library takes away the guesswork of developing your Python-Lambda services by providing a toolset that streamlines the annoying parts.

Requirements

- Python 3.6
- Pip (any version should work)
- Virtualenv (>=15.0.0)
- Virtualenvwrapper (>=4.7.1)

Getting Started

Using this library is intended to be as straightforward as possible. Code for a very simple lambda used in the tests is reproduced below.

config = {
    'function_name': 'my_test_function',
    'function_module': 'service',
    'function_handler': 'handler',
    'handler': 'service.handler',
    'region': 'us-east-1',
    'runtime': 'python3.6',
    'role': 'helloworld',
    'description': 'Test lambda'
}

def handler(event, context):
    return 'Hello! My input event is %s' % event

This code illustrates the two things required to create a lambda. The first is config, which specifies metadata for AWS. One important thing to note here is the role field: this must be an IAM role with Lambda permissions; the one in this example is ours. The second is the handler function, which is the actual code that is executed.

Given this code in example_function.py, you would deploy the function like so:

from aws_lambda import deploy_function
import example_function

deploy_function(example_function,
                function_name_suffix='<suffix>',
                package_objects=['list', 'of', 'local', 'modules'],
                requirements_fpath='path/to/requirements',
                extra_config={'optional_arguments_for': 'boto3'})

And that's it! You've deployed a simple lambda function. You can navigate to the AWS console to create a test event to trigger it, or you can invoke it directly using Boto3.
Advanced Usage

Many of the options specified in the code block above for deploying the function were not used there; they become more useful as your lambda functions grow more complicated.

The ideal way to incorporate dependencies into lambda functions is by providing a requirements.txt file. We rely on pip to install these packages and have found it to be very reliable. While it is also possible to specify local modules through package_objects, doing so is not recommended, because those modules must be located at the top level of the repository in order to work out of the box. There is a comment on this topic in example_function_package.py with code showing how to handle it.

Tests

Tests can be found in test_aws_lambda.py. Using the tests as a guide to develop your lambdas is probably a good idea. You can also see how to invoke the lambdas directly from Python (and interpret the response).
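Since a Lambda handler is just a plain Python function, it can be smoke-tested locally before calling deploy_function. A minimal sketch, mirroring the example handler above (the fake event contents are arbitrary):

```python
def handler(event, context):
    # Same shape as the example handler: echo the input event.
    return 'Hello! My input event is %s' % event


# Invoke locally with a fake event and no context, as Lambda would.
result = handler({'key': 'value'}, None)
print(result)  # Hello! My input event is {'key': 'value'}
```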
Instant Servers is the infrastructure as a service (IaaS) system I have been working on during the last months at Telefónica Digital. The service offers a public REST API (Cloud API) that is super simple to use. However, in this post I will show you how to manage your infrastructure using a Java client, without dealing with HTTP requests directly.

Build the Cloud API client

Man does not live by nodejs alone. There is an instantservers project at GitHub that you can easily clone and compile (pull requests are also welcome). In the future it will be published as a proper Maven artifact, so you will be able to skip this step.

git clone
cd ./instantservers/instantservers-api-client
mvn install

That will generate an instantservers-api-client-1.0.0.M1.jar library you can use in your own applications.

Deploy your first virtual machine

To deploy a virtual machine on the Instant Servers cloud you only need to choose a name for the machine, a package that corresponds to the hardware configuration (CPU, memory, disk) you need, and a dataset that represents the image or template you want to use (i.e. Ubuntu 12.04, MongoDB, SmartOS, etc.).

Let the code speak:

package net.guidogarcia;

import com.tdigital.instantservers.model.cloud.Machine;

public class InstantServersExample {

    // there are several datacenters; I use Madrid "eu-mad" in this example
    private static final String CLOUDAPI_URL = "";

    public static void main(String[] args) throws Exception {
        CloudAPIClient client = new CloudAPIClient("username", "password", CLOUDAPI_URL);

        Machine machine = new Machine();
        machine.setName("smallmachine");
        machine.setPackage("g1_standard_1cpu_512mb");
        machine.setDataset("sdc:sdc:smartos64:1.6.3");

        Machine deployed = client.createMachine(machine);
        System.out.printf("Machine id is %s", deployed.getId());
    }
}

You will notice that virtual machines are up and running in a matter of seconds. This is due to the fact that the virtualization is based on rock-solid Solaris zones.
You will need a username and a password to authenticate API calls, but you can sign up for Instant Servers for free (the machines themselves are not free, but you can try one for something like 6 cents per hour). If anyone is interested in other API operations, or in cloud computing in general, leave a comment and I will be happy to write more posts about it.
https://guidogarcia.net/blog/2013/02/17/deploy-virtual-machines-on-instant-servers-cloud-with-java/
There are five ways to update a field in Java in a thread-safe way. But before we start: what do you have to watch out for? If you access a field from many threads, you must make sure that:

a) Changes are made visible to all threads
b) The value is not changed during the update by the other threads
c) Threads that are reading do not see an inconsistent intermediate state

You can achieve this in one of the following five ways.

Volatile Field

When to Use?

Use a volatile field when you have only one thread updating and many threads reading a single-valued field, or when the writing threads do not read the field. It works only for single-valued fields like boolean or int. If you want to update object graphs or collections, use copy-on-write instead, as described below.

Example

The following example shows a worker thread that stops processing based on a volatile field. This makes it possible for other threads, like an event-dispatching thread, to stop the worker thread.

    public class WorkerThread extends Thread {
        private volatile boolean canceled = false;

        public void cancelThread() {
            this.canceled = true;
        }

        @Override
        public void run() {
            while (!canceled) {
                // Do Some Work
            }
        }
    }

How Does it Work?

Declaring the field volatile makes changes made by one thread visible to all other threads. Since the writing thread does not read the value, point b ("the value is not changed during the update by another thread") is fulfilled. And since the field holds a single value, point c ("reading threads do not see an inconsistent intermediate state") is fulfilled as well.

How to Test?

Using vmlens, an Eclipse plugin to test multi-threaded software and detect Java race conditions, we can find fields that should be declared volatile. After declaring the field volatile, we can check in the "order of events" view of vmlens that the field is correctly read and written.

Copy On Write

When to Use?
Use copy-on-write if you want to update a graph of objects or a collection, and the threads mostly read and only rarely update.

Example

The following shows the add and get methods from java.util.concurrent.CopyOnWriteArrayList:

    private transient volatile Object[] array;

    final Object[] getArray() {
        return array;
    }

    public boolean add(E e) {
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            Object[] elements = getArray();
            int len = elements.length;
            Object[] newElements = Arrays.copyOf(elements, len + 1);
            newElements[len] = e;
            setArray(newElements);
            return true;
        } finally {
            lock.unlock();
        }
    }

    public E get(int index) {
        return get(getArray(), index);
    }

How Does it Work?

Again, the volatile declaration makes changes made by one thread visible to the other threads. By using a lock around the updating method, we make sure that the value is not changed during the update. And since we copy the data before changing it, reading threads do not see an inconsistent intermediate state.

How to Test?

We can test this with a multi-threaded test, adding a wait point inside vmlens at the read of the field.

Lock-Based Atomic Update

When to Use?

Use locks when updates and reads happen roughly equally often, until the lock becomes a bottleneck and you need the more performant compareAndSet, as described below.

Example

The following example shows a lock-based counter:

    public class LockBasedCounter {
        private int i = 0;

        public synchronized void addOne() {
            i++;
        }

        public synchronized int get() {
            return i;
        }
    }

How Does it Work?

The synchronized methods make sure that the changes made by one thread are seen by the other threads. Since only one thread at a time can execute the methods protected by the lock, the value cannot be changed during the update by another thread, and the other threads cannot see an intermediate inconsistent state.

How to Test?

We can test this with a multi-threaded test, adding a wait point at the updating method.
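Beyond vmlens, the lock-based counter can also be exercised with a plain brute-force harness: hammer it from several threads and check that no increment is lost. The thread and iteration counts below are arbitrary choices of mine, not from the article.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LockBasedCounterDemo {

    static class LockBasedCounter {
        private int i = 0;
        public synchronized void addOne() { i++; }
        public synchronized int get() { return i; }
    }

    public static void main(String[] args) throws InterruptedException {
        LockBasedCounter counter = new LockBasedCounter();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // 4 threads, each incrementing 10,000 times
        for (int t = 0; t < 4; t++) {
            pool.execute(() -> {
                for (int n = 0; n < 10_000; n++) {
                    counter.addOne();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // With synchronized in place this always prints 40000;
        // remove the keyword and updates start getting lost.
        System.out.println(counter.get());
    }
}
```

Such a harness demonstrates the absence of lost updates only probabilistically, which is exactly why the article reaches for a tool like vmlens.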
Conclusion

Which of these first three of the five ways you use to update a field in a thread-safe way depends on your performance and safety requirements. Whichever way you choose, you should test it. You can read more about unit testing multi-threaded software with vmlens and concurrent-junit in "a new way to junit test your multithreaded java code". If you have a question or remark, please add a comment below, and stay tuned for part 2 tomorrow!
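Part 2 covers the compareAndSet approach mentioned above. As a preview of the idea only, and not the article's own code, here is a minimal lock-free counter built on java.util.concurrent.atomic.AtomicInteger with a classic CAS retry loop:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Retry loop: re-read and compare-and-set until no other thread interfered.
    public void addOne() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            threads[t] = new Thread(() -> {
                for (int n = 0; n < 10_000; n++) counter.addOne();
            });
            threads[t].start();
        }
        for (Thread thread : threads) thread.join();
        System.out.println(counter.get()); // 40000: no lost updates, no lock
    }
}
```

In practice you would simply call incrementAndGet(), which performs the same retry internally.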
https://dzone.com/articles/5-ways-to-thread-safe-update-a-field-in-java
Send you a strange technique of concurrent programming to take off comfortably

Foreword

Time flies. I have been combing through all my past articles, and I have to say it is a genuinely grand project. While sorting them out I found that many of my early pieces were poorly written and barely read. I also rediscovered many knowledge points I had written about before but no longer remember clearly. Embarrassing. Still, it really is a process of learning and consolidating all over again, which is a good thing.

Talking about Future

This article is about CompletionService. Before we get to it, let's review the usage of Future. Let me ask you first: after you submit a group of computing tasks to a thread pool and want their return values, which submission method of Executor should you use, and which overload of that method?

What, you can't answer? You forgot already?

As a previous article explained: because it is a group of computing tasks whose return values you want, each return value is encapsulated in a Future. How do you get it?
You call the get method of Future: either the no-timeout get that waits forever, or the get with a timeout that gives up at a deadline.

Let's look at an example:

    public class JDKThreadPoolExecutorTest {
        public static void main(String[] args) throws Exception {
            ExecutorService executorService = Executors.newCachedThreadPool();
            ArrayList<Future<Integer>> list = new ArrayList<>();

            Future<Integer> future_15 = executorService.submit(() -> {
                TimeUnit.SECONDS.sleep(15);
                System.out.println("The 15s task is complete.");
                return 15;
            });
            list.add(future_15);

            Future<Integer> future_5 = executorService.submit(() -> {
                TimeUnit.SECONDS.sleep(5);
                System.out.println("The 5s task is complete.");
                return 5;
            });
            list.add(future_5);

            Future<Integer> future_10 = executorService.submit(() -> {
                TimeUnit.SECONDS.sleep(10);
                System.out.println("The 10s task is complete.");
                return 10;
            });
            list.add(future_10);

            System.out.println("Start preparing to get results");
            for (Future<Integer> future : list) {
                System.out.println("future.get() = " + future.get());
            }
            Thread.currentThread().join();
        }
    }

There are three tasks, with execution times of 15s, 10s and 5s respectively, submitted as Callables through the submit method of the JDK thread pool. Compile it with your eyes and run it in your head: what do you think the output is?

First, the main thread submits the three tasks to the thread pool and stores the returned Futures in a List, then prints "Start preparing to get results". It then enters the for loop, calls future.get() on each element and blocks waiting. Is the output what you expected?

From the output we can see the problem: an obvious barrel effect.
Among the three asynchronous tasks, the one that takes the longest was submitted first, so its Future enters the list first. When the loop fetches the results, the first get blocks all the way, even though the 5s and 10s tasks finished long ago.

OK, an analogy. Imagine a scenario: suppose you are a sea king with many ordinary female friends. You invite three of them to dinner at the same time and say to each: put on your makeup first, tell me when you are done, and I will drive over to pick you up.

Xiaohong takes 2 hours to make up.
Xiaohua takes 1 hour.
Xiaoyuan takes 30 minutes.

Because you spoke to Xiaohong first, you wait at her door until she finishes her makeup. By the time she is done and in the car, the other two have long been ready, waiting at home to be picked up. That is not what a competent sea king looks like.

This is the limitation of Future in this scenario. Coding the scenario gives the following (the code can be copied, pasted and run; I suggest you try it):

    public class JDKThreadPoolExecutorTest {
        public static void main(String[] args) throws Exception {
            ExecutorService executorService = Executors.newCachedThreadPool();
            ArrayList<Future<String>> list = new ArrayList<>();

            System.out.println("Let's have dinner with some girls.");

            Future<String> future_15 = executorService.submit(() -> {
                TimeUnit.SECONDS.sleep(15);
                return "Xiaohong (15s of makeup) is ready.";
            });
            list.add(future_15);

            Future<String> future_5 = executorService.submit(() -> {
                TimeUnit.SECONDS.sleep(5);
                return "Xiaoyuan (5s of makeup) is ready.";
            });
            list.add(future_5);

            Future<String> future_10 = executorService.submit(() -> {
                TimeUnit.SECONDS.sleep(10);
                return "Xiaohua (10s of makeup) is ready.";
            });
            list.add(future_10);

            TimeUnit.SECONDS.sleep(1);
            System.out.println("All notified, waiting.");
            for (Future<String> future : list) {
                System.out.println(future.get() + " I'll pick her up.");
            }
            Thread.currentThread().join();
        }
    }

Run it and the results still come back in submission order. They are supposed to be equally ordinary friends, so why do you have to wait for Xiaohong, whose makeup takes the longest?
Why not pick up whoever is ready first? What will Xiaoyuan and Xiaohua think of you when you operate like that? One can only say: you are "a good man".

What, you ask me what a sea king is?

CompletionService saves the sea king

With CompletionService, the scenario above plays out differently. Let's look at the usage directly:

    ExecutorService executorService = Executors.newCachedThreadPool();
    ExecutorCompletionService<String> completionService = new ExecutorCompletionService<>(executorService);

It is very convenient to use: wrap the thread pool with an ExecutorCompletionService, then submit tasks through the submit method of the completionService. The code is as follows:

    public class ExecutorCompletionServiceTest {
        public static void main(String[] args) throws Exception {
            ExecutorService executorService = Executors.newCachedThreadPool();
            ExecutorCompletionService<String> completionService = new ExecutorCompletionService<>(executorService);

            System.out.println("Let's have dinner with some girls.");

            completionService.submit(() -> {
                TimeUnit.SECONDS.sleep(15);
                return "Xiaohong (15s of makeup) is ready.";
            });
            completionService.submit(() -> {
                TimeUnit.SECONDS.sleep(5);
                return "Xiaoyuan (5s of makeup) is ready.";
            });
            completionService.submit(() -> {
                TimeUnit.SECONDS.sleep(10);
                return "Xiaohua (10s of makeup) is ready.";
            });

            TimeUnit.SECONDS.sleep(1);
            System.out.println("All notified, waiting.");

            // Loop 3 times because three asynchronous tasks were submitted above
            for (int i = 0; i < 3; i++) {
                String returnStr = completionService.take().get();
                System.out.println(returnStr + " I'll pick her up.");
            }
            Thread.currentThread().join();
        }
    }

No need to compile this one in your head; here is the result straight away: whoever finishes her makeup first gets picked up first.

Writing this, I couldn't help applauding when I saw the output. A real sea king has to be a master of time management.

Comparing the two versions, the changes are tiny. The object that executes the submit method becomes the ExecutorCompletionService.
The method of obtaining task results becomes:

    String returnStr = completionService.take().get();

Before looking at the principle, savor that line: completionService.take() hands something out, and get is called on it. Intuition says that whatever take() returns must be a Future object, and that those Future objects must be sitting in a queue. The next section confirms it.

The CompletionService principle

First, CompletionService is an interface, and ExecutorCompletionService is the implementation class of that interface.

Look at the constructors of ExecutorCompletionService: you must pass in a thread pool object, and the default queue is a LinkedBlockingQueue. You can also specify which queue to use.

Then look at its task submission methods. Because ExecutorCompletionService exists mainly to handle return values gracefully, it supports two flavors of submit, both with return values. The time-management-master version of the sea king above uses the Callable flavor.

Now compare direct submission to an Executor with submission through ExecutorCompletionService. The difference lies in the execute call. ExecutorCompletionService submits tasks like this:

    executor.execute(new QueueingFuture(f));

So the difference is the Runnable handed to execute. What is a QueueingFuture? The secret is basically all here: QueueingFuture extends FutureTask and overrides the done method to put the task into the queue. The meaning of the override is that the moment a task completes, it is placed into the queue. In other words, every task in the queue is a finished task, and each of those tasks is a Future. Call the take method on an empty queue and you block and wait; but any Future you do take is guaranteed ready, so calling get on it returns the result immediately.

What do you think this mechanism is doing? Isn't it exactly what you were doing by hand?
Before, after submitting tasks, you had to track the Future returned by each one yourself. Now CompletionService tracks those Futures for you: the caller is decoupled from the Futures.

With the principle covered, one caveat deserves attention. When your use case does not care about the return values, do not submit tasks through CompletionService. Why? Because, as shown above, there is a queue inside. If you never consume the return values, you never drain the queue, so objects accumulate in it until it finally blows up with an OOM.

Application in open-source frameworks

As mentioned earlier, CompletionService is an interface, and the JDK's ExecutorCompletionService implements it. Open-source frameworks ship corresponding implementations as well. Redisson, for example, has one that looks almost identical to ExecutorCompletionService, with one difference in implementation: when it puts the Future into the queue, it does not override the done method but uses the onComplete callback of reactive programming.

The core idea of CompletionService is: Executor plus Queue. That idea reminds me of a class I saw in Dubbo. The core logic of its doInvoke method goes like this: first, a queue is defined (label ①). Asynchronous tasks are submitted in the loop body (label ②), one iteration per service provider. Each sub-thread puts its result into the queue (label ③). As soon as one result is put in, it can be taken out (label ④, within the specified time) and the method returns immediately. In this way we can call multiple service providers in parallel and return as soon as any one of them responds. I think this has a great deal in common with the idea behind CompletionService. We should learn from CompletionService, and from its ideas.
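The done-hook mechanism described in the principle section can be sketched in a few self-contained lines. The following is my own simplified reconstruction of the idea, not the JDK source: an anonymous FutureTask subclass whose done() hook enqueues the completed Future, so completion order, not submission order, drives consumption.

```java
import java.util.concurrent.*;

public class MiniCompletionService {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        // completed Futures land here, in completion order
        BlockingQueue<Future<String>> completed = new LinkedBlockingQueue<>();

        int[] delays = {300, 100, 200}; // milliseconds; the 100ms task finishes first
        for (int delay : delays) {
            FutureTask<String> task = new FutureTask<String>(() -> {
                TimeUnit.MILLISECONDS.sleep(delay);
                return delay + "ms task done";
            }) {
                @Override
                protected void done() {
                    completed.add(this); // the QueueingFuture trick
                }
            };
            pool.execute(task);
        }

        for (int i = 0; i < delays.length; i++) {
            // take() blocks until some task completes; get() then returns at once
            System.out.println(completed.take().get());
        }
        pool.shutdown();
    }
}
```

This is essentially what ExecutorCompletionService wires up for you, with the Executor and the BlockingQueue supplied through its constructor.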
https://programmer.help/blogs/send-you-a-strange-technique-of-concurrent-programming-to-take-off-comfortably.html
IRC log of tagmem on 2003-06-02 Timestamps are in UTC. 14:05:17 [RRSAgent] RRSAgent has joined #tagmem 14:05:25 [tim-mit] RRSAgent, pointer? 14:05:25 [RRSAgent] See 14:08:31 [Norm] Ian, hello? 14:09:52 [Ian] zakim, call Ian-BOS 14:09:52 [Zakim] sorry, Ian, I don't know what conference this is 14:09:56 [Ian] zakim, this is TAG 14:09:56 [Zakim] sorry, Ian, I do not see a conference named 'TAG' 14:10:05 [tim-mit] Ian, we are NOT on Zakim 14:10:10 [Chris] ian, dial direct 14:10:16 [Chris] conf is not on zakim 14:11:26 [Norm] yes 617-258-7910 14:12:15 [Ian] Roll Call: SW, TB, TBL, NW, CL, IJ 14:12:42 [Ian] Regrets: PC, DO 14:12:55 [Ian] Missing: DC, RF 14:14:45 [Ian] # Accept 27 May teleconference minutes? 14:17:06 [Ian] Correction: Previous meeting 12 May. 14:17:13 [Ian] No resolution on state of those minutes; held over. 14:18:20 [Ian] Review of agenda: 14:19:06 [Ian] Next meeting: Proposed 9 June (regular teleconf on Monday) 14:19:23 [Ian] NW: Likely partial regrets (will be on West Coast) 14:19:33 [Ian] Resolved: next meeting 9 June 2003 14:19:51 [DanC] DanC has joined #tagmem 14:19:55 [Ian] Next virtual teleconf scheduled for 19 June. Conflicts with DC, RF, SW, TB. 14:20:50 [Ian] +DanC 14:20:54 [Ian] =================== 14:21:00 [Ian] Feedback from Budapest. 14:21:12 [Ian] CL: I think the AC session went well. Gave AC reps an idea of what we do. 14:21:48 [Ian] CL: I suspect they feel more comfortable about the TAG. 14:21:54 [Ian] CL: We did two straw polls. 14:22:18 [Ian] Member-only slides: 14:22:32 [Ian] CL: I think AC said they wanted arch doc both better and sooner. 14:22:53 [Ian] TBL: For that question, I think people wanted us to cover historic Web, Sem Web, Web services, ... 14:23:11 [Ian] TBL: I think our best course is to give them what we've got when we've got it. 14:23:25 [Ian] SW: What about feedback on RDDL doc? 14:23:40 [Ian] TB: I've received queries from Robin Berjon about future of RDDL. 
14:24:08 [Ian] TB latest version: 14:24:22 [Ian] TB: Web3D folks interested in knowing about RDDL 14:25:15 [DanC] agreed? 14:25:18 [Ian] SW: I heard from IJ that at least one suggestion was to publish as a Note, then take it from there. 14:25:45 [Roy] Roy has joined #tagmem 14:27:28 [Ian] DC, IJ: I don't remember much feedback to TAG presentation at WWW2003. 14:27:37 [Roy] I've been trying to get into the telecon... what is the passcode? 14:27:45 [Ian] IJ: other ideas: 14:27:55 [Ian] - Allow people to register as customers of an issue. 14:28:22 [Roy] bugger 14:28:38 [Ian] + RoyF 14:29:58 [Ian] ======================== 14:30:21 [Ian] [TAG looks at using VNC for collaboration during this meetnig] 14:31:14 [Stuart] xvncviewer -shared norman.walsh.name:1 14:32:47 [Ian] Read-through of Arch Doc 14:33:06 [Ian] 14:33:20 [Ian] NW: My goal here is to walk through doc with eye to finding show-stoppers for last call. 14:34:08 [Ian] NW: In particular, look at section 4. 14:35:04 [Ian] TB: I sent comments on section 4: 14:35:15 [Ian] [Lots of people read TB's proposal] 14:35:17 [DaveO] DaveO has joined #tagmem 14:35:34 [Ian] NW: I agree work with TB on getting this written. 14:37:12 [Ian] [TAG gives up on vnc for now] 14:37:24 [Ian] + DO 14:37:50 [Ian] CL: Section 4.3 Presentation/Content/Interaction is misplaced. 14:37:57 [Ian] TB: There is content in the draft on this section. 14:38:07 [Ian] TB: I thought it could be made to all out of 4.2.2 of my proposal. 14:38:17 [Ian] TB: I wonder whether related to final form/re-usable. 14:38:25 [Ian] CL: I've been working on a draft finding on this topic! 14:38:55 [Ian] CL summarizes broadly: We seem to be making the arch doc more a summary doc and pointing to findings for detail. 14:39:05 [Ian] NW: Seems reasonable to move content/presentation to 4.2.2 14:39:26 [Ian] TB: I am willing to draft section 4 by end of week. 
14:40:01 [Ian] Looking at 14:41:47 [Stuart] Reviewing TB Draft: Open endedness 14:41:47 [tim-mit] tim-mit has joined #tagmem 14:43:47 [DanC] see also: QA spec guidelines 14:43:54 [Stuart] CL: add attention to authoers needs 14:44:12 [Norm] that was me, actually, not that it matters 14:44:43 [Chris] CL said 'availability of test suite' (and implementation report) 14:44:46 [Stuart] TBL: Under 4.1.1 - formal specification/machine readable form 14:45:01 [DanC] QA Framework: Specification Guidelines 14:45:01 [DanC] W3C Working Draft 10 February 2003 14:45:15 [Stuart] TB: that's what I'd intnded with attention to programmers' needs 14:46:05 [Stuart] DO: Want to push back on the need for 4.1.1 on the basis that this is generic good spec wrinting characteristics 14:46:28 [Stuart] CL: In think its useful and should be expanded. 14:46:54 [Stuart] TBL/DC: The QA activity covers this. 14:47:10 [Stuart] TB: Could go either way. 14:48:07 [Stuart] CL: <scibe has lost thread> 14:48:33 [Stuart] TB/CL: Does give us a place to put the error handling thing. 14:49:08 [Stuart] TB: Want to avoid the 'debacle;' of RDF rewriting their own BNF for XML. 14:49:26 [Chris] re - if XML, defined at the Infoset level 14:49:47 [tim-mit] you mean XPath level? 14:50:16 [Chris] make some of these 'good practice notes' 14:52:07 [Stuart] TB:Action: write up a first draft of section 4 of Arch doc base on email: 14:52:54 [tim-mit] PDF is better than PNG 14:52:59 [tim-mit] gtom the point of view 14:53:11 [tim-mit] [discussion of terms for 4.2.2] 14:53:26 [Stuart] Discussion of Semantics v Presentation 14:53:45 [tim-mit] from the point of view that you can index it, cut and paste text to a certian extent. 14:55:06 [Stuart] CL: I'm working on a finding contentPresentation-26. Want to be sure we're headed in the same direction. 14:55:43 [DanC] ah! 2 bits of writing. I see now. 
14:55:52 [Stuart] TB: I'm happy draft the whole section, and I'm happy to accept some paragraphs on Content/Presentation from Chriss 14:56:50 [DanC] pointer to finding? 14:56:55 [Stuart] TB: Makes sense to move 4.2.3 under 4.2.2 14:57:25 [Stuart] DO: Extensibility issues? Composable or reusable bit? 14:57:57 [Chris] file:c:\cygwin\home\Chris\w3c\WWW\2001\tag\doc\contentPresentation-26-20030602.html 14:58:15 [Chris] oops 14:58:17 [Stuart] DO: Concrete example: Schema restriction if designing for extensibility. Deterministic/non-deterministic content models 14:58:54 [Stuart] DO: Two types of composability: "backward compatible forward revision". 14:59:03 [Stuart] CL: Sounds more like interop 14:59:40 [Stuart] TB: Propose DO submits a couple of paragraphs to the group. 14:59:47 [tim-mit] Backward compatability is an interesting discussion but big and long. 14:59:50 [Chris] related issue for 4.2.3 is 14:59:51 [Stuart] DO: Agrees 14:59:52 [tim-mit] q+ 15:00:31 [Stuart] Action DO: Write up a couple of paragraphs on extensibility for section 4. 15:01:51 [Stuart] TBL: Designing languages for extensibility and backward comaptibility is very complicated. Maybe we need to get back to what URI's meand 15:02:03 [Stuart] s/meand/mean 15:02:23 [DaveO] q+ 15:02:51 [Stuart] TBL: ... 1st thing we have to do is document 'how things work'. 15:02:57 [Stuart] q+ 15:03:03 [Stuart] ack tim 15:03:17 [Chris] q+ 15:03:32 [Stuart] ack Dave 15:03:58 [timbl] - Extensability is a good thing 15:04:20 [timbl] - If you use it distinuguish optional from mandatory extension 15:04:22 [Stuart] DO: Think TBL is saying versioning is hard... shouldn't go there (for now)... 15:04:34 [timbl] - For RDF, OWL provides powerful forwrad compatability tools 15:04:43 [Chris] ack chris 15:04:47 [timbl] q+ 15:04:52 [Stuart] TB: Yep.. versioning is hard, multi dimensional and needs to be designed in up front. 
15:05:02 [Stuart] DO: <missed question> 15:05:15 [timbl] DO: Don't tyou think there are sepcific features for this? 15:05:20 [Norm] DO: Don't you think there are features in W3C specs to deal with this? 15:05:23 [Stuart] TB: Anything defined in XML is more extensible/flexible 15:05:54 [Chris] q+ 15:05:59 [DanC] hmm... " Web Architecture: Extensible Languages" 15:06:07 [DaveO] q+ 15:06:24 [timbl] q+ to say the 3 "-" things which he typed in above. 15:06:30 [timbl] q+ timbray 15:06:51 [Stuart] SW: What expectations should we set wrt composablity of XML vocabs. 15:07:39 [Stuart] TB: Yes we should say something about what folks should consider if thery want their langauages to composable and extensible 15:08:39 [Stuart] TBL: Worth say that *if* you have extensibility to be able to distinguish between mandatory and optional extensionss 15:09:05 [Stuart] TBL: OWL/SemWeb can describe the relationships between versions of things. 15:09:46 [DanC] yes, the mandatory/optional bit is an architectural point that The Director has found wanting in a spec in the past; let's please document it clearly to prevent the sort of uncomfortable end-game stuff that happened before. 15:10:02 [Stuart] DO: Ref to TBL design notes on mandatory/optional 15:11:29 [Stuart] DO: I have a number of constituents for help on versioning, extensibility and evolution - larger community than those interested in resolving httpRange14 15:11:34 [timbl] ack tim 15:11:43 [timbl] ack chris 15:11:53 [timbl] ack daveo 15:12:14 [timbl] DanC, we cannt hear you at all 15:12:26 [timbl] Who dropped? 15:12:50 [timbl] We heard dialtime, ian 15:12:55 [timbl] We heard dialtone, ian 15:13:24 [Stuart] DC: Version in general is impossible... but maybe a few things worth saying. Leave space for versioning/extending; use URI as piviot points. 15:14:19 [Stuart] DC: All sorts of mechanisms in Schema... but I don't know if there are any that really generalise. 15:14:51 [DanC] ... generalize to issues of web architecture. 
15:15:39 [Chris] q+ to talk about namespaces as opposed to namespace documents 15:15:49 [DanC] ack danc 15:15:50 [Chris] ack DanC 15:16:15 [DanC] 15:16:19 [DanC] 4.3.1. When to use XML 15:16:32 [Stuart] Looking at 4.3.1 in WebArch doc "When to use XML" 15:17:00 [Stuart] NW: Volunteered some text for 4.3.2 15:17:05 [Ian] + Ian on land line. 15:17:49 [Stuart] CL: 4.3.2 starts of saying should use namespaces and then focusses on what should be in a namespace doc, rather than 15:18:20 [Stuart] talking about why ns's should be used. 15:18:39 [Stuart] NW: It's in the text I proposed. 15:18:58 [Stuart] 4.3.5 Complex XML with references to TAG issues 15:19:08 [Chris] binary would go into the proposed 4.2.1 Binary vs. Textual 15:19:08 [Stuart] CL: Move to ??? 15:20:05 [Ian] TB: I think 4.4 is a little long; would try to boil down. 15:20:15 [Ian] CL: 4.4.1 in existing doc is rubbish 15:20:30 [Ian] CL: I am expecting to replace section on content/presentation 15:21:04 [Ian] TB: 4.4.1 disappears. 15:21:26 [Chris] 4.4.1 is notes about what to talk about later. some of it is already talked about 15:21:31 [Ian] NW: I hear proposal that, for the purpose of advancing doc to last call, abandon 4.4.1 and rely on people to resurface issues discussed there in a more concrete form if they want them addressed. 15:21:37 [Ian] [No objections to that course of action] 15:22:10 [Ian] CL: Not ready for www-tag yet. 15:22:44 [Ian] CL: I can commit to having this ready for www-tag next week. 15:23:00 [Ian] [NW 4.5 arch doc] 15:23:05 [Chris] kill 4.5. Ideas and issues 15:23:16 [Ian] NW: This is a laundry list; sounds like we would be inclined to throw 4.5 in bit bucket. 15:23:27 [Ian] TB: I don't see good solution to xlinkScope-23, item 8. 15:23:44 [Ian] TB: Not sure where in the arch doc this belongs. 
15:24:21 [Ian] TB: I propose - new section 4.5 in my draft outline - "Hyperlinking in Resource Representations" 15:24:34 [timbl] q+ 15:24:42 [Ian] DC: Perhaps earlier in section 4 would be better (than 4.5) 15:25:03 [Ian] DC: Rationale - links are the most important architectural piece for the Web. Should be earlier 15:25:18 [Ian] CL: We need to put into the proposed section a difference between a URI and a link. 15:25:23 [timbl] q+ to say that we should keep the issues from 45. in general 15:25:29 [timbl] ack Chris 15:25:29 [Zakim] Chris, you wanted to talk about namespaces as opposed to namespace documents 15:25:34 [Ian] NW: How about just before TB's 4.2? 15:25:40 [Chris] difference between a uri and a link needs to go in there 15:25:54 [Ian] TB: Not convinced by DC; I think this document will not be accessed linearly. 15:26:05 [Ian] ack Chris 15:26:17 [Ian] ack timbl 15:26:17 [Zakim] timbl, you wanted to say that we should keep the issues from 45. in general 15:26:48 [timbl] s/45/4.5 15:26:54 [Ian] IJ: What are other homes for items in 4.5? 15:27:05 [Chris] so we decided to remove 4.5 but first find suitable homes for each point and it s associated link to an issue 15:27:20 [Ian] TB: Perhaps another top-level section on mime type being authoritative? 15:27:25 [Ian] IJ: Is that part of protocols or formats? 15:27:41 [Ian] CL: Maybe mime type stuff for beginning of section 4 15:27:50 [Ian] CL: And maybe needs a section title (for first few paragraphs) 15:28:07 [Ian] TB: I'll think of a title for that section. 15:28:25 [Chris] need a name for the 'representation is mime metadata plus the actual bits' section 15:28:43 [Ian] Action NW: Take a stab at proposed new 4.5, wherever it ends up. 15:28:49 [Chris] 5. Machine-to-machine interaction 15:29:06 [Ian] TB: I propose we leave 5 skeletal and delete 6. 15:29:25 [Ian] TB: I think we won't have time to do much on 5 and get 4 more baked. 
15:29:29 [Roy] just "Interaction", please 15:29:55 [Ian] TBL: I think that 5 is not about multi-modal interaction. 15:30:17 [Ian] CL: I think so too, but don't know where to put 5.1 DI and MMI 15:30:29 [Ian] DC: Nothing in section 5 that I can't live without. 15:30:37 [Ian] TB: I think 5.2 is valuable. 15:30:47 [Ian] RF: It just looks weird. :) 15:30:59 [Chris] 5.1. Device Independence and Multimodal Interaction needs to go in a 'Human to Machine interaction' section 15:31:15 [Chris] maybee, some aspexcts of linking would also go in that section 15:31:28 [Chris] Web is not static, so we need to deal with interaction 15:31:28 [Ian] TBL: If we had something saying "Don't make protocol stateful when you can make it stateless" that would be reasonable. 15:33:21 [Ian] [Discussion of what is scope of work about which we can declare victory] 15:33:33 [Ian] TBL: You can make the last call "complete" by listing sections that are not baked. 15:33:35 [Stuart] q? 15:33:52 [timbl] q+ 15:33:57 [Ian] DC: The scope should be stated at the beginning of the doc. Don't require reader to read chapter 5 and fall off edge of earth. 15:34:25 [Ian] NW: Propose - (1) Delete section 5 and (2) state in scope of doc that we don't cover this (3) List a few issues that we don't cover. 15:34:30 [Ian] DC: Don't leave it there as a place-holder. 15:34:53 [Ian] DC: The document should be complete for its chosen scope. I wouldn't want to read highly polished text then unpolished text. 15:35:03 [Ian] CL: I would like it to be clear that the scope is limited. 15:35:41 [Ian] TB: I think that a placeholder section would have tutorial value, just to point out to reader that they don't have everything they need in this doc. 15:35:49 [Ian] NW: What about moving 5 to a finding? 15:37:19 [Ian] CL: Proposal - leave section 5 in place (arch issues about m-to-m interaction) [2] move 5.2 to 5.2 [3] move 5.3 to 5.2 and list open issues. 
15:37:37 [Ian] TBL: Sounds to me like scope of section 5 is (1) working of http and (2) web services 15:37:46 [Ian] (Arch principles behind them) 15:38:22 [Ian] TBL: (a) Web Services arch and (b) Message passing that supports REST model 15:38:34 [Ian] TBL: Design (a) of HTTP and (b) use of HTTP POST 15:38:57 [Chris] if 5 is called interaction then we need 5.1 machine-machine interaction (ws, etc) and 5.2 human-machine interaction 15:39:32 [Ian] RF: 5 is about communication between two separable components of the architecture. 15:39:40 [DanC] q+ timbray 15:39:42 [Stuart] q+ tbray 15:39:53 [Ian] q- tbray 15:39:57 [Ian] doh! 15:40:16 [Ian] TB: I like the architectural tripod. 15:40:35 [Ian] TB: I disagree with TBL - I think that interaction is qualitatively different and can be discussed separately. 15:40:54 [Ian] TB: HTTP and Web Services models of interaction are what we would discuss here. 15:41:11 [Ian] TBL: Interaction between client/server is done to create an information space view (quasi-static) to the user. 15:41:28 [Ian] TBL: That illusion (reality!) is created by a bunch of http messages running back and forth. 15:41:43 [Ian] TBL: We can talk about that, and we can talk about Web services 15:41:55 [Ian] RF, CL: I would also add to that section - human/machine interaction (e.g., voice) 15:42:31 [Ian] TB: I think that Human/Computer Interaction doesn't belong in section 5. I"m having a hard time thinking how hci issues fit into web architecture. 15:42:46 [Ian] TB: I think the web arch should impose zero constraints on interaction from people. 15:42:47 [Roy] q+ 15:43:01 [Ian] CL: I disagree with TB on this point. 15:43:05 [Stuart] ack timbl 15:43:31 [timbl] HCI and WS are different. 15:43:40 [Ian] TB: I propose we create a new top-level section on Human/Computer Interaction 15:43:51 [DanC] eek! new sections now? hmm... i thought we were trying to figure out which parts we could finish this summer. 
15:44:03 [Ian] RF: The reason that HCI is part of this is *latency*. 15:44:12 [Chris] good point 15:44:16 [Ian] RF: Latency is important because of HCI part. It's part of the notion of the arch of the system. 15:44:23 [timbl] We are trying to figure out ho wto describe the scope of what he have done -- this involves enummerating things out of the current scope. 15:44:58 [timbl] latency? 15:45:10 [Stuart] q+ 15:45:17 [Roy] ack Roy 15:45:21 [Ian] IJ: can we scatter HCI issues as rationale throughout the doc? 15:45:36 [Chris] latency example - server-side imagemaps vs client-side imagemaps 15:45:40 [Ian] TB: That makes sense; another example is transcribable URIs, or composability for formats. 15:45:50 [Ian] TB: Maybe not a top-level section, but scattered throughout. 15:46:07 [Chris] another latency example - 'getting a new html doc on every interaction' vs 'rollovers' 15:46:12 [Ian] SW: Someone should take a close look at 5 and suggest an outline has TB has done for 4. 15:46:22 [timbl] "y reducing the average latency of a series of interactions " - RF ch 5 15:46:40 [timbl] ok 15:46:49 [timbl] Tim: Propose we sjip section5 fornow 15:46:55 [timbl] that was timBray 15:47:27 [Chris] so we go to last call with a bipod - an architecture for a static, non-interactive web 15:47:32 [timbl] DanC: Propose we remove section 5 for practicality. 15:48:36 [timbl] STRAW poll 15:48:43 [timbl] Stuart: section syaing work to be done 15:48:45 [timbl] Norm: too 15:48:51 [timbl] TBRary too 15:48:57 [timbl] TimBL: abstain 15:49:09 [timbl] DanC: Remove entirely 15:49:19 [timbl] Chris: section saying work to be done 15:49:25 [timbl] Roy: " " " "" " 15:49:33 [timbl] DO: wrok to be done. 15:49:52 [timbl] (Ian off teh call) 15:50:30 [Chris] break?? 15:50:44 [timbl] 6:1 in favor of putting it in as a placeholder explaining what work needs to bedone. 
15:50:46 [Roy] I will take action item to fill-out section 5 15:51:03 [timbl] HCI, HTTP internals and WS 15:51:11 [timbl] must be convreed somewher in that lot. 15:51:14 [timbl] -- timbl 15:51:28 [Roy] within two weeks 15:51:40 [timbl] Roy: I volunteer to write the placeholder section 15:52:33 [timbl] Roy will be unavailable for TAG class Aug 30. 15:52:44 [DanC] Zakim, remind us in 15 minutes to resume from break 15:52:44 [Zakim] ok, DanC 15:52:57 [DanC] Zakim, room for 10? 15:52:57 [Zakim] sorry, DanC, not enough capacity right now to add a 10 person conference 15:53:22 [timbl] Interetsing to see whether the Zakim really is full or overvooked. 15:53:26 [timbl] booked 15:56:13 [DanC] only 12 ports in use, per 15:55Z 16:03:30 [Ian] Zakim is available since PPWG not meeting 16:07:44 [Zakim] DanC, you asked to be reminded at this time to resume from break 16:09:40 [Roy] nobody here 16:09:53 [DanC] here=? 16:10:03 [Roy] the new number 16:10:05 [DanC] I'm on 617-258-7910 16:10:11 [Ian] Please let's move to zakmi 16:10:13 [Ian] zakim 16:10:24 [Ian] code 0TAG 16:10:25 [Chris] zakim, dial chris-work 16:10:25 [Zakim] sorry, Chris, I don't know what conference this is 16:10:32 [Ian] zakim, this is TAG 16:10:32 [Zakim] sorry, Ian, I do not see a conference named 'TAG' 16:10:35 [Chris] zakim, list 16:10:35 [Zakim] I see WS_TF()12:00PM active 16:10:36 [Zakim] also scheduled at this time are PP_PPWG()11:30AM, XML_XSLWG()12:00PM, XML_QueryWG()12:00PM, TAG_()10:00AM, QA_QAWG()11:00AM 16:10:40 [Roy] no your not ;-) 16:10:42 [Ian] zakim, this is TF 16:10:43 [Zakim] ok, Ian 16:10:49 [Stuart] Just dailing in 16:10:50 [Ian] zakim, this is TAG_() 16:10:50 [Zakim] sorry, Ian, I do not see a conference named 'TAG_()' 16:10:52 [Zakim] WS_TF()12:00PM has been moved to #ws-desc by Philippe 16:10:53 [Chris] zakim, dial chris-work 16:10:53 [Zakim] sorry, Chris, I don't know what conference this is 16:10:58 [Ian] zakim, this is not TF 16:10:58 [Zakim] sorry, Ian, I do not see a conference named 
'not TF' 16:11:06 [Ian] zakim, this is TAG 16:11:06 [Zakim] sorry, Ian, I do not see a conference named 'TAG' 16:11:13 [Ian] zakim, this is 0TAG 16:11:13 [Zakim] sorry, Ian, I do not see a conference named '0TAG' 16:11:23 [Norm] zakim, this is tag 16:11:23 [Zakim] ok, Norm 16:11:23 [Chris] zakim, this is tag_ 16:11:24 [Zakim] Chris, this was already TAG_()10:00AM 16:11:25 [Zakim] ok, Chris 16:11:25 [Zakim] +??P3 16:11:29 [Chris] zakim, dial chris-work 16:11:29 [Zakim] ok, Chris; the call is being made 16:11:30 [Zakim] +Chris 16:11:34 [Chris] hooray 16:11:40 [Norm] zakim, ??P3 is Roy 16:11:40 [Zakim] +Roy; got it 16:11:43 [Ralph] Ralph has joined #tagmem 16:11:45 [Norm] zakim, who's on the phone? 16:11:45 [Zakim] On the phone I see Norm, Roy, Chris 16:11:52 [Ian] Hi Ralph, zakim seems to be happy now. 16:11:57 [Ian] zakim, what conference is this? 16:11:57 [Zakim] this is TAG_()10:00AM conference code 0824 16:12:05 [Ian] zakim, call Ian-BOS 16:12:05 [Zakim] ok, Ian; the call is being made 16:12:06 [Zakim] +Ian 16:12:13 [Ralph] Ralph has left #tagmem 16:12:15 [Ian] there is echo 16:12:37 [Zakim] -Norm 16:12:48 [Ian] zakim, drop Ian 16:12:48 [Zakim] Ian is being disconnected 16:12:48 [Zakim] -Ian 16:12:51 [Zakim] +Norm 16:12:59 [Ian] zakim, call Ian-BOS 16:12:59 [Zakim] ok, Ian; the call is being made 16:13:00 [Zakim] +Ian 16:13:04 [Ian] zakim, drop Ian 16:13:04 [Zakim] Ian is being disconnected 16:13:04 [Zakim] -Ian 16:13:07 [Ian] zakim, call Ian-BOS 16:13:07 [Zakim] ok, Ian; the call is being made 16:13:08 [Zakim] +Ian 16:13:14 [Chris] zakim, drop chris 16:13:14 [Zakim] Chris is being disconnected 16:13:15 [Zakim] -Chris 16:13:48 [Zakim] + +1.703.502.aaaa 16:13:57 [Chris] zakim, passcode? 
16:13:57 [Zakim] the conference code is 0824, Chris 16:14:07 [Chris] too late 16:14:19 [Zakim] +??P4 16:14:30 [Zakim] +??P6 16:14:40 [Stuart] zakim, ??P4 is me 16:14:40 [Zakim] +Stuart; got it 16:14:50 [Ian] zakim, ??P6 is TimBray 16:14:50 [Zakim] +TimBray; got it 16:14:54 [Ian] zakim, who's here? 16:14:54 [Zakim] On the phone I see Roy, Ian, Norm, +1.703.502.aaaa, Stuart, TimBray 16:14:55 [Zakim] On IRC I see timbl, Roy, DanC, RRSAgent, Stuart, Norm, Zakim, Chris, Ian 16:15:44 [Zakim] +DanC 16:15:55 [Ian] ============== 16:16:02 [Ian] Walkthrough of arch doc 16:16:28 [Ian] 16:16:31 [Ian] [Section 1] 16:17:42 [Ian] [On section 6] 16:17:46 [Ian] IJ: Move these to better home. 16:18:36 [Ian] IJ: This was meant to be factoring out. 16:18:45 [Ian] RF: I think ok to repeat in each section. 16:19:20 [DanC] yes, let's nix section 6. 16:19:36 [Ian] Proposed: Nix 6.1. 16:19:47 [Ian] [RF may or may not reuse this text] 16:19:52 [Ian] Resolved: Nix 6.1 16:19:58 [Ian] Resolved: Nix 6 16:20:28 [Ian] -------------- 16:20:29 [Ian] Abstract 16:20:52 [Ian] DC: "When followed" unclear to me. 16:21:10 [Chris] sdection 1: Interaction. Agents exchange representations via a non-exclusive set of protocols, including HTTP, FTP, and SMTP1 16:21:16 [Ian] TB: See my comments on adding "efficient..." 16:21:17 [Chris] is not actualy adressed 16:21:40 [Ian] DC: I think it means to say "When Web Arch is followed...." 16:22:18 [Ian] RF: I wouldn't say that I'm happy with the abstract. 16:22:32 [Ian] NW: Sounds like DC's comment is editorial. 16:22:45 [Ian] DC: I'll try to wordsmith offline. 16:22:56 [Ian] DC: I think it's worth giving the tripod a few words in the abstract. 16:23:54 [Ian] RF: The abstract should say "we organized" not that "we did it" 16:24:25 [Ian] RF: I read the abstract and it seems wrong; I don't have a better counter-proposal right now. 16:24:50 [Ian] TB: Tripod is mentioned in abstract.... 16:25:19 [Ian] DC: I don't like "this document". Web architecture has three parts. 
16:25:30 [Ian] ============= 16:25:33 [Ian] [Status] 16:26:07 [Ian] TB: Update to say 1-4 in decent shape; 5 absent 16:26:22 [Ian] SW: Also, section numbering is all wrong. 16:26:36 [DanC] pls don't refer by section # alone. 16:27:05 [Ian] -------------- 16:27:10 [Ian] TOC: 16:27:15 [Ian] IJ: Ok highlighting principles there ok? 16:27:16 [Ian] TB: Yes. 16:27:18 [Ian] DC: Don't know. 16:27:20 [Ian] SW: Fine for now. 16:28:01 [Ian] DC: Lost a lot - sentence disappeared. 16:28:04 [Ian] RF: I prefer that approach. 16:28:15 [Ian] RF: I prefer that over shortening the essence. 16:28:30 [Ian] DC: So I'm not happy with it but don't have a replacement. 16:29:11 [Ian] NW: I may comment on TOC at later date. 16:29:18 [Ian] ------------- 16:29:24 [Ian] 1.1 Scenarios. 16:29:33 [Ian] NW: I think that frag id discussion too confusing this soon. 16:29:41 [Ian] NW: What about losing it from the scenario? 16:29:49 [timbl] Zakim, who is here? 16:29:49 [Zakim] On the phone I see Roy, Ian, Norm, +1.703.502.aaaa, Stuart, TimBray, DanC 16:29:51 [Zakim] On IRC I see timbl, Roy, DanC, RRSAgent, Stuart, Norm, Zakim, Chris, Ian 16:30:14 [Zakim] +TimBL 16:30:26 [Ian] TB: Agree to lose the frag id part. 16:30:52 [Ian] RF: Because it's a scenario, just say "found the URI in a travel magazine." I like the fact that he has to transcribe the URI. 16:31:27 [Ian] Proposed: Lost frag id from 1.1 scenario. 16:32:06 [Ian] IJ: I propose picking up the scenario with frag id part in section on frag ids. 16:32:25 [Ian] Proposed: 16:32:32 [Ian] - Lost frag id from scenario in 1.1 16:32:42 [Ian] - Dan finds URI in a printed magazine 16:33:00 [Ian] - IJ to try to move frag id part of scenario to section on frag ids (picking up thread from intro) 16:33:08 [Ian] Resolved: adopt these proposals. 16:33:31 [Ian] NW: Propose TAG ftf meeting in Oaxaca. 
16:34:54 [Stuart] q- 16:35:10 [Ian] IJ: Running with idea, continue scenario, e.g., in section on deep linking (e.g., DanC has to subscribe to magazine to get info) 16:35:22 [Ian] RF: Highlight when you are talking about the scenario. 16:35:47 [Ian] DC: I have heard TBL and RF say that have issues with terminology. 16:35:54 [Ian] (e.g., "resource") 16:35:59 [Ian] DC: Is the current 1.1 ok? 16:36:02 [Ian] RF: Seems fine to me. 16:36:36 [Ian] DC: Fix inconsistency between <b>agents</b> and "resources". 16:36:39 [Ian] TB: Lose the bold. 16:37:06 [Chris] 16:37:06 [Ian] TBL: I don't think that the text in 1 is affected by the TBL/RF issues. 16:37:17 [Ian] TB: Use <b> consistently. 16:37:23 [timbl] use <defn> 16:37:37 [Ian] DC: It's useful for me to have distinctive markup for term definitions. 16:38:10 [DanC] ... and have the glossary point to them 16:38:26 [Ian] IJ: Spec source in xhtml for now. 16:38:43 [Ian] TB, DC: Highlight first instance of terms. 16:38:51 [Ian] (And be consistent) 16:39:03 [Chris] dfn 16:39:07 [Ian] TB: Hyperlink back to first instances. 16:39:37 [Chris] over-linked spec problem 16:39:52 [DanC] I don't need all uses of a term marked up 16:41:14 [timbl] Bets practice: When chosing a data format, it is best to chose one which is defined by a spec designed in the light of an architecture document which has been publiched using a namespace which supports multiple link destinations. 16:41:17 [timbl] ;-) 16:41:24 [Ian] ======================== 16:41:27 [Ian] Section 2 16:42:19 [Ian] 2.1 16:42:23 [Ian] TB: Drop "Some of the principles, practice, etc. may conflict with current practice, and so education and outreach will be required to improve on that practice. " 16:43:13 [Ian] NW: Could be 1.2 16:43:19 [Ian] DC: Prefer under section 1 16:43:38 [Ian] TB, IJ: Don't feel strongly. 16:43:55 [Ian] TB, DC: Keep scenarios up front. 
16:43:55 [Norm] NW: concerned that the scenario device may become too chatty for a normative spec 16:44:05 [Ian] TB: I'd moderately prefer to keep a top-level section. 16:44:11 [Chris] prefer 1.x, don't care over much 16:44:43 [Ian] Remain top-level section: TB, IJ both mildly 16:44:47 [timbl] STRAW Poll: 16:44:50 [Chris] i think we have more important things to worry about 16:45:06 [Ian] Subordinate: SW, NW, 16:45:15 [Ian] Go with majority: CL, TB 16:45:17 [DanC] suboridnate. 16:45:20 [Roy] subordinate 16:45:22 [Stuart] subordinate 16:45:37 [timbl] The subordinates have it 16:45:39 [Ian] Resolved: subordinate 2. under 1, after 1.1 16:45:42 [timbl] ------------- 16:46:00 [Chris] add WSA 16:46:05 [Ian] Resolved: Add WSA to list of arch specs in 2.2 16:46:43 [Ian] TB: Furthermore, do we need the bulleted list at all? There will be other architectural things that bubble to the surface. This document is likely to be long-lived. 16:47:12 [Ian] DC: I'd prefer to have citations to the actual specs : See charmod and the I18N Activity. 16:47:13 [timbl] q+ pointers to specs works for me' 16:47:40 [Ian] TBL: Create a references section and link down to that. 16:47:43 [Stuart] q? 16:47:45 [Ian] Proposal: 16:47:48 [Ian] - Mention Web services 16:47:55 [Ian] - Link to references section in bottom of doc. 16:48:03 [timbl] Other specifications in eth area of acc'y, di, and i18n apply [13-19]. 16:48:06 [Ian] [Consensus around that] 16:48:39 [Ian] ===================== 16:48:47 [Ian] 3. Identification and resources 16:49:20 [Ian] RF: Having just rewritten intro to URI spec; need to sure that section 3. consistent with it. 16:49:23 [DanC] I started reviewing RFC2396bis; my main comment at this point is: wow! lots of work went into the revision! 16:49:26 [Ian] IJ: I read RFC; like it. 16:49:50 [Zakim] +DOrchard 16:49:52 [Ian] RF: I've had positive comments only on the definitions of URIs, resources. 
16:49:55 [Chris] q+ to talk about opacity 16:50:07 [DanC] ack danc 16:50:08 [Zakim] DanC, you wanted to confirm terminology in section 1 and to ask about losing bold 16:50:28 [Ian] 16:50:37 [timbl] rev 2002? ok 16:50:45 [Ian] RF: I propose that we refer to the new document (rev-2002)... 16:51:10 [Ian] Proposed: Refer to new URI spec (draft). 16:51:19 [Ian] TB, NW, RF, IJ: Ok to point to this draft. 16:51:51 [Chris] draft-fielding-uri-rfc2396bis-0x 16:52:16 [Ian] TB: Yes, I think it's ok to point to an internet draft on this one. Tell people it's a draft in progress. 16:52:31 [Ian] CL: Ok, I'm happy to point to existing RFC2396 and say that it's being revised. 16:52:45 [Ian] CL: Point to both. 16:52:47 [Ian] q+ 16:53:08 [Roy] 16:53:08 [Ian] TBL: We can call out an Internet Draft, even if it's not a standard. 16:53:51 [Ian] TB: People are far-better served by RFC2396bis. 16:54:22 [Ian] q? 16:54:35 [Ian] q- 16:55:30 [Ian] q+ 16:55:52 [Roy] Can we please just refer to it as [URI] inside spec, and use references to indicate which one. 16:56:01 [Ian] TBL: Suggest referring to rfc2396bis and in references section, indicate that this is an update.. 16:56:24 [Ian] q? 16:56:30 [Ian] ack Chris 16:56:30 [Zakim] Chris, you wanted to talk about opacity 16:56:31 [DaveO] DaveO has joined #tagmem 16:56:33 [DanC] q+ 16:56:37 [Roy] q+ 16:56:50 [timbl] I suggested that we point to the one whose concepts and vocabulary we use in our doc. 16:57:42 [timbl] note the currently endorsed standard has a number of differences and we refer to and uyse th terminolgy of th ebis draft indicated. 16:57:43 [Ian] IJ: I second referring to [URI] and telling the story in the refs section. 16:57:51 [Roy] ack Ian 16:57:52 [Chris] I am happy to point to it, and to say why; I object to saying that it superceeds the RFC because it does not 16:58:02 [DaveO] q+ 16:58:04 [Ian] IJ: Seems ok to have docs and their refs evolve in tandem, even if drafts now. 
16:58:07 [Chris] the maturity axis and the we-like-it axis are orthogonal 16:58:09 [Ian] ack Roy 16:58:13 [Ian] ack DaveO 16:58:31 [Chris] q+ to talk about opacity 16:58:58 [Ian] DO: When do specs start referring to draft specifications. What is relation with, e.g., IRI spec references from other W3C docs? 16:59:25 [Ian] RF: As an IETF writer, as long as we are not defining a technology standard (that would depend on others), we can refer to whatever we want. 16:59:45 [Ian] DO: If Arch Doc refers to 2396bis, won't others do so as well? 16:59:59 [Ian] DO: I agree that there is a diff between arch doc and a protocol/format spec. 17:00:00 [Chris] ok so we can say that we should refer to IRI on the same basic, because its better .... 17:00:12 [DanC] i thought I heard consensus: PROPOSED: "defined by [URI]" where [URI] points to and explains its relationship to RFC2396 17:00:18 [Ian] RF: We do set precedent in some senses, but we shouldn't refer to JUST RFC2396. 17:00:25 [timbl] q+ re sequnec fo RFCs 17:00:26 [Ian] RF: The technology is defined by a sequence of specifications. 17:00:35 [timbl] q+ to re sequnec fo RFCs 17:00:58 [Ian] RF: My suggestion is to say in web arch that URIs are defined by "The URI specification". In Refs refer to both RFC2396 and the current effort to revise it. 17:01:18 [Ian] q? 17:01:25 [Ian] ack Chris 17:01:25 [Zakim] Chris, you wanted to talk about opacity 17:01:56 [Chris] 17:03:11 [Ian] q+ 17:03:37 [Ian] CL: Say in section 3 (w.r.t the scenario) that the URI COULD HAVE BEEN something else. Tie opacity into scenario. 17:04:51 [Ian] ack timbl 17:04:51 [Zakim] timbl, you wanted to re sequnec fo RFCs 17:05:01 [timbl] 17:05:16 [DanC] we can move on 17:05:25 [Ian] Resolved: "Create [URI] and deal with this in refs" 17:05:35 [Ian] Proposed: 17:05:39 [Chris] say could have been 17:05:40 [Ian] - Tie in opacity to scenario 17:05:52 [Ian] - Drop a reference to Stuart's finding. 
17:05:54 [Chris] or even, confusingly, example.com/manchester 17:06:38 [Ian] q+ TBray 17:06:41 [Chris] q+ timbray 17:06:44 [Ian] ack Ian 17:06:47 [Norm] ack tbrya 17:06:49 [Norm] ack tbray 17:06:52 [Norm] ack timbray 17:07:22 [Ian] TB: Delete "The term "resource" encompasses all those things that populate the Web: documents, services ("the weather forecast for Oaxaca"), people, organizations, physical objects, and concepts. " 17:07:39 [Ian] TB: (from this paragraph) 17:08:43 [Ian] IJ: I propose moving opacity to another para; keep first para about resources. 17:08:45 [Ian] q? 17:08:55 [Ian] q+ 17:09:12 [Ian] TB: Third paragraph - Delete technical usage note. 17:09:36 [Ian] TB: Promote Editor's note to regular text. 17:09:56 [Ian] TB: "TAG believes that internationization of URIs is architecturally important." 17:10:25 [Ian] DC: "TAG considers". Say instead "IRIs are a valuable mechanism." 17:10:28 [Ian] TB: 17:10:36 [Ian] - IRIs are valuable work 17:10:41 [Ian] - We think that are architecturally important. 17:10:57 [Ian] RF: Create a "future directions" section for each leg of the tripod. 17:12:01 [Ian] TB: "I18N of ids is important...here's where the work'd being done." 17:12:09 [timbl] Propose: s/which the TAG considers to be a valuable mechanism /which are proposed as a mevhamism/ 17:12:22 [timbl] Propose: s/which the TAG considers to be a valuable mechanism /which are proposed as a mechanism/ 17:12:25 [Ian] DC: What's valuable is the work on internationalization of ids. 17:13:11 [Ian] Proposal: (1) Internationalization of URIs is architecturally important (2) We appreciate the work going on over here. 17:13:35 [Ian] DC: I'd prefer saying "Future directions: IRIs are an open issue." 17:14:09 [timbl] q+ 17:14:17 [Ian] CL: On previous grounds, why can't we point to IRI draft? 17:14:27 [Ian] RF: I think DC's objection is to "it IS a good mechanism" 17:14:36 [Ian] DC: I note that we have NOT made a decision on IRIs. 
17:14:40 [timbl] q+ to ask whether we are discussing the IRI issue now or not. 17:14:53 [Ian] q- 17:15:12 [Ian] DC: I'm not anywhere near closing IRIEverywhere-27 17:15:39 [Norm] ack timbl 17:15:39 [Zakim] timbl, you wanted to ask whether we are discussing the IRI issue now or not. 17:15:47 [Chris] well, I am fairly close so please educate me 17:16:18 [Ian] TBL: I've written down some pieces about IRIs and ties to other specs. Quite complex. I think that a number of people on the call are as yet unsure about the implications. 17:16:21 [DanC] what is it that we've almost agreed to, chris? 17:17:05 [Ian] Proposal: (1) TAG considers interationalization of URIs is architecturally important (2) We appreciate the work going on related to IRIs. 17:18:23 [Chris] i like that proposal 17:18:35 [Ian] TB: I think that several people are discovering that when you try to write down what we actually think, consensus is further off than we think. 17:19:05 [Ian] IJ: TBL, how close are you to sending your proposal to www-tag? 17:19:31 [Ian] TBL: Not sufficiently complete. The gist of the proposal involves defining the types of comparisons you want to do. 17:20:10 [Ian] [More discussion of the proposal...presumably to illustrate to CL some of TBL or DC's issues] 17:20:25 [Ian] [...Canonicalization..] 17:20:46 [Ian] TBL: TAG has not decided whether it wants to recommend that people use canonicalization. 17:22:41 [Ian] Re-Proposal: (1) TAG considers interationalization of URIs is architecturally important (2) We appreciate the work going on related to IRIs. 17:23:57 [Ian] DC: One of the most substantive discussion on IRIEverywhere occurred when I was not present. I didn't see "resolved" in minutes; so that's where I'm coming from. 17:24:03 [timbl] (1) The tag recognises that the identitification relations implied by the URI and IRI specs are important and must be understood clearly before the architecture is defined in this area. 
17:24:10 [Ian] DC: Whatever we agree to on this issue, we need to be able to provide test cases. 17:24:18 [timbl] ^counter proposeal by timbl 17:24:41 [timbl] q+ 17:24:54 [Ian] ack timbl 17:25:11 [Ian] [Still discussion details of IRI issues] 17:25:22 [Chris] q+ to point out errors 17:25:37 [Ian] TBL: There is an implication in IRI specs that people will do comparisons. This means [scribe thinks he heard] that some string comparisons won't work. 17:25:47 [Chris] it explicitly says you cant convert back and expect to get what you started with 17:25:52 [DanC] yes, I did some fact-checking about timbl's claims about the IRI spec, and I couldn't confirm. 17:26:08 [Ian] TBL: The IRI spec says loudly that URIs and IRIs are both valuable (and both can appear in an href) . That blows up strcmp 17:26:20 [Ian] q? 17:26:27 [Chris] it explicitly says to convert as late as possible and only to squeeze through hostile transports 17:27:21 [Chris] straw poll please 17:27:32 [Ian] Re-Proposal: (1) TAG considers interationalization of URIs is architecturally important (2) We appreciate the work going on related to IRIs. 17:27:39 [DanC] "important" meaning: we accepted it as an issue. 17:27:52 [Ian] IJ: We could jus tsay "we're following what's going on on IRIs" 17:28:05 [Ian] Re-Proposal: (1) TAG considers interationalization of URIs to be architecturally important (2) We are following IRI work. 17:28:29 [Chris] yes 17:28:29 [Norm] NW: Yes 17:28:48 [timbl] tim: no 17:28:52 [Ian] TB by proxy: Yes 17:28:56 [Stuart] concur 17:28:58 [DaveO] do: yes 17:29:43 [Ian] RF: My problem is "I18N of URIs" prefer "Localization of identifiers". 17:29:49 [Ian] TB: I would prefer "I18N of identifiers" 17:30:14 [Ian] RF: We are talking about localization here; they are only usable within a certain context. 17:30:45 [timbl] no outcome. 
17:30:49 [Chris] localization is not the same as internationalzation 17:31:06 [Ian] RF: I think we should have a section on future directions that talks about the IRI process and that's it. 17:31:29 [Ian] CL: Why would we identify that as a future direction that we are interested in? 17:31:41 [Ian] RF: We are interested in it; we need to find a solution to it. It's future work in the space of identifiers. 17:31:54 [Ian] ack Chris 17:31:54 [Zakim] Chris, you wanted to point out errors 17:32:45 [Ian] DO: We haven't tackled, e.g., use of identifiers in Web Services. 17:32:55 [Ian] DO: CL's point is "what's the problem we are trying to solve" 17:32:58 [Ian] ack DanC 17:33:10 [Ian] DC: The fact that we accepted the issue is evidence that we think this is important. 17:33:28 [Chris] editor of what? 17:34:50 [Ian] IJ: I would be comfortable saying "I18N of ids is important; TAG is following IRI work." Not sure if would be part of "future directions" section. 17:35:24 [Ian] [Moving on] 17:35:59 [Ian] NW: Throughout the document, we use widely varying domain names. Readers might think there is significance to the domain names. Let's pick on. 17:36:24 [Chris] use example.com consistently unless the domain name is actually significant 17:36:26 [Ian] q+ 17:36:31 [Ian] ack DanC 17:36:31 [Zakim] DanC, you wanted to suggest weather.example 17:36:41 [Ian] DC: Use weather.example 17:36:46 [Norm] ack danc 17:36:49 [Norm] ack ian 17:36:53 [Chris] weather.exampler.org or .com 17:37:07 [Ian] 1) Don't use real domains 17:37:17 [Ian] TB: example.com better than example.weather. 17:37:27 [Ian] TB: That looks weird to people. 17:37:40 [Ian] s/example.weather/weather.example/ 17:37:52 [DanC] ".example" is recommended for use in documentation or as examples. -- 17:38:06 [Ian] q+ 17:38:16 [DaveO] I support 17:38:16 [Norm] ack ian 17:38:27 [Ian] DC: I can live with weather.example.com 17:38:31 [Ian] Proposed: 17:38:42 [Ian] - Remove "real domain" names (not example.*) 17:38:45 [DanC] e.g. 
yahoo 17:38:47 [DanC] yes. 17:38:58 [Norm] I actually propose "example.com", not "example.*". 17:39:09 [Ian] - Reduce overall number of sample domain names by hearkening back to scenario. 17:39:25 [Ian] ================ 17:39:32 [Ian] 3.1. Comparing identifiers 17:39:40 [Ian] TB: Take pointer to obsolete draft from para 4 17:40:32 [Ian] TB: Section 6 of RFC2396bis improves on comparison text. I propose to strike 4th paragraph and say at end of first paragraph that discussed in [URI]. 17:41:44 [Ian] Delete "Issue..." at end of 3.1 17:42:55 [Ian] IJ: DO talked to me about URIs in Budapest about, e.g., 1-1 relationship between URI/Resource. 17:43:33 [Ian] DO: I liked our Irvine discussion. 17:43:42 [Ian] DC: We have a hello world example in a scenario. We don't have a picture yet. 17:44:19 [Ian] SW: I could do a draft of an illustration. 17:44:28 [Ian] DC: Two world views on one-to-oneness... 17:44:30 [DaveO] More than liked our irvine, found it useful though lots of it have faded from my memory, and my knowledge failed the test of trying to explain it. 17:44:43 [Ian] RF: From discussion on URI mailing list, I'd be happy to just drop it. 17:44:52 [DanC] 3.1. Comparing identifiers 17:44:59 [DanC] ^that's what we're talking about, yes? 17:46:26 [Ian] DO: I think that our discussion in Irvine should be captured here (two models) 17:47:12 [Ian] DO: In the WSDL WG, they are looking at adding an attribute to services (proposed name "resource") to annotate a service description with a resource identifier for the thing that is behind the service. 17:47:28 [Ian] DO: This would allow WSDL descriptions in multiple files but all related to the target resource. 17:47:30 [timbl] "behind"? 17:47:50 [Ian] DO: E.g., "David's printer". Web service A and Web service B (offered by company B) both relate to David's printer. 17:48:01 [Ian] DO: The services are related because they both operate on David's printer. 17:48:44 [Ian] q+ 17:48:57 [timbl] device? 
17:50:24 [Ian] TBL: Not sure to me that for a Web service generally, there is only one attribute (e.g., behind). I think this is an application level vocabularly. 17:51:39 [timbl] I would describe issue 14 as a semantic web issue 17:51:39 [Ian] [Seems to be about httpRange-14] 17:52:15 [Ian] DO: This about httpRange-14, and fleshing out the two world views in the arch doc. 17:52:16 [Norm] q? 17:52:38 [Ian] DC: I think that we can agree on the first picture, but that there are N views that conflict when you look at the details. 17:53:49 [Ian] DO: Question we might want to answer - do two different URIs identify the same resource? 17:54:14 [Ian] DO: i:identifiers, t:identifiers. 17:54:49 [Ian] TBL: I think everyone agrees that you can have multiple ids for the same thing. 17:55:24 [Chris] agree, lets carry on next monday 17:55:40 [Ian] q+ 17:55:57 [Ian] [Seems to be general agreement that today's discussion beneficial] 17:56:29 [timbl] q+ 17:56:36 [Ian] TB: I think that section 3 currently hits good 80/20 point. Not sure that we are missing some super important points before going forward. 17:56:42 [Ian] DC: Consider my tree shaken. :) 17:56:45 [DanC] 80/20: 2nded. 17:56:48 [Ian] s/DC/DO 17:56:57 [Ian] q 17:57:05 [Ian] ---------------- 17:57:11 [Ian] Next session scheduled for 19 June. 17:57:20 [Ian] SW: A number of people seem at risk on 19 June. 17:57:20 [Roy] on plane 19 June 17:57:26 [Chris] I am still ok for 19th 17:57:34 [Ian] SW, RF, TB, DC at risk. 17:58:55 [Ian] Proposed: 6 June. 17:59:03 [Ian] TBL: I'm out of town 6 June. 17:59:13 [Ian] Proposed: 9 June. 17:59:20 [Ian] NW: Not sure I can arrange 4 hours. 17:59:26 [Ian] TBL: 9 June is pentecost 18:00:11 [Chris] proposal is 9:00-11:30 on monday? 18:00:17 [Ian] Proposal: 3pm-5:30pm ET 18:00:31 [Roy] okay 18:00:36 [Ian] Yes: TB, CL, SW, DC, NW, IJ, TBL, RF 18:00:41 [Ian] Yes: DO 18:00:49 [Chris] resolved:! 
18:00:55 [Ian] Resolved: Cancel 19 June, meet as proposed 9 June 18:01:34 [Ian] q- 18:02:22 [Ian] Summarizing work for next week or so: 18:02:27 [Ian] 1) RF rewrite section 5 18:02:35 [Ian] 2) NW to rewrite section on hyperlinking 18:02:36 [Chris] I will get Separation of semantic and presentational markup, to the extent possible, is architecturally sound 18:02:41 [Ian] 3) TB rewriting 4 18:02:44 [Chris] ready for next wek 18:02:49 [Ian] 4) CL polishing up section on content/presentation 18:02:59 [DanC] I'm working on HTML-in-RDF-nn 18:02:59 [Ian] 5) DO: Update issue 37 18:03:10 [Ian] 6) SW: Rewrites on opacity finding. 18:03:16 [Zakim] -Roy 18:03:18 [Ian] 7) IJ start working on rewrite of arch doc 18:03:23 [Ian] ADJOURNED 18:03:25 [Zakim] -DanC 18:03:26 [Ian] RRSAgent, stop
http://www.w3.org/2003/06/02-tagmem-irc.html
Headers, and their purpose

As programs grow larger (and make use of more files), it becomes increasingly tedious to have to forward declare every function you want to use that is defined in a different file. Wouldn’t it be nice if you could put all your forward declarations in one place and then import them when you need them?

C++ code files (with a .cpp extension) are not the only files commonly seen in C++ programs. The other type of file is called a header file. Header files usually have a .h extension, but you will occasionally see them with a .hpp extension or no extension at all. The primary purpose of a header file is to propagate declarations to code files.

Key insight

Header files allow us to put declarations in one location and then import them wherever we need them. This can save a lot of typing in multi-file programs.

Using standard library header files

Consider the following program:

This program prints “Hello, world!” to the console using std::cout. However, this program never provided a definition or declaration for std::cout, so how does the compiler know what std::cout is?

The answer is that std::cout has been forward declared in the “iostream” header file. When we #include <iostream>, we’re requesting that the preprocessor copy all of the content (including forward declarations for std::cout) from the file named “iostream” into the file doing the #include.

When you #include a file, the content of the included file is inserted at the point of inclusion. This provides a useful way to pull in declarations from another file.

Consider what would happen if the iostream header did not exist. Wherever you used std::cout, you would have to manually type or copy in all of the declarations related to std::cout into the top of each file that used std::cout! This would require a lot of knowledge about how std::cout was implemented, and would be a ton of work.
Even worse, if a function prototype changed, we’d have to go manually update all of the forward declarations. It’s much easier to just #include iostream!

When it comes to functions and variables, it’s worth keeping in mind that header files typically only contain function and variable declarations, not function and variable definitions (otherwise a violation of the one definition rule could result). std::cout is forward declared in the iostream header, but defined as part of the C++ standard library, which is automatically linked into your program during the linker phase.

Best practice

Header files should generally not contain function and variable definitions, so as not to violate the one definition rule. An exception is made for symbolic constants (which we cover in lesson 4.14 -- Const, constexpr, and symbolic constants).

In the earlier multi-file example (main.cpp and add.cpp), we used a forward declaration so that the compiler would know what identifier add was when compiling main.cpp. As previously mentioned, manually adding forward declarations for every function you want to use that lives in another file can get tedious quickly. Let’s write a header file to relieve us of this burden.

Writing a header file is surprisingly easy, as header files only consist of two parts: a header guard (discussed in the next lesson) and the actual content of the header file, which is the forward declarations for all of the identifiers we want other files to be able to see.

Adding a header file to a project works analogously to adding a source file (covered in lesson 2.7 -- Programs with multiple code files). If using an IDE, go through the same steps and choose “Header” instead of “Source” when asked. If using the command line, just create a new file in your favorite editor. Use a .h suffix when naming your header files.

Header files are often paired with code files, with the header file providing forward declarations for the corresponding code file. Since our header file will contain a forward declaration for functions defined in add.cpp, we’ll call our new header file add.h. If a header file is paired with a code file (e.g.
add.h with add.cpp), they should both have the same base name (add). Our completed header file, add.h, contains just the forward declaration for add. In order to use this header file in main.cpp, we have to #include it (using quotes, not angle brackets).

When the preprocessor processes the #include "add.h" line, it copies the contents of add.h into the current file at that point. Because our add.h contains a forward declaration for function add, that forward declaration will be copied into main.cpp. The end result is a program that is functionally the same as the one where we manually added the forward declaration at the top of main.cpp. Consequently, our program will compile and link correctly.

Including a header in the corresponding source file

You'll see that most source files include their corresponding header, even if they don't need it. Why is that?

Including the header in the source file increases forward compatibility. It's very likely that in the future, you'll add more functions or modify existing ones in a way that they need to know about the existence of each other.

Once we get more into the standard library, you'll be including many library headers. If you need an include in a header, you probably need it for a function declaration. This means that you'll also need the same include in the source file, which would lead to you having a copy of your header's includes in your source file. By including your header in your source file, the source file has access to everything the header had access to.

In library development, including your header in your source file can even help to catch errors early.

When writing a source file, include the corresponding header (if one exists), even if you don't need it yet.

Troubleshooting

If you get a compiler error indicating that add.h isn't found, make sure the file is really named add.h.
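The contents of add.h and its companion files are elided in this copy of the lesson; they were presumably along these lines (a sketch — the exact lesson code isn't shown here):

```cpp
// add.h -- the header: a forward declaration only, no definition.
int add(int x, int y);

// add.cpp -- the paired code file would #include "add.h" and supply the definition:
int add(int x, int y)
{
    return x + y;
}

// main.cpp -- would #include "add.h", making the declaration visible, then call:
//     std::cout << "The sum of 3 and 4 is " << add(3, 4) << '\n';
```

The declaration and definition are shown in one block here for readability; in the lesson's layout they live in separate files, with the header pasted into each including file by the preprocessor.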
Depending on how you created and named it, it's possible the file could have been named something like add (no extension), add.h.txt, or add.hpp. Also make sure it's sitting in the same directory as the rest of your code files.

If you get a linker error about function add not being defined, make sure you've added add.cpp to your project so the definition for function add can be linked into the program.

Angled brackets vs double quotes

You're probably curious why we use angled brackets for iostream, and double quotes for add.h. It's possible that a header file with the same filename might exist in multiple directories. Our use of angled brackets vs double quotes helps give the preprocessor a clue as to where it should look for header files.

When we use angled brackets, we're telling the preprocessor that this is a header file we didn't write ourselves. It will search for the header only in the directories specified by the include directories. The include directories are configured as part of your project/IDE settings or compiler settings, and typically default to the directories containing the header files that come with your compiler and/or OS. The preprocessor will not search for the header file in your project's source code directory.

When we use double quotes, we're telling the preprocessor that this is a header file that we wrote. It will first search for the header file in the current directory. If it can't find a matching header there, it will then search the include directories.

Rule: Use double quotes to include header files that you've written or are expected to be found in the current directory. Use angled brackets to include headers that come with your compiler, OS, or third-party libraries you've installed elsewhere on your system.

Why doesn't iostream have a .h extension?

Another commonly asked question is "why doesn't iostream (or any of the other standard library header files) have a .h extension?".
The answer is that iostream.h is a different header file than iostream! To explain requires a short history lesson.

When C++ was first created, all of the files in the standard runtime library ended in a .h suffix. Life was consistent, and it was good. The original versions of cout and cin were declared in iostream.h. When the language was standardized by the ANSI committee, they decided to move all of the functionality in the standard library into the std namespace to help avoid naming conflicts with user-defined identifiers. However, this presented a problem: if they moved all the functionality into the std namespace, none of the old programs (that included iostream.h) would work anymore!

To work around this issue, a new set of header files lacking the .h extension was introduced. These new headers declare their names inside the std namespace, so old programs that include iostream.h keep working while new programs can #include <iostream>.

In addition, many of the libraries inherited from C that are still useful in C++ were given a c prefix (e.g. stdlib.h became cstdlib). The functionality from these libraries was also moved into the std namespace to help avoid naming collisions.

When including a header file from the standard library, use the version without the .h extension if it exists. User-defined headers should still use a .h extension.

Including header files from other directories

Another common question involves how to include header files from other directories. One (bad) way to do this is to include a relative path to the header file as part of the #include line. While this will compile (assuming the files exist in those relative directories), the downside of this approach is that it requires you to reflect your directory structure in your code. If you ever update your directory structure, your code won't work anymore. A better method is to tell your compiler or IDE that you have a bunch of header files in some other location, so that it will look there when it can't find them in the current directory.

For Visual Studio users: Right click on your project in the Solution Explorer, choose Properties, then the VC++ Directories tab. From there, you will see a line called Include Directories. Add the directories you'd like the compiler to search for additional headers there.
For Code::Blocks users: In Code::Blocks, go to the Project menu and select Build Options, then the Search directories tab. Add the directories you'd like the compiler to search for additional headers there.

For GCC/G++ users: Using g++, you can use the -I option to specify an alternate include directory.

The nice thing about this approach is that if you ever change your directory structure, you only have to change a single compiler or IDE setting instead of every code file.

Headers may include other headers

It's common that a header file will need a declaration or definition that lives in a different header file. Because of this, header files will often #include other header files. When your code file #includes the first header file, you'll also get any other header files that the first header file includes (and any header files those include, and so on). These additional header files are sometimes called "transitive includes", as they're included implicitly rather than explicitly.

The content of these transitive includes is available for use in your code file. However, you should not rely on the content of headers that are included transitively. The implementation of header files may change over time, or be different across different systems. If that happens, your code may only compile on certain systems, or may compile now but not in the future. This is easily avoided by explicitly including all of the header files the content of your code file requires.

Each file should explicitly #include all of the header files it needs to compile. Do not rely on headers included transitively from other headers. Unfortunately, there is no easy way to detect when your code file is accidentally relying on content of a header file that has been included by another header file.

Q: I didn't include <someheader.h> and my program worked anyway! Why? This is one of the most commonly asked questions on this site.
The answer is, it's likely working because you included some other header (e.g. <iostream>) which itself included <someheader.h>. Although your program will compile, per the best practice above, you should not rely on this. What compiles for you might not compile on a friend's machine.

The #include order of header files

If your header files are written properly and #include everything they need, the order of inclusion shouldn't matter. However, including your header files in a certain order can help surface mistakes where your header files don't include everything they need. Order your #includes as follows: your own user-defined headers first, then 3rd party library headers, then standard library headers, with the headers in each section sorted alphabetically. That way, if one of your user-defined headers is missing an #include for a 3rd party or standard library header, it's more likely to cause a compile error so you can fix it.

Header file best practices

Here are a few more recommendations for creating and using header files.

[Reader comments follow.]

After copying and trying to compile the second example about writing my own header files, it says: _main already defined in Main.obj. I know where Main.obj is, but I'm not sure if it's best to rename my project, rename Main.obj, or just delete Main.obj. Also not really sure what objects are yet. And why should I name my file with .h, or what is .h? :)

Using a .h extension on your header files is common convention. If you do otherwise, it will probably work, but it would be confusing to other people (or maybe your future self).

I don't understand why I get a linker error every time I compile the program. I type #include then my file, but still get a linker error. Can someone explain please? #include "Cal.cpp" (in main.cpp). Then I type in some code and execute it: I get a linker error.

Would the <string> library count as a "transitive include" of the <iostream> library?
cppreference lists the includes of <iostream>. If you want to use `std::string`, include <string>.

    #include <iostream>
    #include "add.h"
    #include "sub.h"
    #include "mul.h"
    #include "div.h"

    int getValueFromUser()
    {
        int num{};
        std::cout << "Enter value: ";
        std::cin >> num;
        return num;
    }

    char getOperator()
    {
        char c{};
        std::cout << "Enter an operator: ";
        std::cin >> c;
        return c;
    }

    int main()
    {
        int x{ getValueFromUser() };  // first value from user
        int y{ getValueFromUser() };  // second value from user
        char z{ getOperator() };

        if (z == '+')
        {
            std::cout << "the sum of the two values is: " << add(x, y) << '\n';
        }
        else if (z == '-')
        {
            std::cout << "the difference of the two values is: " << sub(x, y) << '\n';
        }
        else if (z == '*')
        {
            std::cout << "the product of the two values is: " << mul(x, y) << '\n';
        }
        else if (z == '/')
        {
            std::cout << "the quotient of the two values is: " << divide(x, y) << '\n';
        }
        else
        {
            std::cout << "You entered an invalid operator!!" << '\n';
        }
    }
    // Thanks for the tutorial -- this code is an easy version of a calculator.

Thank you for posting this calculator; it was good practice to re-create it without looking at it or copy/pasting!

Suggestions (I'm a total beginner, so maybe my suggestions are unrealistic at this stage):
1. At the "else", the program simply ends if someone enters an invalid operator. It would be interesting to find a way to send the user back to getOperator and make the program run again.
2. At the end of the program, there could be a std::cout/cin asking the user if they want to perform another operation (yes/no).

Those are just suggestions; I'm sure you are better at coding than me! Good luck in your learning :)

You could have just put all of them in one file called calculate.h

I have a very simple question. Maybe I haven't understood the concept correctly. But if cin and cout are forward declared in iostream, then where are the actual definitions?

In the compiled standard library that's placed somewhere in your system.
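Regarding the commenter's first suggestion above (re-prompting instead of ending on an invalid operator), one hedged sketch is to loop inside getOperator until the input is supported. The validation helper here is invented for illustration; the function names follow the calculator code:

```cpp
#include <iostream>

// Invented helper: true only for the four operators the calculator handles.
bool isSupportedOperator(char op)
{
    return op == '+' || op == '-' || op == '*' || op == '/';
}

// Re-prompts until the user enters a supported operator,
// instead of falling through to the "invalid operator" branch.
char getOperator()
{
    char op{};
    do
    {
        std::cout << "Enter an operator (+ - * /): ";
        std::cin >> op;
    } while (!isSupportedOperator(op));
    return op;
}
```

The second suggestion (asking "another operation? yes/no") could be handled the same way, by wrapping main's body in a similar do/while loop.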
What do you mean by 'compiled'? Is it a .cpp file like in the examples? Thanks.

It's the file a compiler outputs when compiling a library. On Linux, either a .so or .a file. These files are not human-readable.

Right now, I have 2 separate projects:
(too lazy to type it all)...\Documents\Winter_Break_2020\C++\ThisProjectDoesNOTHAVEADD
(too lazy to type it all)...\Documents\Winter_Break_2020\C++\ThisProjectHasTheAdd

In the first project, I have made a header file that has the line int add(int x, int y);. In the first project, "ThisProjectDoesNOTHAVEADD", I have made a program main that calls add. This project also has add.cpp that contains the definition. Right now, it should NOT run, since I commented out the forward declaration. However, I did the whole Properties -> VC++ Directories -> Include Directories -> Edit and added a new line with (...)\Documents\Winter_Break_2020\C++\ThisProjectHasTheAdd\ThisProjectHasTheAdd. However, the project "ThisProjectDoesNOTHAVEADD" is still broken. It won't pick up the header file from the other project. Am I doing something wrong? Any guidance would be much appreciated! ^-^

Hey everybody, BIG HEADS UP! In Visual Studio 2019, the method for adding an item is a little different than the examples provided in this book. By this point, I was getting compiler errors using the "Windows Desktop Wizard" preset for the project. Also, the default location my projects were being saved to, "C:\Users\User\source\repos", was creating compiler errors because my computer has weird permissions. This was demotivating until I realized it was not my fault and found the solution. At this point, I began saving my projects in "My Documents". I also began starting all of my projects with the "Empty Project" option. All of these file-related compiling issues went away after doing this. Once you're in the "Empty Project", go to Project > Add item > Create .cpp or .h file.
It will add the files to the correct location and you won't have any weird file-related compiling issues. Hope this helps some people working with Visual Studio 2019! :)

[Translated from Spanish:] Hi, how's it going? Look, I'm new to programming and everything is very unfamiliar to me when I'm learning something... I have a problem similar to yours, except I use Visual Studio Code and I don't know how to add items to my project. For example, I followed along with this tutorial but couldn't manage to add header files to my projects, and I wanted to know if you're comfortable with VS Code so you could give me a hand, since I keep moving forward but there's a very important part — this one — that I'm not managing to understand!! Thank you very much, greetings!

- Every header you write should compile on its own (it should #include every dependency it needs)

If I understand this correctly, this means that if main.cpp includes add.h, and add.h includes subtract.h, then I must also include subtract.h in main.cpp?

main.cpp: #include "add.h", #include "subtract.h"
add.h: #include "subtract.h"
subtract.h

If this is correct, my questions are:
1. Would this violate the "only #include what you need (don't include everything just because you can)" rule?
2. Wouldn't this bloat the size of our file, because we added the contents of subtract.h to both main.cpp and add.h?

Only include "subtract.h" in "main.cpp" if you need it.

So if I need subtract.h in main.cpp, and add.cpp uses subtract.h, should I add #include "subtract.h" in main.cpp, or is adding #include "add.h" in main.cpp enough, since it includes subtract.h?

If "add.cpp" needs "subtract.h", then "add.cpp" has to include "subtract.h". Source files don't know about each other, so includes in one source file don't affect another source file. What "add.cpp" includes is irrelevant when you decide what "main.cpp" needs to include. "main.cpp" only has to include what "main.cpp" needs.
If "add.h" needs "subtract.h" (I doubt it does), then "add.h" has to include "subtract.h". "main.cpp" only includes "subtract.h" if "main.cpp" itself needs something from "subtract.h". Each file should explicitly #include all of the header files it needs to compile, so yes.

Under best practices you mention "Every header you write should compile on its own". I'm confused about this statement, as I thought that only the .cpp files were compiled, not the .h files. Can you clarify?

Headers get compiled as part of every source file that includes them. What this bullet point means is that if you have a source file which includes your header and is otherwise empty, the source file should compile.

So, for testing, can we take an empty main.cpp file, with the tested .h file included and the .cpp in the project? If the compile passes, the files are OK? io.cpp (tested file), io.h (tested header).

Yes, you can test it like that. You're not testing "io.cpp" — "io.cpp" gets compiled independently of "main.cpp" — you're only testing "io.h". It doesn't matter which source file you use to test the header. If you already have something in "main.cpp" and don't want to remove it, you can simply create a new .cpp file to include the header.

Never mind, I think my question is covered in the next section (header guards), but I can't find a way to delete this comment.
https://www.learncpp.com/cpp-tutorial/header-files/
address book: A collection of Address Book objects, each of which are contained in any number of address lists.
address list: A collection of distinct Address Book objects.
attachments table: A Table object whose rows represent the Attachment objects that are attached to a Message object.
base64 encoding: A binary-to-text encoding scheme whereby an arbitrary sequence of bytes is converted to a sequence of printable ASCII characters, as described in [RFC4648].
blind carbon copy (Bcc) recipient: An addressee on a Message object that is not visible to recipients of the Message object.
body part: A part of an Internet message, as described in [RFC2045].
calendar: A date range that shows availability, meetings, and appointments for one or more users or resources. See also Calendar object.
carbon copy (Cc) recipient: An address on a Message object that is visible to recipients of the Message object but is not necessarily expected to take any action.
character set: The range of characters used to represent textual data within a MIME body part, as described in [RFC2046].
contact: A person, company, or other entity that is stored in a directory and is associated with one or more unique identifiers and attributes, such as an Internet message address or login name.
contact attachment: An attached message item that has a message type of "IPM.Contact" and adheres to the definition of a Contact object.
Contact object: A Message object that contains properties pertaining to a contact.
delivery status notification (DSN): A message that reports the result of an attempt to deliver a message to one or more recipients, as described in [RFC3464].
display name: A text string that is used to identify a principal or other object in the user interface. Also referred to as title.
distinguished name (DN): A name that uniquely identifies an object by using the relative distinguished name (RDN) for the object, and the names of container objects and domains that contain the object.
The distinguished name (DN) identifies the object and its location in a tree.
Embedded Message object: A Message object that is stored as an Attachment object within another Message object.
encapsulation: A process of encoding one document in another document in a way that allows the first document to be re-created in a form that is nearly identical to its original form.
EntryID: A sequence of bytes that is used to identify and access an object.
flags: A set of values used to configure or report options or settings.
header: A name-value pair that supplies structured data in an Internet email message or MIME entity.
Hypertext Markup Language (HTML): An application of the Standard Generalized Markup Language (SGML) that uses tags to mark elements in a document, as described in [HTML].
Internet Mail Connector Encapsulated Address (IMCEA): A means of encapsulating an email address that is not compliant with [RFC2821] within an email address that is compliant with [RFC2821].
Internet Message Access Protocol - Version 4 (IMAP4): A protocol that is used for accessing email and news items from mail servers, as described in [RFC3501].
language code identifier (LCID): A 32-bit number that identifies the user interface human language dialect or variation that is supported by an application or a client computer.
locale: A collection of rules and data that are specific to a language and a geographical area. A locale can include information about sorting rules, date and time formatting, numeric and monetary conventions, and character classification.
Mail User Agent (MUA): A client application that is used to compose and read email messages.
mailbox: A message store that contains email, calendar items, and other Message objects for a single recipient.
message body: (1) The content within an HTTP message, as described in [RFC2616] section 4.3. (2) The main message text of an email message.
A few properties of a Message object represent its message body, with one property containing the text itself and others defining its code page and its relationship to alternative body formats.
Messaging Application Programming Interface (MAPI): A messaging architecture that enables multiple applications to interact with multiple messaging systems across a variety of hardware platforms.
metafile: A file that stores an image as graphical objects, such as lines, circles, and polygons, instead of pixels. A metafile preserves an image more accurately than pixels when an image is resized.
MIME analysis: A process that converts data from an Internet wire protocol to a format that is suitable for storage by a server or a client.
MIME body: The content of a MIME entity, which follows the header of the MIME entity to which they both belong.
MIME content-type: A content type that is as described in [RFC2045], [RFC2046], and [RFC2047].
MIME entity: An entity that is as described in [RFC2045], [RFC2046], and [RFC2047].
MIME generation: A process that converts data held by a server or client to a format that is suitable for Internet wire protocols.
MIME message: A message that is as described in [RFC2045], [RFC2046], and [RFC2047].
MIME part: A message part that is as described in [RFC2045], [RFC2046], and [RFC2047].
MIME reader: An agent that performs MIME analysis. It can be a client or a server.
MIME writer: An agent that performs MIME generation. It can be a client or a server.
Multipurpose Internet Mail Extensions (MIME): A set of extensions that redefines and expands support for various types of content in email messages, as described in [RFC2045], [RFC2046], and [RFC2047].
named property: A property that is identified by both a GUID and either a string name or a 32-bit identifier.
non-delivery report: A report message that is generated and sent by a server to the sender of a message if an email message could not be received by an intended recipient.
Object Linking and Embedding (OLE): A technology for transferring and sharing information between applications by inserting a file or part of a file into a compound document. The inserted file can be either embedded or linked. See also embedded object and linked object.
one-off address: An email address that is encoded as a mail-type/address pair. Valid mail-types include values such as SMTP, X400, X500, and MSMAIL.
one-off EntryID: A special address object EntryID that encapsulates electronic address information, as described in [MS-OXCDATA].
Out of Office (OOF): One of the possible values for the free/busy status on an appointment. It indicates that the user will not be in the office during the appointment.
Personal Information Manager (PIM): A category of software packages for managing commonly used types of personal information, including contacts, email messages, calendar appointments, and meetings.
plain text: Text that does not have markup. See also plain text message body.
Post Office Protocol - Version 3 (POP3): A protocol that is used for accessing email from mail servers, as described in [RFC1939].
property set: A set of attributes, identified by a GUID. Granting access to a property set grants access to all the attributes in the set.
property type: A 16-bit quantity that specifies the data type of a property value.
PS_INTERNET_HEADERS: An extensible namespace that can store custom property headers.
pure MIME message: A MIME representation of an email message that does not contain a Transport Neutral Encapsulation Format (TNEF) body part.
recipient: (1) An entity that can receive email messages. (2) An entity that is in an address list, can receive email messages, and contains a set of attributes. Each attribute has a set of associated values.
recipient table: The part of a Message object that represents users to whom a message is addressed. Each row of the table is a set of properties that represents one recipient (2).
reminder: A generally user-visible notification that a specified time has been reached. A reminder is most commonly related to the beginning of a meeting or the due time of a task, but it can be applied to any object.
resource: Any component that a computer can access that can read, write, and process data. This includes internal components (such as a disk drive), a service, or an application running on and managed by the cluster on a network that is used to access a file.
Rich Text Format (RTF): Text with formatting as described in [MSFT-RTF].
S/MIME (Secure/Multipurpose Internet Mail Extensions): A set of cryptographic security services, as described in [RFC5751].
Simple Mail Transfer Protocol (SMTP): A member of the TCP/IP suite of protocols that is used to transport Internet messages, as described in [RFC5321].
spam: An unsolicited email message.
To recipient: See primary recipient.
top-level message: A message that is not included in another message as an Embedded Message object. Top-level messages are messaging objects.
Transport Neutral Encapsulation Format (TNEF): A binary type-length-value encoding that is used to encode properties for transport, as described in [MS-OXTNEF].
Transport Neutral Encapsulation Format (TNEF) message: A MIME representation of an email message in which attachments and some message properties are carried in a Transport Neutral Encapsulation Format (TNEF) body part.
Unified Messaging: A set of components and services that enable voice, fax, and email messages to be stored in a user's mailbox and accessed from a variety of devices.
universally unique identifier (UUID): A 128-bit value. UUIDs can be used for multiple purposes, from tagging objects with an extremely short lifetime, to reliably identifying very persistent objects in cross-process communication such as client and server interfaces, manager entry-point vectors, and RPC objects. UUIDs are highly likely to be unique.
UUIDs are also known as globally unique identifiers (GUIDs), and these terms are used interchangeably in the Microsoft protocol technical documents (TDs). Interchanging the usage of these terms does not imply or require a specific algorithm or mechanism to generate the UUID. Specifically, the use of this term does not imply or require that the algorithms described in [RFC4122] or [C706] must be used for generating the UUID.
UTF-16LE: The Unicode Transformation Format - 16-bit, Little Endian encoding scheme. It is used to encode Unicode characters as a sequence of 16-bit codes, each encoded as two 8-bit bytes with the least-significant byte first.
vCard: A format for storing and exchanging electronic business cards, as described in [RFC2426].
MAY, SHOULD, MUST, SHOULD NOT, MUST NOT: These terms (in all caps) are used as defined in [RFC2119]. All statements of optional behavior use either MAY, SHOULD, or SHOULD NOT.
https://docs.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxcmail/aa312b4b-0cd0-4553-8139-d621db401745
Prefix enum labels with a common tag:

    enum PinStateType { PIN_OFF, PIN_ON };

If PIN were not prepended, a conflict could occur, as OFF and ON are probably already defined:

    enum { STATE_ERR, STATE_OPEN, STATE_RUNNING, STATE_DYING };

Some subtle errors can occur when macro names and enum labels use the same name:

    #define MAX(a,b) blah
    #define IS_ERR(err) blah

    int some_bloody_function()
    {
    }

I don't really consider unit tests a question-answering device, because if you can't understand the code by reading it, reading something else about the code you don't understand won't help you understand it better.

A header skeleton:

    #ifndef XX_h
    #define XX_h

    // SYSTEM INCLUDES
    //
    #include <iostream>   // standard IO interface
    #include <string>     // HASA string interface
    #include <memory>     // USES auto_ptr

Comment the close of long or nested blocks:

    {   // Block1 (meaningful comment about Block1)
        ... some code
        {   // Block2 (meaningful comment about Block2)
            ... some code
        }   // End Block2
    }   // End Block1

Another benefit is that we can pass Jumpable objects to the GUI, not specific objects like Horse or Frog:

    class Gui
    {
    public:
        void MakeJump(Jumpable*);
    };

    Gui gui;
    Frog* pFrog = new Frog;
    gui.MakeJump(pFrog);

We also removed the recompile dependency. Because Gui doesn't contain any Frog objects, it will not be recompiled when Frog changes.
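A compilable sketch of the Jumpable idea above — the method bodies and the Horse class are invented here for illustration, since the original only shows the Gui interface:

```cpp
#include <string>

// Any Jumpable can be used wherever a Jumpable* is expected
// (the substitutability rule discussed in the text).
class Jumpable
{
public:
    virtual ~Jumpable() = default;
    virtual std::string Jump() const { return "generic jump"; }
};

class Frog : public Jumpable
{
public:
    std::string Jump() const override { return "frog jump"; }
};

class Horse : public Jumpable
{
public:
    std::string Jump() const override { return "horse jump"; }
};

// Depends only on the Jumpable interface, not on Frog or Horse, so this
// code needn't be recompiled when Frog changes.
std::string MakeJump(const Jumpable* j)
{
    return j->Jump();
}
```

Adding a new jumping class later requires no change to MakeJump, which is exactly the decoupling the text attributes to the Gui/Jumpable design.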
All classes derived from a base class should be interchangeable when used as a base class.

Let the containing class delegate, rather than exposing its members:

    class SunWorkstation
    {
    public:
        void UpVolume(int amount) { mSound.Up(amount); }

    private:
        SoundCard    mSound;
        GraphicsCard mGraphics;
    };

    SunWorkstation sun;

    Do:    sun.UpVolume(1);
    Don't: sun.mSound.Up(1);

A source file layout template:

    #include "XX.h"   // class implemented

    /////////////////////////////// PUBLIC ///////////////////////////////////////

    //============================= LIFECYCLE ====================================

    XX::XX()
    {
    }// XX

    XX::XX(const XX&)
    {
    }// XX

    XX::~XX()
    {
    }// ~XX

    //============================= OPERATORS ====================================

    XX& XX::operator=(XX&)
    {
        return *this;
    }// =

    //============================= OPERATIONS ===================================
    //============================= ACCESS =======================================
    //============================= INQUIRY ======================================

    /////////////////////////////// PROTECTED ///////////////////////////////////
    /////////////////////////////// PRIVATE //////////////////////////////////////

Include guard forms:

    #ifndef filename_h
    #define filename_h
    #endif

    #ifndef namespace_filename_h
    #define namespace_filename_h
    #endif

A documentation comment example:

    /** Assignment operator.
     *
     * @param val The value to assign to this object.
     * @exception LibraryException The explanation for the exception.
     */

    int AnyMethod(int arg1, int arg2, int arg3, int arg4);

To see why, ask yourself:

    class X
    {
    public:
        int  GetAge() const { return mAge; }
        void SetAge(int age) { mAge = age; }
    private:
        int mAge;
    };

    class X
    {
    public:
        int  Age() const { return mAge; }
        void Age(int age) { mAge = age; }
    private:
        int mAge;
    };

    class X
    {
    public:
        int           Age() const  { return mAge; }
        int&          rAge()       { return mAge; }
        const String& Name() const { return mName; }
        String&       rName()      { return mName; }
    private:
        int    mAge;
        String mName;
    };

When possible, use this approach to attribute access. The constructor code must still be very careful not to leak resources in the constructor.
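The leak concern raised here is what Resource Acquisition Is Initialization addresses; a minimal sketch (the FileHandle class is invented for illustration, it is not from the original text):

```cpp
#include <cstdio>

// RAII sketch: acquire the resource in the constructor, release it in the
// destructor. The file can't leak on early return or exception, because the
// destructor runs whenever the object goes out of scope.
class FileHandle
{
public:
    explicit FileHandle(const char* path)
        : mFile(std::fopen(path, "w"))
    {
    }

    ~FileHandle()
    {
        if (mFile)
            std::fclose(mFile);
    }

    bool IsOpen() const { return mFile != nullptr; }

    // Non-copyable: two owners would double-close the file.
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;

private:
    std::FILE* mFile;
};
```

A caller just declares `FileHandle f("log.txt");` and never calls a close function; cleanup is automatic at scope exit.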
It's possible to throw an exception and not destruct objects allocated in the constructor. There is a pattern called Resource Acquisition Is Initialization (RAII) which says all initialization is performed in the constructor and released in the destructor. The idea is that this is a safer approach because it should reduce resource leaks.

Do Work in Open

Do not do any real work in an object's constructor. Inside a constructor, initialize variables only and/or do only actions that can't fail. Create an Open() method for the object which completes construction. Open() should be called after object instantiation.

Example:

    class Device
    {
    public:
        Device()   { /* initialize and other stuff */ }
        int Open() { return FAIL; }
    };

    Device dev;
    if (FAIL == dev.Open())
        exit(1);

Don't Overuse Operators

C++ allows the overloading of all kinds of weird operators. Unless you are building a class directly related to math, there are very few operators you should override. Only override an operator when the semantics will be clear to users.

Justification: Very few people will have the same intuition as you about what a particular operator will do.

One formatting option for method implementations:

    /*virtual*/ void
    Class::method()
    {
    }

    /*static*/ void
    Class::method()
    {
    }

I've only seen this format once in source code, but it is interesting enough that I thought I would include it here for your consideration. Notice how the method name sits alone on its own line. This looks more like C code and looks very clean for some reason.

Adopt an Agile methodology to your liking. It seems we need something more detailed than requirements, as requirements tend to be fairly high level and are usually not directly useful for development.
A requirement will cut across many subsystems and each of the cross-cuts must be identified. Thus requirements need to go through refinement and elaboration to be used in development. If we get too detailed then we get bogged down in detailed design before we are ready. If we aren't detailed enough then we can't know if our requirements are met, nor can we convince ourselves that we know how to solve the problem. For more information see.

A small group can discuss issues and work them out. Email and quick pointed discussions work well. This approach meets the goals and doesn't take the time of 6 people to do it. Some issues to keep in mind:

Make a web page or document or whatever. New programmers shouldn't have to go around begging for build secrets from the old timers.

There are many other options out there. It doesn't matter which one you pick as much as it does that you pick one and use it.

Programmers generally resist bug tracking, yet when used correctly it can really help a project. FYI, it's not a good idea to reward people by the number of bugs they fix :-). As with source code control systems, there are many bug tracking systems available. It's more important that you use one than which one you use.

A wiki is perfect support for software development. I've written up my hard-won advice for using wikis at Getting Your Wiki Adopted.

Process automation: you are much more careful and more thorough when you really think about all the personas, all the different people and all their different roles and purposes.

If there's a failure, email should go out to the developers who made the check-ins. This policy will allow you to catch errors as early as possible, which will make your system more stable over time.

Brace placement. Both of these layouts are common:

   if (condition)          while (condition)
   {                       {
      ...                     ...
   }                       }

   if (condition) {        while (condition) {
      ...                     ...
   }                       }
Put the constant on the left of comparisons, and prefer braces even for single statements:

   if (1 == somevalue)
   {
      somevalue = 2;
   }

   if (1 == somevalue) somevalue = 2;

Comment the closing braces of long or nested blocks:

   while (1)
   {
      if (valid)
      {
      } // if valid
      else
      {
      } // not valid
   } // end forever

Deep nesting quickly becomes hard to follow:

   void func()
   {
      if (something bad)
      {
         if (another thing bad)
         {
            while (more input)
            {
            }
         }
      }
   }

   if (condition)
   {
   }

   while (condition)
   {
   }

   strcpy(s, s1);
   return 1;

   if (condition)      // Comment
   {
   }
   else if (condition) // Comment
   {
   }
   else                // Comment
   {
   }

Document deliberate fall through in a switch:

   case ...:
      ...
      // FALL THROUGH

From the above example, a further rule may be given: mixing continue with break in the same loop is a sure way to disaster.

Format the ternary operator so both branches are easy to see:

   (condition) ? func1() : func2();

or

   (condition) ? long statement :
                 another long statement;
About This Project
I wanted to make a fun practical project using the Adafruit Huzzah Feather and the integrated LiPo charge circuitry. Mining AliExpress for various sensors and bits, I happened upon the MQ3 sensor. Mine came with a comparator that trips an LED if over a certain level, but I’m using the analog output here.

Connections
The MQ-3 is analog so will work fine off the 3.3V supply of the Huzzah Feather. Connect power and ground from the Feather to the MQ-3, then connect the analog out to the ADC pin on the Feather. It is supposed to go full scale, but the concentrations I’ve been blowing into it don’t register more than half a volt, so I don’t need a resistor divider. Note that the ADC pin on the ESP-12 is max 1V, but I have exceeded it without causing any damage.

Next you are going to want to ground GP0 and GP15, along with the ground from your USB to serial cable. Connect RX from your serial cable to TX on the Feather, and TX to the Feather’s RX. Press the reset button and you are ready to program.

##Tips
Select the Huzzah Feather from the Arduino IDE. Plug in your serial cable and verify that the IDE has the right serial port selected. Enter your wifi and Cayenne token and let her rip.

When creating your dashboard widget, you want to create an analog sensor widget and use Virtual 1, as in the sketch. Selecting A0 from the integrated ADC selection does not work (I think A3 might). I like the gauge, as you can set ranges that change colours as you increase in inebriation.

This could easily be scaled using the Arduino map function to give you more BAC-type values. Also note that this sensor is sensitive to temperature, so if you blow hard on it, you will actually get erroneous results. I suggest an apparatus that restricts airflow, like a straw. Will probably 3D print a housing for this.
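The scaling via the Arduino map function mentioned above can be sketched off-device. map() is reimplemented here with its integer formula so the snippet is self-contained, and the "BAC-like" output range is purely illustrative, not a calibrated value:

```cpp
// Same integer arithmetic as Arduino's map() (assumed here; check your
// core's implementation if exact rounding matters).
long mapRange(long x, long inMin, long inMax, long outMin, long outMax)
{
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Example: raw ADC counts (0..1023) scaled to hundredths of a made-up
// "BAC-like" number in 0..50 (i.e. 0.00..0.50), for the gauge widget.
long adcToCentiBac(long raw)
{
    return mapRange(raw, 0, 1023, 0, 50);
}
```

A real calibration would need reference measurements; this only shows the linear rescaling step.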
I plan on putting the Cayenne dashboard up on the big screen TV and everyone gets to watch the level of stupor.

##Code

#define CAYENNE_DEBUG
#define CAYENNE_PRINT Serial // Comment this out to disable prints and save space
#include "CayenneDefines.h"
#include "BlynkSimpleEsp8266.h"
#include "CayenneWiFiClient.h"
#include <ESP8266WiFi.h>

// Cayenne authentication token. This should be obtained from the Cayenne Dashboard.
char token[] = "...";
// Your network name and password.
char ssid[] = "...";
char password[] = "...";

void setup()
{
  Serial.begin(9600);
  Cayenne.begin(token, ssid, password);
}

void loop()
{
  Cayenne.run();
}

CAYENNE_OUT(V1)
{
  Cayenne.virtualWrite(V1, analogRead(A0));
  Serial.print("Read:");
  Serial.println(analogRead(A0));
}

What’s Connected
Adafruit Feather Huzzah
1200 mAh LiPo battery with JST connector
MQ3 board
USB to serial cable

Triggers & Alerts
Not using triggers yet, but if someone is blowing over, could sure send a text to my wife to come get me.

Scheduling
It’s a bit early to be drinking yet, no?

Dashboard Screenshots
Upps! Red. Better go to bed.

Photos of the Project
The Basics
Programming
Connections
Custom numeric data type for Python 3 with some additional properties

Project description

Cond - Numeric data type

Cond is a custom numeric data type for Python 3. It supports most operations that are legal on a regular data type, while adding some new properties. It can be installed using pip3 install cond.

The idea

The basic idea of a Cond object is that it can hold various options at once, but only represent one of them. That option is called the main option.

Importing

The module is called cond, so it can be imported using import cond. The only three public members of the module that are available are Cond, which is the class that is used to instantiate objects, LinkedCond, which is a modified version of the Cond class, and require, which is the function used to set "limitations" for Cond objects. Some other functions will also be imported, but these are implementation functions (indicated with a trailing underscore). All the following code assumes that the line from cond import Cond, LinkedCond, require has been called.

Initializing

Arguments

To initialize a Cond object, simply use the syntax x = Cond(*args). In this case, args is a list that contains an arbitrary amount of elements, all of the same numeric, built-in data type (e.g. integer). Of course, the arguments can also be passed as numeric literals, x = Cond(1, 5, 10). Whenever a Cond object is created in this manner, the main option of the object, which essentially represents it, will always be chosen to be the first passed argument, unless otherwise specified. At least one argument must be passed when a Cond object is created, unless the range keyword is included (discussed below).

Keyword arguments

There are currently two optional keyword arguments that can be used for the initialization of a Cond object. These are mainpos and range. The former is used to specify the index of the main option of the object.
For example, x = Cond(1, 5, 10, mainpos = 1) will initialize a Cond object with the options 1, 5 and 10, but the option that initially represents the object will be the second one (5). The latter is used to specify a range, which the Cond object will use to generate options. The range keyword argument has to either be a single positive integer (in which case all the numbers from 0 to it will be included), or a tuple, where the first argument specifies the start, the second argument specifies the end, and the (optional) third argument specifies the step. If this keyword argument is included, it is optional to also include arguments, as long as the range created by the numbers passed isn't empty.

Basic Operations

A Cond object will operate just like the numeric type that it represents in basic operations. The value that will be used for any actual operation will be the main value of the object. For instance:

x = Cond(5, 17)
y = x + 3
print(y)

OUTPUT:
8

The reason the above output is 8 is that the main option of x is 5 (because it was passed first, and no mainpos was specified), so that value will be used for any operations. If, in the above code, the line where the Cond object is initialized was replaced with x = Cond(17, 5), or even x = Cond(5, 17, mainpos = 1), the resulting value of y would be 20, because in both cases the main option of x would be 17. This works similarly for all other operations, including subtraction, multiplication, division, exponentiation, bitwise operations, etc.

Basic Evaluation

Just like in basic operations, a Cond object will also act in the same way as the numeric type that it represents in evaluation statements. For example:

x = Cond(11.5, 20.3)
y = 11.5
z = 20.3
print(x == y)
print(x == z)

OUTPUT:
True
False

As expected, the above code yields True and False. This is because, as stated before, x here is represented by the float 11.5, whereas 20.3 is just an option.
So when x is evaluated to be equal to y, which is also 11.5, that yields True, whereas when it is evaluated to be equal to z, which is 20.3, that yields False.

Incrementing & Decrementing

A Cond object also has the ability to be incremented, decremented, or have any other syntax of the type [operation_sign]= used on it. For example, dividing a Cond object's main option by 2 can be done using x /= 2, or for an integer, x >>= 1. Note that the former will only work if the options of x are of type float. If they are not, and in general, whenever an operation attempts to change the main option of a Cond object to a different type, TypeChangeError is thrown. The latter is a right shift by 1 bit and therefore equivalent to x //= 2 for an integer number.

Here, it is also important to note that, while for a regular number x += 1 is completely equivalent to x = x + 1, for a Cond object these are two completely different statements. In fact, the latter should be used with caution. The reason for this is that stating x += 1 is essentially saying "increment the main option of x by 1", whereas stating x = x + 1 is saying "take 1, add it to the main value of the Cond object x and store the result in x". However, since the resulting value (x + 1) will be a numeric type, by default x will also be set to be a numeric type. Therefore, the Cond object x will be overwritten by an integer, which will use the name x after the statement, and will have the final value that the Cond object would have had. This is demonstrated by this example:

x = Cond(5)
x += 1
print(x, type(x))
x = x + 1
print(x, type(x))

OUTPUT:
6 <class 'cond.Cond'>
7 <class 'int'>

Iterable Properties

A Cond object, even though it is represented by one main option, can hold many options at one time. This allows it to have properties that are usually found in iterables, such as lists and tuples. Firstly, calling len(x) will return the amount of options that the Cond object currently holds.
Using bracket syntax, it is possible to view the option at a specific index; in addition to that, assigning and deleting options is possible, unless the given index corresponds to the main option. A Cond object can be used just like any other iterable in a for loop, looping through its options. The in syntax can also be used to check whether a given value is inside the list of options of the Cond object. Adding a new option to a Cond object can be achieved by using the x.append(value) syntax, just like one would use in a list. Similarly, x.remove(value) and x.index(value) are also valid expressions. Finally, calling x.all() will return a list of all the options that x currently holds. The below code snippet demonstrates the usage of all of these properties.

x = Cond(-8, 14, 3)
print("x length: ", len(x))
print("Third option: ", x[2])
x[2] = -x[2]
print("Third option: ", x[2])
del x[2]
print("All x options using for loop:")
for option in x:
    print(option)
print(5 in x)
x.append(5)
x.remove(14)
print("Index of 5 in x: ", x.index(5))
print("All x options: ", x.all())
print("Main x option: ", x)

OUTPUT:
x length: 3
Third option: 3
Third option: -3
All x options using for loop:
-8
14
False
Index of 5 in x: 1
All x options: [-8, 5]
Main x option: -8

LinkedCond objects

LinkedCond objects can be instantiated just like Cond objects. The only difference between them and Cond objects is that they have a "history"; in other words, they "remember" the limitations that are applied on them. An implication of this is that they can be linked to each other; this is why they are called LinkedCond objects. LinkedCond objects, and their differences with Cond objects, are discussed later.

The require function

Cond objects

As mentioned above, the require function is the function which applies "limitations" to Cond objects and changes their main options, based on those limitations.
In reality, the require function does nothing more than brute-forcing solutions until it potentially gets a correct one. More specifically, the syntax is: require(expression, cond_objects, eval_sign, eval_number). Following is the explanation for each of these arguments.

- expression: the expression is actually a string representation of an algebraic expression. In essence, it is the "left part" of the equation. Each Cond object that is part of the expression must be represented with a single letter. Just as in regular Python, addition is represented by the "+" symbol, subtraction by the "-" symbol, multiplication by the "*" symbol, division by the "/" symbol and modulo by the "%" symbol. Contrary to how it works in regular Python, exponentiation is represented by the "^" symbol (though "**" can also be used). In addition to this, the multiplication symbol can be skipped in cases where algebra allows it (between single-letter variables, numbers or variables followed by parentheses, numbers followed by variables, etc).

- cond_objects: this argument can either be a single Cond object, or a tuple containing multiple of them. However, the amount of Cond objects passed through this argument must be exactly equal to the amount of variables that the expression contains. The correspondence between variables and Cond objects is 1-1, meaning the first variable is paired with the first object, the second with the second, and so on. This means that, if the expression was "x - y" and the tuple was (y, x), the actual operation that the function would attempt to satisfy would be y - x and not x - y, because the object y was written first, so it corresponds to the first available variable, x. In essence, variable names inside the expression are no more than conventions - they don't represent any actual variable names.

- eval_sign: this argument is a string of the evaluation sign. This can be either one of: "=", ">", ">=", "<", "<=", "!=".
- eval_number: this argument is a numeric value, representing the "right side" of the equation. This can be any built-in numeric value, or it can be of type Cond. However, note that if type Cond is used, it will not be edited in any way. It will simply be used for its value and not take part in the actual expression. An expression cannot be used for this argument, therefore any equation should be solved so that there is only a single numeric value on the right side of it before using the function. This argument can also be a tuple of multiple numeric types, but only if the eval_sign argument is "!=".

The full equation can be recreated by substituting the variable names with the Cond object names, with the correct correlation, and then appending the sign and the evaluation number at the end. The require function will return True if any combination of existing options for each included Cond object is found which satisfies the given equation, and will change the main value of the object to that which was found. If no combination of values that satisfy the equation is found, False is returned and no changes are made onto the Cond objects. Following are some examples of the require function's use.

x = Cond(1, 5, 10, 25)
res = require("x", x, ">", 5)
print(res)
print(x)

OUTPUT:
True
10

In this example, it is essentially requested that x be greater than 5. The function searched through the values, found that the value 10, which is the third option of the object, is greater than 5, and set that option to be the main option of the object. Because it was successful in satisfying the equation, it returned True.

x = Cond(-5, 10, 102, 54)
y = Cond(4, 12, 66)
z = Cond(3, 0, -1)
res = require(r"xy/(2z) + 15%x - 3y", (x, y, z), "=", 8)
print(res)
print(x, y, z)

OUTPUT:
False
-5 4 3

In this example, no combination is found that satisfies the expression. Therefore, the function returned False and the Cond objects passed kept their initial main options.
Note: a raw string was used instead of escaping the % character.

Example 3 (x, y used from Example 2):

res = require(r"15a - a^2 - b", (x, y), ">", 1)
print(res)
print(x, y)

OUTPUT:
True
10 4

In this example, notice that x is not passed twice. Even though the variable a appears twice before the variable b, once the object x is mapped to the variable a, whenever a is found again it is dismissed. Therefore, it is safe to pass y afterwards, so it is mapped to b, without worrying about passing x twice. Note: passing x twice would actually cause an error.

LinkedCond objects

The require function can take LinkedCond objects instead of Cond objects as arguments. However, note that both LinkedCond and Cond objects cannot be passed as arguments in a single require call. This is explained further later on. The syntax for the require function, when it is run with LinkedCond objects, is exactly the same as that for when it is run with Cond objects. For example:

a = LinkedCond(1, 2, 3)
res = require("x", a, ">", 2)
print(res)
print(a)

OUTPUT:
True
3

Here, the LinkedCond object will act exactly like a Cond object. However, consider that the following code was run after the above code:

res = require("x", a, "<", 2)
print(res)
print(a)

OUTPUT:
False
3

If we had initially made a be an instance of the Cond class, the above code would have printed out True and 1. For LinkedCond objects, this is not the case; because a LinkedCond object will remember previous require calls onto it, it is impossible to have a be smaller than 2 when a must already be greater than 2. In other words, after the first require call, a cannot ever be smaller than or equal to 2. Since that is what we request with the second require call, False will be returned. Multiple LinkedCond objects can also be passed to a single require call. This will, in a way, link them.
What this actually means is that the main option of each of these objects will not only depend on require calls onto them, but also on require calls onto the objects that they are linked with. A big difference between Cond and LinkedCond objects is that a Cond object can never have its main option altered unless it is directly passed to require, using the cond_objects argument; this is untrue for LinkedCond objects. It is possible for their main option to change indirectly. This would have to occur via a require call to which a LinkedCond object that is linked to them is passed. An example of this follows.

a = LinkedCond(range = 10)
b = LinkedCond(range = 10)
res = require("xy", (a, b), "=", 24) #a*b = 24
print(res)
print(a, b)
res = require("x", b, "<", 6)
print(res)
print(a, b)

OUTPUT:
True
3 8
True
8 3

In the above example, notice that even though the second require call only contained b in the passed objects, a was also altered. This is the reason why b is considered linked to a: it can be indirectly affected. When a require call attempts to change a, it will also attempt to change b so that the new limitation for a is satisfied, whereas the previous limitations, which included b, are also satisfied.

Consider a situation where we wanted, just like before, the product of a and b to be 24, however we also needed a to be greater than b. As shown, a is not greater than b when the initial require call, for the product, is made. Even though it could be, if we really wanted to ensure that it is, we could do the following:

a = LinkedCond(range = 10)
b = LinkedCond(range = 10)
res = require("xy", (a, b), "=", 24) #a*b = 24
print(res)
print(a, b)
res = require("x - y", (a, b), ">", 0) #x-y > 0 => x > y
print(res)
print(a, b)

OUTPUT:
True
3 8
True
8 3
Since the options for both objects are the digits, it is found that if a becomes 3 and b becomes 8, the product will be 24. The second require call makes sure that the difference of a with b is greater than 0. This is the correct way to require that a be greater than b. If that require call was replaced with require("x", a, ">", b), it would return False. This is because, as stated before, object b was passed as a final argument, meaning it is the evaluation number. For this reason, it will only be used for its value and will not take part in the expression. The function will attempt to make a greater than the value of b, which is 8. In other words, it will attempt to make a be 9; this won't work, since it must also be true that the product of a with b is 24, which is impossible for a = 9, no matter what the main option of b is. Nevertheless, when the require call is written as it was in the code snippet, it makes sure that both the values of a and b can be re-evaluated, so matching values can be found. The second solution, a = 8 and b = 3, satisfies both equations; the function might also, in this case, pick the values a = 6 and b = 4. This would also be correct in this case.

Additional LinkedCond properties

LinkedCond objects share all their methods with Cond objects, except for two extra ones:

- The first new method is getlims(). This method will return a set containing all the limitations that currently apply for the object. As for the names that will be used in place of the single-letter variables, it will be attempted to replicate the variable names that have been used. For example:

obj = LinkedCond(range = 20)
require("x", obj, ">", 11)
print(obj.getlims())

OUTPUT:
{"obj > 11"}

In place of x, obj, which is the variable name of the object, was used. This is done so that it is easier to see which objects the limitation is really referring to. Nevertheless, above it was stated that it will be attempted to replicate the variable names.
This is because, in some cases, it will be impossible to replicate those names. Consider the following:

def func():
    obj = LinkedCond(range = 20)
    require("x", obj, ">", 11)
    return [obj]

obj_list = func()
print(obj_list[0].getlims())
for v in obj_list:
    print(v.getlims())

OUTPUT:
{"obj_list[0] > 11"}
{"v > 11"}

Here, after the function func quits, all the variables declared in its scope are destroyed; it is impossible to get the variable name obj after that. However, since the object is saved inside of the list named obj_list, when using the getlims method on it, the name of the list that contains it, followed by brackets and the index, will be used instead. When iterating through the list, the name of the temporary variable will be used.

The only case in which the single-letter variable names will be used is when it is impossible to find variable names for all the included objects. For example, consider the following:

def func():
    obj = LinkedCond(range = 10)
    obj2 = LinkedCond(range = 20)
    require("xy", (obj, obj2), ">", 10)
    return {obj: "Hello", obj2: "World"}

thedict = func()
for key in thedict:
    print(key.getlims())

OUTPUT:
{"x*y > 10"}
{"x*y > 10"}

Here, the objects are used as dictionary keys. This means that no variable names at all exist for these objects. Because they are the dictionary keys, meaning that they are the ones used to refer to the dictionary values, there is no real way to refer to them. For this reason, the only thing that the getlims function can do is use the single-letter variables, passed when require was called, instead. Note that, when using a Cond object as a dictionary key, the main option of the object cannot be used to retrieve its corresponding value. In order to do so, the actual Cond object itself has to be passed as the dictionary key.

- The second method is clearlims(). As would be expected, this method will clear all the limitations for a LinkedCond object.
The only thing that needs to be noted here is that clearlims will not only clear the limitations for the object it is called on, but also for linked objects that link the object to them. Consider the example where the product of two LinkedCond objects is 24 and one of them is smaller than 6.

a = LinkedCond(range = 10)
b = LinkedCond(range = 10)
require("ab", (a, b), "=", 24)
require("b", b, "<", 6)
print(b.getlims())
a.clearlims()
print(b.getlims())

OUTPUT:
{'b < 6', 'a*b = 24'}
{'b < 6'}

In this case, when clearlims was called on a, the limitation a*b = 24 was also removed from b. However, the limitation of b that didn't include a, b < 6, did not change.

Credit

The inspiration for this project was solely taken from a lightning talk by Jason Orendorff on YouTube. His GitHub profile can be found here. This description of "Quantum Bogosort" also helped.
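The brute-force search that require is described as performing earlier can be sketched in a few lines. This is a simplified stand-in, using a plain callable in place of the string-expression parser, and it is not the package's actual code:

```python
import itertools

def brute_require(predicate, option_lists):
    """Return the first combination of options satisfying predicate, else None.

    Mirrors the idea described for require(): try every combination of each
    object's options until one satisfies the constraint.
    """
    for combo in itertools.product(*option_lists):
        if predicate(*combo):
            return combo
    return None

# x*y == 24 over digit options, mirroring the LinkedCond example above:
result = brute_require(lambda a, b: a * b == 24, [range(10), range(10)])
# scanning in order finds a = 3, b = 8 first, matching the documented output
```

This also makes the cost visible: the search is exponential in the number of objects, which is fine for small option lists like the digit ranges used in the examples.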
C Interview Questions

Several jobs require candidates to have a profound knowledge of C. These C Interview Questions have been designed specially to get you acquainted with the nature of questions that you may encounter during your interview for the subject of C.

1. What is C language?

C is a general-purpose, procedural programming language developed by Dennis Ritchie at Bell Labs in the early 1970s. It is widely used for building system software such as operating systems, compilers and drivers, as well as application software.

2. What are the features of C Programming Language?

Some important features of C programming language are given below:

- Simple Language - C is a simple language. It is very easy to understand and learn.
- Mid-Level Language - C is a mid-level language: it enables low-level features like development of system applications such as kernels and drivers, while also supporting the features of a high-level language.
- Machine Independent Language - C programs can be compiled and run on various operating systems including UNIX-based systems, Linux, Mac OS and various versions of Windows.
- Case-Sensitive Language - C is a case-sensitive language and treats uppercase and lowercase characters differently.
- Structured Programming Language - C is a structured programming language, which means any C program can be built in parts using functions. This makes any C program easy to understand and modify.
- Rich Library Support - C provides lots of built-in functions which make programming faster and easier. These functions can be accessed by including the appropriate header file in the C program.
- Powerful & Fast Language - C is a fast language as it takes very little time in compilation and execution.
- Dynamic Memory Allocation - C supports the use of pointers, which means a user can directly interact with memory and allocate memory dynamically.

3. What is a pointer in C?

A pointer is a variable which stores the address of another variable. The address of a variable can be obtained by using the ampersand sign (&), known as the address-of operator, followed by the name of the variable. The value of a variable can also be accessed by using a pointer.
The * operator, known as the dereference operator, followed by the name of the pointer gives the value stored in the address pointed to by the pointer.

//p1 is a pointer which stores the address of a variable
p1 = &Var;

//*p1 gives the value of the variable
x = *p1;

In the below example, an integer variable called MyVar and a pointer called p1 are created. Pointer p1 is assigned the address of MyVar. Then, *p1 is used to get the value of MyVar as shown in the output.

#include <stdio.h>
int main (){
  int MyVar = 10;
  int *p1;
  p1 = &MyVar;
  printf("%p\n", (void *)p1);
  printf("%i", *p1);
  return 0;
}

The output of the above code will be:

0x7ffed13c2db4
10

4. Write a program to swap two numbers without using the third variable.

There are many ways to swap two numbers. The below example uses the + operator to swap two numbers.

#include <stdio.h>

static void swap(int, int);

static void swap(int x, int y) {
  printf("Before Swap.\n");
  printf("x = %i\n", x);
  printf("y = %i\n", y);

  //Swap technique
  x = x + y;
  y = x - y;
  x = x - y;

  printf("After Swap.\n");
  printf("x = %i\n", x);
  printf("y = %i\n", y);
}

int main() {
  swap(10, 25);
}

The output of the above code will be:

Before Swap.
x = 10
y = 25
After Swap.
x = 25
y = 10

5. Write a program to print Fibonacci series.

There are many ways to print the Fibonacci series in C. The below example uses dynamic programming to print the given term of the Fibonacci series.

#include <stdio.h>

static int fib(int);

static int fib(int n) {
  //creating array which contains Fibonacci terms
  int f[n+1];
  f[0] = 0;
  f[1] = 1;
  for(int i = 2; i <= n; i++) {
    f[i] = f[i-1] + f[i-2];
  }
  return f[n];
}

int main() {
  printf("Fibonacci 6th term: %i\n", fib(6));
  printf("Fibonacci 7th term: %i\n", fib(7));
  printf("Fibonacci 8th term: %i\n", fib(8));
}

The output of the above code will be:

Fibonacci 6th term: 8
Fibonacci 7th term: 13
Fibonacci 8th term: 21
The "Python SDK" module (hereafter: the PySDK) provides a lightweight, efficient, reliable input point to Alooma. The module provides a library of methods to record events and send them to Alooma from any Python application. The module contains two main classes:

The PySDK: enqueues events to be reported to the Alooma pipeline. Events are queued into a concurrent queue, from which they are later pulled by the Sender module to be sent to your dedicated Alooma server.

The Sender: dequeues messages and sends them to your dedicated Alooma server. The Sender runs as a daemon in the background. When initialized, the PySDK instantiates a Sender. The Sender starts a separate thread on which a TCP connection is opened. Any disconnection, send error, or other event does not interfere with other code or crash the service. The module handles errors automatically: upon disconnection, the module attempts to renew the socket and connection.

To integrate the PySDK, follow these simple steps:

Run pip install alooma_pysdk to install the SDK. Alternatively, the source can be downloaded directly from Alooma's GitHub repository.

Import the module by adding the following line in your Python file:

import alooma_pysdk

Log in to your Alooma account and add a "Python App" input. Give your input a label (name), and copy the generated token.

Initialize a PySDK instance by adding the following line in your Python file:

sdk = alooma_pysdk.PythonSDK(YOUR_TOKEN)

token - the unique identifier for this input, which you got at step 4.

There are many more useful parameters that can be supplied to this function. Here are some key ones (refer to the in-file method documentation for more):

servers - the server to send data to; only needed if indicated by Alooma support.

port - the remote port to connect to; only needed if indicated by Alooma support.

event_type - a string, or a callable which receives each event and returns a string.
The event type for each event is placed in a _metadata.event_type field, which determines the type of the event in the Alooma Mapper screen.

callback_func - a callback function can be provided, to be called whenever a log message is emitted from the PySDK.

To report an event to Alooma, simply add this one-liner to your code:

sdk.report(event_dict)

Alternatively, to report multiple events in one function call, use:

sdk.report_many([event_dict, event_dict, ...])

In both cases, each event must be a valid Python dictionary or a string. If not, it will be discarded and the callback function, if provided, will be called with an appropriate message. To ensure all internal SDK queues are flushed before closing your program, make sure to call alooma_pysdk.terminate().

That's it, you're ready to send events to Alooma! As always, contact us with any questions.
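The queue-and-daemon design described above — report() enqueues onto a concurrent queue, a background Sender drains it — can be sketched with the standard library alone. All names here are illustrative; this is a toy model of the architecture, not the SDK's actual code:

```python
import queue
import threading

class SenderSketch:
    """Toy model of the PySDK design: report() enqueues, a daemon drains."""

    def __init__(self, transport):
        self._queue = queue.Queue()
        self._transport = transport  # stands in for the TCP connection
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def report(self, event):
        # Enqueueing never blocks the caller on network errors.
        self._queue.put(event)

    def _drain(self):
        while True:
            event = self._queue.get()
            if event is None:  # sentinel used by terminate()
                break
            try:
                self._transport(event)
            except Exception:
                pass  # a real sender would reconnect and retry here

    def terminate(self):
        # Flush everything already queued, then stop the daemon thread.
        self._queue.put(None)
        self._thread.join()

sent = []
sdk = SenderSketch(sent.append)
sdk.report({"type": "click"})
sdk.report({"type": "view"})
sdk.terminate()
# after terminate(), both queued events have been handed to the transport
```

The design choice this illustrates is why a terminate()-style flush matters: without it, a program can exit while events are still sitting in the queue.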
In <20050417215335.GB15379@...>, Henrik Jacobsson wrote:
>
I think only ROX-Session and the configuration applets use it so far. The capplets use it to communicate config changes to the session, and it's also used so that the app icon's menu can make the session do things like show its message log, using a simple script.
-- TH

* * Stephen Watson (stephen@...) wrote:
>
Thanks a lot. Great piece of software btw..
/Henrik.
-- Stephen Watson
If you read this on a mailing list, send any reply back to the list and not to me. Not even CC.

Hi list. I recently installed Rox on my homebrewed GNU/Linux2.6/Xorg6.8.2 system. Looks really nice so far, but I have a small problem with the way Rox remembers 'per-directory' settings. If I understand it correctly, it is possible to make Rox remember settings like icon size, icon/list view, thumbnails et al. The settings are added to an xml-file named ~/Choices/ROX-filer/Settings.xml? Google tells me nothing..
Thanks in advance. /Henrik

New release of Archive:
Zero Install users can refresh the rox.sourceforge.net directory to get it. It can be installed using 0alias, eg:
$ 0alias Archive
$ Archive /etc
Changes since 1.9.4:
- Added support for Ace archives (Rodolfo Borges).
- Don't let the default name be the same as the original file. Try it without the extension, or with '.unpacked' if there is no extension (Thomas Leonard).
- Use PipeThroughCommand from ROX-Lib, instead of our own version (Thomas Leonard).
- Moved unit tests to a separate directory (Thomas Leonard).
- Added Chinese translation (Babyfai Cheung).
- Prefer gtk.main to gtk.mainloop and gtk.main_quit to gtk.mainquit (Stephen Watson).
- Use GtkComboBox if available, avoids warning from pygtk 2.4 (Stephen Watson).
- Support for Unix compressed (.Z) files (Guido Schimmels).
- Detect a new kind of Zip format (PK00).
- Added German translation (Guido Schimmels).
Unless anyone objects, this will probably become version 2.0.0 soon.
-- Dr Thomas Leonard GPG: 9242 9807 C985 3C07 44A6 8B9A AE07 8280 59A5 3CC1

On Sun, Apr 17, 2005 at 11:50:15AM +0100, Stephen Watson wrote:
> ROX-Lib 2.0.0 is now released. Since there does not appear to have been a
> 1.0 release, this is the first stable version available of the library.
The Zero Install and injector versions have also been updated. Zero Install users can upgrade with:
$ 0refresh rox.sourceforge.net
Injector users do:
$ 0launch --gui
Click 'Refresh all now' to update. (ensure that 'Help test new versions' is ticked in the dialog box that appears, or it will show 1.9.18 instead)
-- Dr Thomas Leonard GPG: 9242 9807 C985 3C07 44A6 8B9A AE07 8280 59A5 3CC1

ROX-Lib 2.0.0 is now released. Since there does not appear to have been a 1.0 release, this is the first stable version available of the library. All programs that use ROX-Lib should now be using this version, e.g:
import findrox; findrox.version(2, 0, 0)
import rox
Thanks to Thomas Leonard and all the other contributors. Release notes follow listing the changes since the last 0.1.x release (rather a lot as they go back to July 2002).

Changes since 0.1.4:
General:
- Converted to GTK+ 2.0
- If 'gtk2' isn't available, see if 'gtk' is actually the 2.0 version.
- Switch to use new pygtk versioning system (requires pygtk 1.99.13).
- Trying to use ROX-Lib2 with earlier versions of python now gives a sensible error message.
- More warnings about old pygtk versions.
- Added Dialog class (does ref-counting, like Window).
- If python is too old, raise SystemExit instead of quitting.
- Ensure that True and False are defined.
- New icon (Geoff Youngs).
- Cope with % escaped URIs.
- Work out correct application name even if invoked via a symlink. This stops applications from calling themselves 'text_plain', etc (reported by Joe Hill).
- Updated findrox to use ZeroInstall, if possible.
- Try to get pygtk through ZeroInstall.
- Give windows a default icon from .DirIcon (Chris Shaffer).
- Better way to set the default icon.
- Fall back to old way of setting default icon for older pygtk's (Stephen Watson).
- Fix for setting icon in the old way when there was no icon (Stephen Watson).
- Try to cope better with bad hostname setup (reported by Nathan Howell).
- New bug() function to quickly bring up a debugging prompt.
- Added some version information to the dialog you get when you run ROX-Lib directly.
- findrox.py script now warns if ROX-Lib2 version is too old (unless requesting a version too old to have a version number).
- If we can't get pygtk through Zero Install, try the local copy.
- Try to run the filer through Zero Install, if possible.
- Escape and unescape URIs, as required by ROX-Filer 2.1.0 (reported by Peter Geer).
- Allow ROXLIB_DISABLE_ZEROINSTALL to override looking for ROX-Lib in Zero Install (useful for ROX-Lib developers). Prefer a locally installed version of ROX-Filer to the Zero Install version; this also allows users to select which Zero Install version they want (eg, stable or developer).
- Updated examples in documentation for new findrox (Yuri Bongiorno).
- Cope with pygtk2.2 API change (Stephen Watson, reported by Guido Schimmels).
- Warn about old versions of findrox.py.
- Removed code to turn off the separator in the debug dialog. GTK 2.4 doesn't need it, and generates a warning if you use it (Thomas Leonard).
- Replace g.mainloop() and g.mainquit() with g.main() and g.main_quit() as recommended since pygtk 2.0.0 to avoid warnings with pygtk 2.4.0 (Stephen Watson).
- If a 'rox' module is in PYTHONPATH, use that without any searching (Thomas Leonard).
- Added TODO file to track bugs and change requests.

Exceptions:
- Added much improved exception reporting system.
- Show local variables in report_exception, and allow expressions to be evaluated in the context of any frame.
- Trap exceptions when activating menu items and report nicely.
- Added rox.UserAbort exception, to be raised when users cancel something manually.
- Added a Help button to the exception dialog, which opens the new Help/Errors file (Thomas Leonard).

Options:
- Upgrade Options box to new style.
- Work around bug in some versions of pygtk that stopped the options window from working.
- Translate text in Options box.
- Also translate section names in the Options box (reported by Arnaud Calvo).
- Also translate <label> elements in the Options box (reported by Arnaud Calvo).
- Cope with option values split into multiple DOM text nodes (Rene Ejury).
- Added secretentry and button options to the OptionsBox (Stephen Watson).
- Added widget_registry to OptionsBox, to provide an easy way to register new option types. Removed build_button in favour of this method.
- Stop themes from overriding the display of the colour-picker buttons in the options box (Rene Ejury).
- Better <label> widgets (wrapping and icon). Taken from LookAndFeel.
- Added ToggleItem (based on a patch from Ken Hayber).
- Added 'size-group' attribute to Options.xml elements. This allows grouping elements so that their labels all appear the same width.
- Added <slider> widget to options.
- Translate menu items in options (Guido Schimmels).
- Allow "" as a valid value in a OptionsBox menu (reported by Guido Schimmels).
- Added options.ListOption to support multi-valued option widgets, such as lists (Stephen Watson).
- Added fixedlist and varlist widgets to OptionsBox (Stephen Watson).

Saving:
- Saveable.save_to_file() should raise an exception when saving fails, rather than returning 0. The return value is now ignored (note that raising an exception will still work with 1.9.0).
- Added AbortSave exception.
- Added save_set_permissions to Saveable interface, allowing the default save_to_file to preserve file permissions.
- Added StringSaver class.
- Use new fileutils module.
- Simplified implementation of saving.SaveFilter by using the new processes.PipeThroughCommand class. The child_run() method of SaveFilter is no longer supported.
- If a file's details were recorded (in document.save_last_stat) when it was loaded, make sure they haven't changed when saving. Saving also updates this variable (requested by Arnaud Calvo).
- Be consistent about URI escaping. set_uri() is now always passed an unescaped UTF-8 string (Thomas Leonard).
- Changes to XDSLoader: xds_load_from_stream(name, type, stream) replaces the old xds_load_data(data).
- Renamed XDSLoader module to 'loading' (like 'saving').
- When loading data from another application, pass the suggested leafname to xds_load_from_stream().
- Note that the xds_load_uris() method is now called with escaped URIs.

Applets:
- Added applet module for creating ROX panel applets.

MIME:
- Added 'mime' module for installing extra MIME information.
- Added functions to read MIME database (Stephen Watson).
- The glob patterns in mime.py are now sorted longest first.
- Added support in mime.py for an application to set MIME-type run actions and thumbnail programs (Stephen Watson).
- Moved MIME handler installation into a separate module (mime_handler) and made a few changes to the XML syntax.
- Added method to MIMEtype to fetch its icon. Install list window for the MIME handler installation uses it (Stephen Watson).
- mime_handler can now uninstall (Stephen Watson).
- Get update-mime-database command through Zero Install, if possible.
- Added MIME type matching by file contents (Stephen Watson).

Processes:
- Added 'processes' module, based on code in Archive, which makes controlling subprocesses easier.
- Added PipeThroughCommand class (copied from Archive, with some minor changes).
- In processes.PipeThroughCommand, a None input stream gets /dev/null instead of inheriting its parent's stream. The class has a new run_child_with_streams to make overriding the behaviour easier.

Basedir:
- Added 'basedir' module for freedesktop.org Base Directory specification.
- The choices module is now deprecated. You can use choices.migrate() to move settings over to the basedir system (Thomas Leonard). Also added basedir.load_first_config(), which works like the old choices.load().

fileutils:
- Added fileutils module, which provides an interactive makedirs() function (Richard Boulton).

Menus:
- Added position_fn argument to rox.Menu.popup.
- Make default menu positioning function keep the pointer inside the menu.
- Allow stock icons in menus.
- Allow event to be None for Menu.popup. This allows the menu to be activated from the keyboard, using the 'popup-menu' signal.
- Added <menu> OptionMenu widget for Options box (Chris Shaffer).
- New API for menus. Instead of passing tuples for menu items, python classes can be used. The old tuple interface is still supported.
- Allow a list of values to be stored in each menu Action. These are used as the arguments to the callback function.
- Suppress warning when we use GtkItemFactory under pygtk 2.4 (Stephen Watson).

InfoWin:
- Added InfoWin.py to provide standardized information window (Stephen Watson).
- Changed 'Dismiss' to 'Close' (Chris Shaffer).

Icon themes:
- Added support for icon themes (doesn't do inheritance yet).

AppInfo:
- New AppInfo.py for parsing AppInfo.xml files (Christopher Arndt), InfoWin converted to use it (Stephen Watson).

Tasks:
- Added 'tasks' module to provide a light-weight alternative to threads.
- Added tasks.InputBlocker, which triggers when an input source becomes readable (Thomas Leonard).
- Added OutputBlocker, which works in a similar way to InputBlocker (Thomas Leonard).

Proxy:
- Added 'proxy' module.
This allows one Python process to invoke methods on another asynchronously.
- Create_su_proxy now returns the MasterObject directly, not the MasterProxy.
- You can call finish on the MasterObject (so the MasterProxy isn't needed).
- Slave methods no longer take a 'request' argument. Instead, the return value of the function is returned.
- Methods can only return one value. dequeue and dequeue_last have been replaced with a 'result' property.
- Methods on MasterObject now return a RequestBlocker, not a Queue. This means you just yield the object itself, not object.blocker.

su:
- Added 'su' module for performing operations as root.
- Added spawnvpe and waitpid methods to suchild.
- Simplified su code and interface.
- New suchild methods: open, close, read, write, chmod and rename, which work just like their normal Python counterparts (Thomas Leonard).
- Cope better with user cancelling su operation. New interface to replace create_su_proxy (Thomas Leonard).

Bugfixes:
- Saving code didn't cope with missing images (reported by Musus Umbra).
- XDSLoader didn't handle the drag-drop signal, and so only worked for widgets with their own implementation!
- Remember to call drop_finish() after a drag-and-drop operation.
- Changing the keys lost any builtin shortcuts (Gtk behaviour has changed since 1.2).
- If an option had an empty value, an exception was thrown on loading (reported by Stephen Watson).
- With pygtk-1.99.14, only the first error message from a process would be detected; this meant that ROX-Lib wouldn't detect when the process quit (reported by joehill).
- <hbox> and <vbox> layouts didn't work in the options box (Allen Leonard).
- Set default in Options box to OK to avoid triggering a GTK bug (Thomas Leonard, reported by Guido Schimmels).
- Python 2.3's new bool type broke options saving (reported by Lars Hansson).
- Error reporting in mime.py failed to import the _ function (reported by Christopher Haines).
- Fixed bug in findrox when not using Zero Install (Lionel Bringuier).
- On non-zero-install systems, the sense of the version check in findrox was wrong.
- Don't stop DnD working in the savebox just because no icon can be found (reported by Martin Lucina).
- If an error occurred loading the ROX theme at startup it could not be reported, because icon_theme wants debug, debug wants saving, and saving wants icon_theme (reported by Thomas Zajic).
- Use filer module to show ROX-Lib help, rather than using os.system, so that it works with Zero Install (reported by Keith Hopper).
- Icon themes didn't work with python versions older than 2.2.2 (reported by Thomas Zajic).
- Bug in new menu API prevented submenus from working.
- Colour buttons in the Options boxes didn't work with some themes (Jonatan Liljedahl).
- If the unit field for a <numentry> option is blank, don't try to translate it (reported by Guido Schimmels).
- Failed to call the parent constructor to AbortSave correctly.
- When a menu has more than one toggle item, only one is updated correctly (Ken Hayber).
- Theme subdirectories are separated by ',' not ';' (reported by Denis Prost).
- Use only text nodes when getting the tool tip from the Options.xml file (Stephen Watson).

Translations:
- Added support for translations.
- Added Italian translation (Yuri Bongiorno).
- Added German translation (Guido Schimmels).
- Added French translation (Vincent Lefèvre).
- Added Chinese translation (Babyfai Cheung).

-- Stephen Watson
If you read this on a mailing list, send any reply back to the list and not to me. Not even CC.
By this I mean the application should not be required to know where it is installed. get_absolute_url forces the object to know the URL path it is coming in on. What might be nicer would be a get_relative_url which would return the part of the URL that the app is responsible for. So take this URL for an example. I'm guessing that there is a 'news' app configured here with a 'story' table inside of it. What would be nice is if the 'story' could return '/2005/oct/15/serenity/' and have the framework 'know' that it should be prefixing that with '/news'. It could get this information out of the URL pattern, couldn't it?
Regards, and thanks for writing django.
Ian

We need some consistent way to work around this problem. get_absolute_url() harms portability and reusability of Django apps. One way is to use the urls.py module to do reverse translation. It is trivial in most cases --- just take the url regex and view parameters and replace () with these parameters in the same order. Complex cases can be tricky --- let's provide a way to do reverse mapping for them. Of course simplified handling can be used. E.g., most apps use prefixes, which can be stored and reapplied. But I favor some form of reversal.

I don't think that get_absolute_url should run along the urlpatterns - because the urlpatterns might have several different ways to reach an object or a list of objects or stuff that isn't an object at all. I think get_absolute_url should produce an URL independent of the urlpatterns (I even have a situation where get_absolute_url returns an URL that isn't in the urlpatterns at all - because it's on a different server). So the base mechanism of get_absolute_url should stay in, I think - allowing users to do something that might be incompatible with moving apps, but might be needed. But I think there should be some kind of app registry where applications can hook into and request the site base url from.
So that the standard case, where your project defers url matching to the application via the include() mechanism, will work - because in those cases the get_absolute_url can know everything about an object url, besides the base path used in the project urlpattern. This app registry would allow you to define base paths and matching application urlpattern includes, just like you do now - but will internally do the registry for get_absolute_url questions.

Some code showing how it could look:

    # this is in PROJECT/url.py
    from django.conf.urls.default import include, register

    urlpatterns = patterns('',
        # this just hooks up application patterns. register
        # returns the (regexpstring, urlpatterninclude) tuple
        register('/something', 'something.urls'),
    )

    # this is in something.urls
    from django.conf.urls.default import *

    urlpatterns = patterns('something.views',
        ('^object/(?<P>slug)/$', 'show'),
    )

    # this is in something.models.something.Something
    def get_absolute_url(self):
        from django.conf.urls.default import prefix
        return '%s/%s/' % (prefix('something.urls'), self.slug)

A sample implementation of register and prefix could be:

    appregistry = {}

    def register(prefix, urlmodule):
        appregistry[urlmodule] = prefix
        return ('^'+prefix, include(urlmodule))

    def prefix(urlmodule):
        return appregistry[urlmodule]

So this would allow urlpattern modules to hook into a global registry and store a prefix path with its name for later inclusion in the get_absolute_url call of the object. Very simple, but it solves the main problem with moving apps between django projects without giving up on the flexibility of the django urlpattern matching. And it allows apps that need two different prefixes - the project author just needs to hook both includes in there (and the app will just have two different url modules).

Good idea, hugo. And, I completely agree that trying to extract URLs from the url-configuration is a Bad Thing(tm).
But, it'd be nice if multi-level includes just worked with the registry; for example, I have something along the lines of: urls.devel includes urls.admin which includes apps.yeehaw.urls.admin. Although I could probably live with manually munging around in the registry, if making that work is much trouble.

I think a hierarchical structure of the registry should be doable and would be a worthwhile thing to have, as projects can be more complex, like being constructed of different subprojects that bring in a whole bunch of applications (I might envision my CMS stuff to be something like that one day).

Oh, and just to give a real world example for why register() should work with the urlpattern module name: in my gallery I have paths like /images/hugo/ and /calendar/hugo/ - it's the same object, but different structure. The /images/ structure is folder based while the /calendar/ structure is blog oriented. The user will be able to switch, by setting some boolean on the picturefolder object, what structure should be used. So get_absolute_url would look something like this:

    def get_absolute_url(self):
        if self.is_calendar:
            return '%s/%s/' % (prefix('picturefolder.calendar_urls'), self.slug)
        else:
            return '%s/%s/' % (prefix('picturefolder.images_urls'), self.slug)

And the urlpatterns would be stored in either calendar_urls (for all calendar related stuff) or images_urls (for all folder related stuff). If this gallery project would be used as a sub project, we would get exactly the above situation where a hierarchical registry would be needed.
Maybe this could be done with some register_subproject function like this:

    import copy

    projectregistry = {}
    appregistry = {}

    def register_subproject(prefix, urlmodule):
        global appregistry, projectregistry
        projectregistry[urlmodule] = prefix
        saved_appregistry = copy.copy(appregistry)
        res = ('^'+prefix, include(urlmodule))
        for k in appregistry.keys():
            if k not in saved_appregistry:
                projectregistry[k] = prefix
        return res

    def register(prefix, urlmodule):
        global appregistry, projectregistry
        appregistry[urlmodule] = prefix
        return ('^'+prefix, include(urlmodule))

    def prefix(urlmodule):
        if projectregistry.has_key(urlmodule):
            return projectregistry[urlmodule]+appregistry[urlmodule]
        else:
            return appregistry[urlmodule]

I didn't test this, it's just the idea of what to do - if you are in a project registry, just check what apps were registered new with this call (by the subsequent register() calls) and store the relations of urlmodules to their main projects in the global projectregistry. This only handles project-subproject-app, though - a more complex solution might be needed if we want this to be fully hierarchical (in those cases the registries will need to keep track of intermittent paths and modify the stored prefix based on that - should be rather simple, too, only we would need some place to keep track - but because urlpatterns are loaded up front, they could just keep track in a global variable).

It looks way too complex for simple things: registration mechanism, subprojects, extra code to write even for simple apps... Of course, you can "have several different ways to reach an object" (and so on), but how common is that? Ultimately, over-engineering is The Bad Thing (tm), which trumps a lot of other Bad Thingies. Simple things should be simple, complex things should be possible. I still think that a primitive url-derived back-translator will cover 95% of real world apps. For the remaining 5% I wouldn't mind writing a method which will do back-translation.
From your example '^object/(?<P>slug)/$' --- if your view can get the URL pattern which was used to invoke it, it can trivially substitute (?<P>slug) with the actual value, and off you go. It is easy to automate --- Django knows the regex and extracted parameters before calling views. I wish we leveraged it.

BTW, about "several different ways to reach an object...": I was under the impression that the only thing you can reach is a view function. Consequently it is The Bad Thing (tm) to make objects involved in the presentation layer, which is represented by views. This is what I don't like about get_absolute_url(). Request routing is an application level or even web site level functionality. Models have no business in it. Obviously it is possible to have different views for the same object (or objects), but is it an object's business to know your web site? Ultimately it would be a simple regex which will route requests. So far I didn't feel a need for complex regexes which require some arcane back-translations. I can imagine that some convoluted legacy url structure may need it. Should we worry about it? The whole idea of REST is to make this mapping as simple as possible. Do you really think we have to create some complex mechanism with registration and subprojects to cover the basics?

Uh - for a simple case you just use what is there, or just use register('prefix', 'urlmodule') - there is no code to write. The code to write would be in django.conf.urls.default, not in your application. In simple cases your urlpatterns will just look like this:

    urlpatterns = patterns('',
        register('/prefix', 'project.apps.application.urls'),
    )

I don't think that the simple case of an app that brings its own url mappings can be anything simpler than that :-)

URL driven back-translators _can't_ work - my URLs are only 5% object urls; the rest is view stuff like day views, month views, calendars, admin forms, whatever.
And the get_absolute_url content isn't a function of the view that happens to render the output - it's a function of the model, where an object itself decides what its absolute url is. Only that way is it guaranteed that any object has only one absolute url. Of course your objects might be reachable by different ways, but the object should decide how to reach it primarily. Because those URLs are what google and friends will spider, and you want to make damn sure that there is only one URL for google to see for one object ;-)

Of course objects _need_ to be interwoven with the view layer at exactly one point: the get_absolute_url method. Because that one is the mediator between the object layer and the view layer (the other is the DB API, which is the mediator between the view layer and the object layer). And yes, we should worry about "arcane url patterns" because that's one of the niceties of Django: the urlpatterns are _not_ necessarily structured by some app internal structure, but are highly flexible. There are very good reasons to keep it that way. For example, with Django url patterns it is dead easy to override some urlpatterns of a standard app by just providing your own urlpattern that matches part of that other urlpattern and routes it to your own view function.

An example: I port over a CMS from one django based CMS to a new one. I keep around the old app and the content in its tables and start anew with some other app. It's dead easy to put in patterns for the archive urls of the old app with constant values for the years and months it was active, and route all other archive urls to the new app. That way I have a common URL space with common format, but two different applications to provide content. URL based back-translation won't cut it here, as that would have to happen based on the same knowledge of when the switch was - but that can't be done in the root urlconf.
But model-object based get_absolute_url will still work, because old-app objects still exist in their original place!

> Uh - for a simple case you just use what is there or just use register('prefix', 'urlmodule') - there is no code to write.

In your previous examples you had some code in applications, namely get_absolute_url(), which is simple, yet repetitive, and goes contrary to DRY principles. I say, if code is repetitive and looks similar in the majority of cases, let Django handle it for you.

> And the get_absolute_url content isn't a function of the view that happens to render the output - it's a function of the model, where an object itself decides what its absolute url is.

This is exactly why it is bad. I understand your 'convoluted legacy url pattern' argument, but bad url schemas should be fixed, not propagated. In your specific case it makes more sense to do one-way translation from legacy urls to new urls and be done with it. Data objects should not meddle in presentation, which can depend on many things. For starters, it makes it difficult to reuse apps in the same web site, e.g., different categories/tags for different things. From your examples I see that data objects should know where they are used (your 1st example) and how many times they are reused (your 2nd example). Your data object should be acutely aware that I changed a url pattern and now it should use a different view. It doesn't sound right. It is wrong wrong wrong. :) BTW, are we still talking about making reusable apps easy? I hope it is not off the agenda.

The code I provided does exactly that: make apps easily reusable without changing Django semantics. That's exactly what it is about.
If given the choice of whether I can accomplish a solution for a given problem (here: "make apps easily reusable without the need for users to change app sources") by not changing anything in django's semantics, I am happy to do exactly that.

I think that the present semantics of get_absolute_url() in data models prevent writing truly portable apps. Your solution helps a bit without solving the problem. Probably I am missing something. Could you help me to understand your point?

    def get_absolute_url(self):
        from django.conf.urls.default import prefix
        return '%s/%s/' % (prefix('something.urls'), self.slug)

As far as I can tell, this object assumes that the url is exactly prefix/slug/. It cannot be prefix?object=slug&view=abbr or whatever I use on my site. An additional assumption is that prefix is a constant string, which doesn't depend on other parameters, like a view mode, and so on. And this object makes all the assumptions of the previous one, and it assumes that it may be in a calendar or in pictures (?). What if I decided to use it in a different place? In more than 2 places? It looks to me that apps using this code are hardly reusable.

Those places are all part of the same application. So it's no contradiction - the get_absolute_url of the picturefolder object is part of the same application as the urlpatterns. The same application should indeed know about both sides of the URL. You will reuse the _whole_ app - if you tear an application apart into different parts, you are bound to have problems. Outside knowledge is limited to sticking a prefix before the application urlspace - that's what the register() call does. It just allows a project to use an application and put some prefix in front of it. It won't change the app's url structure, as that is only known to the app, not to the project. And inside knowledge of the outside is limited to a way to pull out the prefix that is stuck in front of the URLs - that's what the prefix() call does.
The app doesn't need more information than just _where_ it is located in the project's urlspace. It already knows all about how its own local urlspace is structured.

You didn't comment on specific examples. Am I right assuming that Calendar and Pictures are one application? Really? Let me give you an example. I want to have a gallery of pictures, which have comments, an RSS/Atom feed, and links to related articles from my blog. Imagine that I have Hugo's excellent gallery app, which doesn't do comments, I have Sune's great comments, I have my own blog, and Django's RSS feed generator. Now I want to keep them all together using this pretty url schema: Can I do it now? Of course! Instead of using the supplied url mappers, I'll write my own, which will call your views. (I hope customization of templates is a solved problem.) Can I combine them in one view? Yes! I can write my own wrapper view, which will call your views, and combine results in the single view any way I like. Custom templates will help me to do that. What if my web site templates require some extra parameters? No problem! I can do it too - it is a part of the url module. This is what I like about Django - nice separation of concerns.

This sunny picture breaks immediately with get_absolute_url() because it is produced by objects directly. Prefixes don't help much. I can do a bolt-on translation from assumed urls to my pretty urls :) using mod_rewrite or something. But using mod_rewrite defeats the purpose. I assume that something like that was the starting point of ticket #672 ("get_absolute_url isn't nice"). :) The whole point of the ticket is to get rid of it and improve Django. That's why it is categorized as an enhancement.

Nope, the ticket is about what the original author writes: "what might be nicer would be a get_relative_url which would return the part of the URL that the app is responsible for."
That's exactly what the register/prefix thing does - it reworks the responsibility of the application for exactly the thing that it is responsible for, the relative part of its own urlspace. That's the way Django currently works: by using include() you pass on responsibility for url patterns that are _below_ some prefix. The only thing that my current proposal doesn't do is make that prefix a full regexp, as is possible with the current include() approach - but that can be easily solved. Sure, there might be the wish to get rid of get_absolute_url and the whole urlpattern responsibility thingy, but that would be a completely different thing. The current state of affairs is that Django has a concept of urlspaces - which are bundles of urlpatterns matched for a given application. And Django has a concept of prefix patterns that tell it when to send a request over to some application's urlspace. My proposal just gives a way to get from the application back to the surrounding urlspace to find out what to put in front of the application URL to produce a full absolute path. If you stay within the current way Django structures url spaces and add something like my proposal, you can happily push applications around in your main urlspace and move them from one place to another, without needing to change the code in the application. It's about that and only about that. It's not out to rescue the world, solve Fermat's last theorem or do other silly things. Of course there might be different ways to do things - but nobody showed any code that will do it. And no, just passing on the urlpattern that triggered an object wouldn't cut it: if I have a list view, there are lots of objects on that page that all need to give out their own URL, but the urlpattern that triggered the list view tells you zilch about how the object url would be constructed. That's knowledge that is only in the application itself and the model IS PART OF THE APPLICATION.
So I really don't see a problem if the application model has a method that returns a URL that's completely under the control of the application - we only need to solve the "prefix from outside" problem. But as I said, if somebody shows working code (or at least enough code fragments so that we can see what it will look like and how it might be implemented), I am open to discussion. But just repeating "get_absolute_url is bad" won't cut it. Oh, and the last word on this is with Jacob and Adrian, anyway :-) Nope, the ticket is about what the original author writes: "what might be nicer would be a get_relative_url which would return the part of the URL that the app is responsible for." Okay, I'll bite. Where does it say anything about a prefix? ;) I suggested going beyond get_absolute_url() and the related get_relative_url(). I don't recall proposals to get rid of "the whole urlpattern responsibility thingy". I proposed to reuse "the thingy" for a simplified automatic implementation. Django does have a concept of prefix patterns, which can be easily avoided, if one decides to do so. It is optional and can be used in parallel with custom url patterns. get_absolute_url() breaks it. if I have a list view, there are lots of objects on that page that all need to give out their own URL Here it is: you assume that _objects_ should provide their own url. They don't. Objects are responsible for their identification. Period. Urls should be constructed by external means. In your particular example I assume it would be the list view. Different list views may create different urls. Views are called by urls, let them work with urls. but nobody showed any code that will do it This is an excellent technical argument, which states that any code is better than no code. :) I prefer to think before coding. That's why I like to discuss it first before investing time and effort into making a patch. I explained why I didn't like your solution (I am sorry for that) and asked specific questions about the code.
I didn't get answers. Maybe your solution fits the bill and I overlooked hidden gems, but so far it is unexplained. That's knowledge that is only in the application itself and the model IS PART OF THE APPLICATION. Hmm. I say that if we want to combine reusable components (==applications in our case), "that knowledge is only in the web site itself and any application is part of the web site", and it should play nice with other apps. You assume that an application is a totally isolated chunk, which owns its fiefdom, which is separated from all neighbors by the prefix. I think that is wrong because it doesn't give a lot of value for portable apps. If apps are not interacting, it would be more productive to use WordPress, Flickr, and other existing services instead of making new web sites. we only need to solve the "prefix from outside" problem ...and that's exactly what I am talking about. Eugene, it sounds like you want to have reusable views, not reusable apps. The way things work now is simple, easy to understand, actually implemented and Good Enough(tm) (it works for LJ-World). Also, if you want to create a franken-app there's nothing stopping you, just ignore get_absolute_url, write your own URL-space and write your own templates; this should work 99% of the time (objects doing redirects is the only thing I can think of which wouldn't.) And, if you really want that last 1% of cases to work, you could implement a MODEL_URL_OVERRIDES setting, which would map model classnames to get_absolute_url-override functions. This would play well with Hugo's prefix and it would not be a performance hit, since the work would be done when django.core.meta does its magic. You lost me here. What are reusable views? Generic views? I assume that an app consists of a model (which is usually backed by a db in Django), views, url mapping, optional template tags, optional templates. Imagine that you packed your app (as an egg) and gave it to me. I want to be able to use it on my web site.
In order to do that you have to provide a reasonable customization. IMHO, out of these 5, the most serious is #3. get_absolute_url is used in a number of places inside Django, most notably in the admin. If it is used in 3rd-party views/templates/template tags it renders #3 impossible. Prefixes may be a solution but they look more like a band-aid: the proposed code had them hardcoded, which prevents reuse of the application inside one web site. Of course it prevents rewriting the url schema. Given that templates may create a link to a view, they should be aware of the url mapping somehow. It means that it should be reachable through Context. Right now it is done by passing an object (which is needed anyway), which defines an infamous get_absolute_url or something like that (e.g., get_relative_url). This is the only reason to have get_absolute_url - extreme simplicity. I don't see other reasons. It makes the whole system very rigid and prompts hardcoding of urls in more than one place. It prevents customization of urls. The only solution I see is to add extra parameters to a view, which can be passed on Context. These parameters should be optional. If parameters are not given, the view, and the related chain, use the defaults. There are many ways to do it. Two possible ways are: One possible example of such a parameter can be something like that (pseudo-code): { # stuff 'get_object_url': lambda info: '%s/%s/%s/%s/' % (info['site'], info['app_name'], info['module_name'], info['slug']), # more stuff } Obviously views/templates/template tags should be aware of the existence and purpose of such parameters. E.g., if a view uses 2 different objects, it should understand more than one parameter.
This is one more example for some fictious data_base-like set of views to demonstrate flexibility (pseudo-code): info_dict = { # some stuff may go here # view all 4 parameters 'get_year_url': lambda info: '%s/' % info['year'], 'get_month_url': lambda info: '%s%s/' % (info['month'], info['month']), 'get_day_url': lambda info: '%s%s%s/' % (info['day'], info['month'], info['year']), 'get_object_url': lambda info: 'docs/%s/' % info['slug'], } With these parameters all templates used by these views, can generate links between each other, and to some custom detailed view of the object. A little bit more extravaganzas (not sure if it is actually needed): info = { 'year': '\d{4}', 'month': '[a-z]{3}', 'day': '\d{1,2}', 'slug': '\w+', } This dictionary applied to get_* parameters in my previous example would produce urls I want to match. I can rewrite my example like this (pseudo-code): info_dict = { 'app_label': 'blog', 'module_name': 'documents', 'date_field': 'pub_date', 'slug_field': 'slug', # defaults 'year': '\d{4}', 'month': '[a-z]{3}', 'day': '\d{1,2}', 'slug': '\w+', # 1 parameter 'get_object_url': lambda info: 'docs/%s/' % info['slug'], } urlpatterns = patterns('django.views.fictional.date_based', (r'^/?$', 'archive_index', info_dict), (object_url(info_dict), 'object_detail', info_dict), (day_url(info_dict), 'archive_day', info_dict), (month_url(info_dict), 'archive_month', info_dict), (year_url(info_dict), 'archive_year', info_dict), ) I hope you get the idea. I think that something like this is more flexible than prefixes, more generic, and not overly complex. Obviously it doesn't come free --- it requires some cooperation from view/template/template tag writers. It may require a modification of Django to ease the pain. 
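To make the pseudo-code above concrete, here is a small, runnable Python sketch of the same idea. The info dict and the get_*_url callables are hypothetical illustrations of the proposal, not a real Django API:

```python
# Sketch only: a view (or template tag) looks up a URL-builder callable
# from its configuration instead of asking the object for its URL.
info_dict = {
    'get_year_url': lambda info: '%s/' % info['year'],
    'get_month_url': lambda info: '%s%s/' % (info['month'], info['year']),
    'get_object_url': lambda info: 'docs/%s/' % info['slug'],
}

def build_url(name, **values):
    # Fall back to None when no builder was configured; a real
    # implementation would use some default URL scheme instead.
    builder = info_dict.get(name)
    return builder(values) if builder else None

print(build_url('get_object_url', slug='hello-world'))  # docs/hello-world/
print(build_url('get_year_url', year=2006))             # 2006/
```

Swapping the lambdas in info_dict changes every generated link without touching the view code, which is the customization point the proposal is after.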
One possible place to extend Django without major breakage is to add one extra parameter to the {{include}} function: urlpatterns = patterns('', # urls (r'^my_app/', include('my_app.urls')), (r'^super_app/', include('super_app.urls', info_dict_from_above_which_redefines_urls)), # more urls ) ...or something like that. I hope people, who waited to see some code, can put on the critic's cap and swamp me in constructive criticism. :) Eugene, I've got to say I found your code a bit confusing, but I think I understood what you were driving at in the end. I still don't get what you couldn't achieve with ABSOLUTE_URL_OVERRIDES. From ABSOLUTE_URL_OVERRIDES Default: {} (Empty dictionary) A dictionary mapping "app_label.module_name" strings to functions that take a model object and return its URL. This is a way of overriding get_absolute_url() methods on a per-installation basis. Example: ABSOLUTE_URL_OVERRIDES = { 'blogs.blogs': lambda o: "/blogs/%s/" % o.slug, 'news.stories': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug), } Is it that you end up defining the url in two places? ABSOLUTE_URL_OVERRIDES is good, but shouldn't these also be set at the app level, and just be removed from the model class altogether? (Why does the model need to know the URL?) Sorry for the confusing code --- it was pseudo-code with lots of faults. I provided it to continue the discussion, because some people think better in code. I don't think that this is a perfect solution, but I hope we will come up with the perfect solution after discussion. :) I think that defining the url in two (or more) places is a minor problem (but it counts!). :) The bigger problem is that the ABSOLUTE_URL_OVERRIDES/get_absolute_url() family provides us with a 1-to-1 mapping, which has a problem spotted by Hugo: "...because the urlpatterns might have several different ways to reach an object..."
It's quite possible to have several views that present an object in different contexts. Please read his 2nd post for a code example, where he branches in get_absolute_url() to alleviate the problem. IMHO it is not up to the object to decide which representation should be used in a particular context. I may use the same application twice (or more) in a single web site. I have two sets of url patterns, parameters, and templates, which change the look and feel of a single app (e.g., a category app). The problem is it takes a lot of boilerplate code to propagate everything down to templates and template tags, which goes contrary to DRY. Additionally it is non-portable --- other writers would implement it differently, when confronted by the problem. IMHO it makes sense to do it in a less hackish, more organized way by providing appropriate helpers. The lack of url customization is not a big deal for a single web site, where all the code is yours. Hugo's solution is good for it --- it does go against the principle of separation of concerns (already violated by Django with get_absolute_url()), but it is a simple and practical solution, which Just Works (tm). Anyway the lack of url customization is a problem for: I can tell you how these two problems are solved in The Real Life (tm) now: Does it sound familiar? :-( There must be a better way to do it! see also #682 and #66 This is how I'm solving the problem: 1. Adding two parameters to $PROJECT_DIR/settings.py PRJ_ROOT_URL = '' APP_ROOT_URLS = { 'polls': 'polls', 'tasks': 'tasks' } 2.
Adding $PROJECT_DIR/context_processors/processors.py import urlparse from django.conf.settings import PRJ_ROOT_URL, APP_ROOT_URLS def aru(request): for u in APP_ROOT_URLS.itervalues(): url, urlpath = get_app_urls( u ) if request.path.startswith(urlpath): return { 'aru': url } else: return { 'aru': PRJ_ROOT_URL } def get_app_urls( tail ): url = urlparse.urljoin( PRJ_ROOT_URL, tail ) if url.endswith('/'): url = url[:-1] urlpath = urlparse.urlsplit(url)[2] return ( url, urlpath ) 3. Add the custom processor to $PROJECT_DIR/settings.py TEMPLATE_CONTEXT_PROCESSORS = ( "django.core.context_processors.auth", "django.core.context_processors.debug", "django.core.context_processors.i18n", "myproject.context_processors.processors.aru" ) 4. In custom views, call return render_to_response() as such, import DjangoContext? of course: return render_to_response('polls/polls_detail', { 'object': p, 'error_message': "You didn't make a choice." }, context_instance = DjangoContext(request) ) 5. Generic views automatically get it for free 6. In templates, just use {{ aru }} when you need to access the application URL 7. Anywhere else (most likely URLConf or views, for redirects and such), use: from myproject.context_processors.processors import get_app_urls from django.conf.settings import APP_ROOT_URLS myurl = get_app_urls( APP_ROOT_URL['tasks'] )[1] or myurl = get_app_urls( APP_ROOT_URL['tasks'] )[0] depending on what you need. 8. By having $PROJECT_DIR/URLConf (wherever yours may be) handle only app level urlpatterns: from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^polls/', include('myproject.apps.polls.settings.urls.main')), (r'^tasks/', include('myproject.apps.tasks.settings.urls.main')), (r'^admin/', include('django.contrib.admin.urls.admin')) ) 9. 
Have $PROJECT_DIR/$APP_DIR/URLConf handle the relative urlpatterns: from django.conf.urls.defaults import * info_dict = { 'app_label': 'polls', 'module_name': 'polls' } generics = patterns('', (r'^$', 'django.views.generic.list_detail.object_list', info_dict), (r'^(?P<object_id>\d+)/$', 'django.views.generic.list_detail.object_detail', info_dict), (r'^(?P<object_id>\d+)/result/$', 'django.views.generic.list_detail.object_detail', dict(info_dict, template_name='polls/polls_result')) ) poll_urls = patterns('myproject.apps.polls.views', (r'^(?P<poll_id>\d+)/vote/$', 'vote.vote'), ) urlpatterns = generics + poll_urls This is fairly similar to Hugo's idea of a registry, I think. Wouldn't the patch for #66 address this issue without having to change anything?? jbowtie: No, that only makes the app root available from the request object, which model instances and methods (such as get_absolute_url) don't have access to, so it only fixes a small subset of the problem. Closing this now that we have django.core.urlresolvers.reverse() and django.db.models.permalink. Documentation is forthcoming.
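The closing comment points at django.core.urlresolvers.reverse(). As a rough conceptual sketch (plain Python with a hypothetical pattern registry — not Django's actual implementation), the idea is that URL knowledge lives in one registry, and models and views ask it to build URLs instead of hard-coding them:

```python
# Conceptual sketch of URL reversing: the URL structure is declared once,
# and callers look URLs up by view name. The registry below is invented
# for illustration; Django's real reverse() resolves against urlpatterns.
URLPATTERNS = {
    'polls_detail': 'polls/%(object_id)s/',
    'polls_vote':   'polls/%(poll_id)s/vote/',
}

def reverse(view_name, prefix='/', **kwargs):
    # The project-level prefix is applied outside the app's own urlspace,
    # which is the "prefix from outside" problem discussed in the thread.
    return prefix + URLPATTERNS[view_name] % kwargs

print(reverse('polls_detail', object_id=42))              # /polls/42/
print(reverse('polls_vote', prefix='/site/', poll_id=7))  # /site/polls/7/vote/
```

Because the app's patterns and the project's prefix are combined at lookup time, moving the app under a different prefix needs no change to the app's code — the property both sides of the thread were arguing for.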
http://code.djangoproject.com/ticket/672
weight scale. In this article, all these aspects are briefly explained along with the design and programming. This project helps us learn the mechanism of a digital weight scale and how to design an instrument while keeping the related complexities in mind. Introduction to Digital Weight Scale An instrument is a device that is used for making measurements of a physical quantity (distance, pressure, voltage, current, etc.). Alternatively, we can say that an instrument is a device that converts one form of parameter that cannot be measured easily to another form that can. All electrical instruments convert the input signal to an electrical signal. For example: an electrical thermometer converts temperature to a voltage which is calibrated to give exact readings. The most important part of an instrument is the sensing part, which senses the input signal and gives appropriate voltages accordingly. The instrument that we will discuss in this report uses a 10kg load cell. The components that are required for making a digital weight scale are: - Load cell (10kg) - HX711 module - Arduino UNO - LCD (16x2) - Switches - Connecting wires - Resistors - Veroboard - Battery and regulator (5V) - Wooden base - Screws, nuts and bolts - Acrylic sheet (for the body) - Solder Details of the Main Components of the Digital Weight Scale: Now we will give a brief explanation of the main components used in making the weight scale: Load Cell: A load cell is a transducer that is used to create an electrical signal whose magnitude is directly proportional to the force applied. The load cell that we will use in this project is a strain-gauge 10kg bar-type load cell, which is shown below. Strain-gauge load cells are the most common in industry. These load cells are particularly stiff, have very good resonance values, and tend to have a long life.
A load cell usually consists of four strain gauges in a Wheatstone bridge configuration, which is shown below: The four wires coming out from the Wheatstone bridge on the load cell are usually: - Excitation+ (E+) or VCC is red - Excitation- (E-) or ground is black - Output+ (O+) is white - Output- (O-) is green or blue The maximum excitation voltage applied to the load cell is 15V, and the output signal is in millivolts, so an amplifier or amplifying module is a must to read the values. While using the load cell, the assembly will be as shown below: HX711 Module: As the load cell gives its output in microvolts and the Arduino is not capable of reading these values directly, amplification is required. The best solution is to use the HX711, a 24-bit analog-to-digital converter with a built-in amplifier that gives the best output from a load cell. The HX711 module has 10 pins, an image of which is shown below: The excitation pins provide input to the Wheatstone bridge, and there are two channels in this module, namely Channel A and Channel B. Channel A gives a gain of 128 whereas Channel B gives a gain of 32. To excite the module, we apply 3-5V to the VCC pin. The DT (Serial Data Output) and SCK (Serial Clock) pins are the output pins of the module and deliver values in the form of 0's and 1's. The working of these pins requires a full understanding of digital logic design, so we will not discuss it here. Arduino It is programmed through its own software, namely the Arduino IDE, when it is connected to a PC by a data cable. Most of the modules that are available in electronics shops are manufactured in such a way that they can be used with an Arduino easily. The HX711 can be used with an Arduino and programmed easily, but first we must add the HX711 library to the Arduino software. Calculation of the Calibration Factor of the Load Cell: To use any measuring instrument, we must calibrate it for exact readings.
Similarly, in our 10kg weight scale project, we are using a 10kg load cell and the HX711 module with an Arduino, and when we observe the readings of the HX711 serially on a PC, these raw readings don't make any sense on their own, which indicates the need for calibration. Once the calibration factor is known, it is easy for the Arduino to give exact readings of the weight placed on the scale. To complete the calibration, the value of the calibration factor is passed as the argument to the set_scale() function of the HX711 library. The connections of the components for finding the calibration factor are shown below: How to Calibrate Your Digital Weight Scale with Arduino? - Call set_scale() with no parameter. - Call tare() with no parameter. - Place a known weight on the scale and call get_units(10). - Divide the result of step 3 by your known weight. This gives approximately the parameter you need to pass to set_scale(). - Adjust the parameter from step 4 until you get an accurate reading. After connecting the components according to the figure above, we program the Arduino UNO with the sketch below, which follows the five instructions above: #include "HX711.h" #define DOUT 3 #define CLK 2 HX711 scale(DOUT, CLK); void setup() { Serial.begin(9600); scale.set_scale(); scale.tare(); } void loop() { Serial.println(scale.get_units()); } To find the calibration factor we placed a known mass of 0.8kg (800 grams) on the load cell. After placing the known weight, the serial output changed, as shown below: By taking the average of these readings, we calculated -163102 as the raw value for the 0.8kg weight. Now, to calculate the calibration factor, we divide this raw value by the known weight, expressed in the units for which we are designing the scale. The calculation -163102/0.8 gives about -203878 as our calibration factor. This calibration factor will give readings near 0.8kg when that weight is placed on the load cell, but not exactly 0.8kg.
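The calibration arithmetic above can be checked with a short script. The raw value is the averaged HX711 reading quoted in the article; your own load cell will produce different numbers:

```python
# Calibration-factor arithmetic from the text: average the raw HX711
# readings taken with a known mass on the scale, then divide by that
# mass (in the unit the scale is designed for).
raw_average = -163102       # averaged raw reading reported above
known_weight_kg = 0.8       # the known 800 g (0.8 kg) test mass

calibration_factor = raw_average / known_weight_kg
print(calibration_factor)   # -203877.5, which the article rounds to -203878

# Once the factor is set, the scale converts raw readings back to kg:
def to_kg(raw_reading, factor=calibration_factor):
    return raw_reading / factor

print(to_kg(-163102))       # approximately 0.8 kg, as expected
```

This is exactly what set_scale() does internally: get_units() divides the tared raw reading by the stored factor.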
To find the exact calibration factor, we will write a program that lets us increase or decrease the calibration factor while observing the effect of the change on the weight readings. Designing the Arduino-Based Weight Scale: Most measuring instruments have switches on their bodies for different purposes. If we look at the digital weight scales used in general stores, they have switches and numeric keys on them. To make our weight scale simpler, we will use only push buttons and ON/OFF switches. We decided that one ON/OFF switch will power the weight scale and one switch (CAL) will be used when the user wants to calibrate the weight scale. One switch (x100) will be used when the user wants to increase or decrease the calibration factor in steps of 100; by default the weight scale changes the calibration factor in steps of 10. Two push buttons are used: the INC button increases the calibration factor, whereas DEC decreases it, in steps of 10 or 100 as required. The complete circuit diagram of the weight scale is shown below: Code: You can download the basic sketch and working code (for learning purposes) of the Arduino-based digital weight scale in the attached document. You can extend this code according to your own convenience.
http://engineerexperiences.com/arduino-based-digital-weight-scale-with-load-cell.html
Decimal part of a number in Python I have the following program from fractions import Fraction def F_inf(a,b): x1=a.numerator/a.denominator x2=b.numerator/b.denominator if x1<x2: print "a<b" elif x1>x2: print "a>b" else: print "a=b" a=Fraction(10,4) b=Fraction(10,4) F_inf(a, b) When I execute it, x1 receives just the integer value of the fraction; for example, if I have to compute 2/4, x1 is equal to 0, not 0.5. What should I do? Thanks 1 answer - answered 2018-01-11 21:05 Holloway It sounds like you're using Python 2. The best solution would be to switch to Python 3 (not just because of the division but because "Python 2.x is legacy, Python 3.x is the present and future of the language"). Other than that you have a couple of choices. from __future__ import division # include ^ as the first line in your file to use float division by default or a = 1 b = 2 c = a / (1.0*b) # multiplying by 1.0 forces the right side of the division to be a float #c == 0.5 here
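As an aside, Fraction objects support comparison operators directly, so the float conversion in F_inf can be dropped entirely and the Python 2 division pitfall never arises. A sketch of that simplification (the lowercase function name is mine, not from the question):

```python
# Fractions compare exactly, with no division and no float rounding,
# so integer vs. float division never comes into play.
from fractions import Fraction

def f_inf(a, b):
    if a < b:
        return "a<b"
    elif a > b:
        return "a>b"
    else:
        return "a=b"

print(f_inf(Fraction(10, 4), Fraction(10, 4)))  # a=b
print(f_inf(Fraction(2, 4), Fraction(3, 4)))    # a<b
```

This also avoids the subtle precision loss you would get from converting very large numerators and denominators to floats.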
http://quabr.com/48215731/decimal-part-of-a-number-in-python
Time to learn C now. Managing to control LEDs via GPIO is no problem using separate pins, but I wanted to learn how to use shift registers. I found an Instructable on using them to drive LEDs which I thought was perfect, so I bought a bundle of the 74HC164N shift registers the tutorial uses so there were fewer things to go wrong. However, the Python code didn't work for some reason; everything was wired exactly as it is online but not working. I then saw that wiringPi has a Shift library, so I tried to use that to get the LEDs to flash just once, but the code I wrote (which compiles fine) just doesn't work at all. Not a hint of working. I read that the unused input must be set high, so it is, and all the rest are set as outputs. Can someone please explain how to get these things working, as I'm clearly missing something. I tried copying the Python script and rewriting it in C myself, and had a few flickers, but nothing like the Instructable shows. Here is the code (the bugs found in the original version are noted in the comments). - Code: Select all #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <wiringPi.h> #include <wiringShift.h> #include <stdint.h> int inputA = 15; int inputB = 16; int clock = 9; int clear = 7; /* was missing a return type */ void setupPins(void){ pinMode(inputA, OUTPUT); pinMode(inputB, OUTPUT); pinMode(clock, OUTPUT); pinMode(clear, OUTPUT); digitalWrite(inputB, HIGH); /* /CLR on the 74HC164 is active low: pulse it LOW to reset, then hold it HIGH for normal operation (the original code ended with it LOW, holding the register in reset) */ digitalWrite(clear, LOW); delay(10); digitalWrite(clear, HIGH); } int main (void){ if (wiringPiSetup () == -1){ exit (1); } setupPins(); /* was "while (running = 1)" - an assignment, not a comparison */ uint8_t i = 1; while (1){ shiftOut(inputA, clock, LSBFIRST, i); delay(1500); i++; /* uint8_t wraps at 255, so the pattern repeats */ } return 0; }
https://www.raspberrypi.org/forums/viewtopic.php?t=27190&p=243479
BlankRenderEngine Since: BlackBerry 10.0.0 #include <bb/cascades/maps/BlankRenderEngine> To link against this class, add the following line to your .pro file: LIBS += -lbbcascadesmaps An empty render engine, which will be used when no other engine can be found. Overview Inheritance Public Functions Index Signals Index Only has inherited signals Public Functions Basic constructor. BlackBerry 10.0.0 virtual Destructor. virtualbb::platform::geo::BoundingBox Calculates a new bounding box based on the view properties provided. A bounding box that matches the limits of the view. BlackBerry 10.0.0 virtualbb::cascades::maps::RenderEngineInfo Gets the characteristics of this engine. The information concerning the characteristics of this engine. BlackBerry 10.0.0 virtual int Indicates the priority for which this engine should be used when two engines have coverage over the same area. Thus, if render engine A and B both have coverage over the current viewport, the render engine with the higher priority will be used. 5: Reserved for Application-provided plug-in 4: 3D System Render Engine 3: 2D System Render Engine 0: Blank Render Engine (no memory consumption) The priority for this render engine. A higher number equals a higher priority. Numbers range from 0 to 5. 5 indicates an RE should absolutely be used, while 3 indicates a "normal" priority. BlackBerry 10.0.0 virtual bool Indicates whether this render engine has code coverage for the given region. Note: Important factors to be considered are center, altitude and bounding box. true if this engine has map coverage for the entire region, false if partial coverage or no coverage. BlackBerry 10.0.0 virtualQString Gets the element ID of the interactable element at the given window coordinates. The ID of the element available, or an empty string if no element exists. BlackBerry 10.0.0 virtualbb::ImageData Converts the current map into an image. The viewport's content as an image. 
BlackBerry 10.0.0 virtual void Initializes the engine. BlackBerry 10.0.0 virtual bool Indicates whether base map data is included in the rendered output. Base map data includes items such as ground information, roads, and so on. true if the base map is included. BlackBerry 10.0.0 virtualbb::cascades::maps::RenderEngine * Creates a new instance of this render engine. This factory method is only used through the plug-in system. The new instance of the RenderEngine, never null. BlackBerry 10.0.0 virtual void Initiates a render cycle using the location information previously provided. Note: This operation will not be called again until it has returned. Thus, there might be a backlog of render requests. It is important that this operation return in a timely fashion so that other messages in the messaging queue can be delivered. BlackBerry 10.0.0 virtual void Sets whether or not the base map should be included in the rendered output. BlackBerry 10.0.0 virtual void Gives the render engine the mapping data container holding non-atlas data. BlackBerry 10.0.0 virtual void Changes the properties of the view. This call is not an explicit request to initiate a new render. To initiate a new render use RenderEngine::render. This is a blocking call (synchronous). See the class level comment titled "Communication Between MapView and RenderEngine". BlackBerry 10.0.0 virtual void Provides an opportunity for the engine to perform any shutdown work. BlackBerry 10.0.0 virtualbb::platform::geo::Point Converts the screen coordinates to world coordinates. This is a blocking call (synchronous). See the class level comment titled "Communication Between MapView and RenderEngine". The coordinates representing the window's coordinates. BlackBerry 10.0.0 virtualQPoint Converts a world coordinate into a screen/window coordinate. This is a blocking call. The window coordinates representing the world coordinates. The returned coordinates may not be within the current window's view. 
BlackBerry 10.0.0 bool Indicates whether there is inline traffic available for the current map view. true if traffic data is available for the current view, false if no traffic data is available. BlackBerry 10.2.0 bool Indicates whether inline traffic has been enabled within this RenderEngine. true to enable, false to disable inline traffic. BlackBerry 10.2.0 Base Constructor. BlackBerry 10.0.0 void Sets the flag indicating if inline traffic is available within the current map view. It is the RenderEngine's responsibility to call this operation whenever the availability state changes. BlackBerry 10.2.0 void Enables the inclusion of inline traffic within the map. Note: If the render engine doesn't support inline traffic, setting this value will have no effect. BlackBerry 10.2.0 Signals (Only has inherited signals) void Emitted when the "inline traffic available" state has changed. BlackBerry 10.2.0 void Emitted when the "enable inline traffic" state has changed. BlackBerry 10.2.0 void Indicates to observers that a render cycle has been completed. BlackBerry 10.0.0
http://developer.blackberry.com/native/reference/cascades/bb__cascades__maps__blankrenderengine.html
Initializing static cv::Mat with cv::Mat::zeros causes segmentation fault Hi everybody, I have a problem initializing a static cv::Mat with the static method cv::Mat::zeros() using the GCC compiler version 4.7.1 (Opensuse 12.2). It compiles fine, but causes a segmentation fault during runtime. Here is a minimal code example: #include <iostream> #include <opencv2/core/core.hpp> static cv::Mat staticMatOne(3,3,CV_32FC1); static cv::Mat staticMatTwo(cv::Mat::zeros(3,3,CV_32FC1)); // This line causes the segmentation fault int main() { std::cout << "Hello World" << std::endl; return 0; } It is compiled with gcc -o OpencvTest OpencvTest.cpp -I{_PATH_TO_OPENCV_}/include -lopencv_core -lstdc++ -lm -lpthread -lrt -lz -L{_PATH_TO_OPENCV_}/lib where {_PATH_TO_OPENCV_} points to the OpenCV installation directory. I tried both the newest version of OpenCV from the git repository and the released versions 2.4.3 and 2.3.1. All versions have been linked as static libraries. The output of gcc --version is gcc (SUSE Linux) 4.7.1 20120723 [gcc-4_7-branch revision 189773] Copyright (C) 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. The output of gdb is Program received signal SIGSEGV, Segmentation fault.
0x0804b616 in cv::MatExpr::operator cv::Mat (this=0xbffff330) at {_PATH_TO_OPENCV_}/include/opencv2/core/mat.hpp:1222 1222 op->assign(*this, m); Missing separate debuginfos, use: zypper install glibc-debuginfo-2.15-22.6.4.i686 libgcc47-debuginfo-4.7.1_20120723-1.1.1.i586 libstdc++47-debuginfo-4.7.1_20120723-1.1.1.i586 zlib-debuginfo-1.2.7-2.1.2.i586 (gdb) bt #0 0x0804b616 in cv::MatExpr::operator cv::Mat (this=0xbffff330) at {_PATH_TO_OPENCV_}/include/opencv2/core/mat.hpp:1222 #1 0x0804b2f9 in __static_initialization_and_destruction_0 (__initialize_p=1, __priority=65535) at OpencvTest.cpp:7 #2 0x0804b361 in _GLOBAL__sub_I_main () at OpencvTest.cpp:13 #3 0x08202282 in __libc_csu_init (argc=1, argv=0xbffff504, envp=0xbffff50c) at elf-init.c:125 #4 0xb7cbf36a in __libc_start_main () from /lib/libc.so.6 #5 0x0804b131 in _start () at ../sysdeps/i386/elf/start.S:119 It compiles and runs fine using gcc version 4.3.2 (Opensuse 11.1) and gcc version 4.6.2 (Opensuse 12.1) It also runs without a segmentation fault if the line static cv::Mat staticMatTwo(cv::Mat::zeros(3,3,CV_32FC1)); in the example is deleted. The problem is that the static instance (modules/core/src/matop.cpp line 205) static MatOp_Initializer g_MatOp_Initializer; gets not constructed before the construction of the static instance of the matrix. Does anybody has an idea of how to solve this initialization problem. It would be hard for us not to use a static instance of a cv::Mat in this case. I tried to reproduce it but I have gcc 4.5.2 and i don't have any issue with that code. I don't know if it is just a version number problem. Anyway, have you tried to initialize the matrix like static cv::Mat staticMatTwo = cv::Mat::zeros(3,3,CV_32FC1); S. I tried the version you proposed. But it leads to the same behaviour => segmentation fault during execution. The Problem only occurs if I compile with gcc 4.7.1 with gcc 4.6.2 everything works fine. 
The problem can also be fixed if you link against dynamic OpenCV libraries. It seems that the initialization of the matrix initializer is finished before the initialization of staticMatTwo in this case. J.
https://answers.opencv.org/question/4072/initializing-static-cvmat-with-cvmatzeros-causes-segmentation-fault/?answer=29952
Day 6 - How to add inline svg icons to a vue project

Tags: life, UserBit, code, design, entrepreneurship

TLDR; jump to attempt 3. I ended up doing it without using any external plugins.

Why inline svg?

I wanted to create some icons for UserBit in Sketch. In the past though I've had to update the icons' color/size multiple times as I iterated over my products. So I decided to create SVG icons this time so I could tweak these properties in code itself. Upon some research I realized that for you to be able to control svg attributes through code, you need to have your SVG code inline in the template.

Attempt 1 - vue-svg-inline-loader

Installed it:

    > npm i vue-svg-inline-loader --save-dev

After adding the respective configuration in vue.config.js, the render function started throwing a bunch of errors. I did not want to spend time dealing with this… so on to the next attempt.

Attempt 2 - vue-svg-loader

So the next attempt was to try vue-svg-loader, configuring it in vue.config.js. Basically the configuration says: when webpack encounters a .svg file, use vue-svg-loader to load it as opposed to the regular vue-loader. While this config seems to have worked, this blanket application of the rule meant all my current svg font icons failed to render - I'd have to use them the svg-loader way. This would turn out to be more work than it was worth.

Attempt 3 - create a component for each icon

Ok, so after dealing with these plugins, I took a step back and thought about what I really wanted to do. I simply want my icons to render inline and for me to control the color attribute of these icons (and perhaps the size). I took a look at the svg file itself:

    <?xml version="1.0" encoding="UTF-8"?>
    <svg width="44px" height="44px" viewBox="0 0 44 44" version="1.1"
         xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
      <g>
        <!-- actual svg code -->
      </g>
    </svg>

It occurred to me that if I could just paste this code inline in my icon component, I'd be able to render it inline. I also noticed that the svg at several places referred to the color I was using.
So if the svg was part of the template, I could replace the hex with a prop (color):

before

    <circle id="Oval-7" stroke="#8898AA" cx="7" cy="22" r="2"></circle>

after

    <circle id="Oval-7" :stroke="color" cx="7" cy="22" r="2"></circle>

and in props

    props: [
      'color'
    ]

So the single file component looks something like:

    <template>
      <span>
        <svg width="44px" height="44px" viewBox="0 0 44 44" version="1.1"
             xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
          <title>My Icon</title>
          <desc>Created with Sketch.</desc>
          <defs></defs>
          <g id="icons" stroke="none" stroke-width="1">
            <g id="Artboard" transform="translate(-90.000000, -64.000000)">
              <circle id="Oval-7" :stroke="color" cx="7" cy="22" r="2"></circle>
            </g>
          </g>
        </svg>
      </span>
    </template>

    <script>
    export default {
      name: 'ub-circle-icon',
      props: [
        'color',
      ],
    };
    </script>

    <style lang="scss" scoped>
    </style>

Usage

Import the icon file:

    import CircleIcon from './icons/circle-icon.vue';

add it to components

    components: {
      CircleIcon,
    }

then use it on the template

    <circle-icon color="#8898AA"></circle-icon>
https://blog.crayonbits.com/2018/10/30/how-to-add-svg/
The great thing with the Freedom FRDM-KL25Z board is its compatibility with Arduino Shields: a great set of boards available at very reasonable prices. I had my hands on the Adafruit Data Logger shield, and now it was time to use the original Arduino Motor Shield R3. I already had an Arexx Robo Chassis available: a simple platform with two DC motors. So I added the FRDM-KL25Z with the Arduino Motor Shield R3 on top of it:

Power Supply

The battery pack (4 AAA NiMH rechargeable batteries) provides 5V to the motor shield. That 5V powers the FRDM-KL25Z through a trace on the Motor Shield. There is an outstanding issue: the Motor Shield expects that the CPU board provides both 3.3V and 5V. 3.3V is provided, but the FRDM-KL25Z only provides 5V if either the KL25Z or the K20 is USB powered. So for now I need to power the system as well with a USB cable until I find a different solution.

Current Sensing

The shield provides two analog pins for motor current sensing. According to the documentation, 3.3V should be full-scale for 2A. However, I'm measuring around 60 mV even with no current. It is not clear to me from the shield schematics if the analog signal depends on the AREF signal or not: the problem could be because the Freedom board does *not* route that signal to the header without hardware modification. So I'm not sure where I am with this, as I'm measuring the wrong values.

Console/Shell Interface

To manually control the motor, I have added a simple shell interface. That way I can manually control the motors and get status information.

Processor Expert

The CodeWarrior for MCU10.3 project uses Processor Expert components to abstract from the hardware, and runs FreeRTOS.

Motor Driver

The motor driver functionality is in Motor.c and Motor.h.
The interface is as below:

    /*
     * Motor.h
     *
     *  Author: Erich Styger
     */
    #ifndef MOTOR_H_
    #define MOTOR_H_

    #include "PE_Types.h"
    #include "FSSH1.h"

    typedef enum {
      MOT_DIR_FORWARD,  /*!< Motor forward direction */
      MOT_DIR_BACKWARD  /*!< Motor backward direction */
    } MOT_Direction;

    typedef int8_t MOT_SpeedPercent; /* -100%...+100%, where negative is backward */

    typedef struct MOT_MotorDevice_ {
      bool brake;                        /* if brake is enabled or not */
      MOT_SpeedPercent currSpeedPercent; /* our current speed in %, negative percent means backward */
      uint16_t currPWMvalue;             /* current PWM value used */
      LDD_TError (*SetRatio16)(LDD_TDeviceData*, uint16_t);
      LDD_TDeviceData *PWMdeviceData;    /* LDD device handle for PWM */
      void (*DirPutVal)(LDD_TDeviceData*, bool); /* function to set direction bit */
      LDD_TDeviceData *DIRdeviceData;    /* LDD device handle for direction */
      void (*BrakePutVal)(LDD_TDeviceData*, bool); /* function to enable/disable brake */
      LDD_TDeviceData *BRAKEdeviceData;  /* LDD device handle for brake pin */
      uint16_t currentValue;             /* current AD current sensor value */
      uint16_t offset;                   /* current AD sensor offset value */
      LDD_TDeviceData *SNSdeviceData;    /* LDD current AD device handle */
    } MOT_MotorDevice;

    /*!
     * \brief Sets the PWM value for the motor.
     * \param[in] motor Motor handle
     * \param[in] val New PWM value.
     */
    void MOT_SetVal(MOT_MotorDevice *motor, uint16_t val);

    /*!
     * \brief Return the current PWM value of the motor.
     * \param[in] motor Motor handle
     * \return Current PWM value.
     */
    uint16_t MOT_GetVal(MOT_MotorDevice *motor);

    /*!
     * \brief Change the direction of the motor
     * \param[in] motor Motor handle
     * \param[in] dir Direction to be used
     */
    void MOT_SetDirection(MOT_MotorDevice *motor, MOT_Direction dir);

    /*!
     * \brief Returns the direction of the motor
     * \param[in] motor Motor handle
     * \return Current direction of the motor
     */
    MOT_Direction MOT_GetDirection(MOT_MotorDevice *motor);

    /*!
     * \brief Shell command line parser.
     * \param[in] cmd Pointer to command string
     * \param[out] handled If command is handled by the parser
     * \param[in] io Std I/O handler of shell
     */
    uint8_t MOT_ParseCommand(const unsigned char *cmd, bool *handled, const FSSH1_StdIOType *io);

    /*!
     * \brief Initialization function.
     */
    void MOT_Init(void);

    #endif /* MOTOR_H_ */

Summary

It was very easy to use the Motor Shield with the help of CodeWarrior for MCU10.3 and Processor Expert. The basic functionality, with the exception of current sensing, works, and with the shell interface it is easy to explore and add further functionality. I still have an ultrasonic sensor to integrate :-). The CodeWarrior project and sources are available from this link.

Happy Motoring :-)

Comments:

Reblogged this on Embedded Stories and commented: This looks cool, and I have that shield…I shall try it soon!

Erich, first let me say I am a big fan of yours! I love the content you post on mcuoneclipse.com. In the context of Arduino-compatible shields, one to watch out for is Avnet's new "Wi-Go" (Wi-Fi on the Go…) wireless-datalogger "shield" designed specifically as a companion board to Freescale's FRDM-KL25Z Freedom board. It has an 800mAh Li-Poly battery, Wi-Fi, multiple sensors plus low-power SPI serial Flash memory. Details available here: CE certification recently completed and a number of boards sent over for the dozen or so Kinetis-L workshops being hosted by Silica there in Europe (based on this Wi-Go kit). Their workshops started this week, more details on that available at:

Regards, Peter Fenn | Global Technical Marketing Manager, Microcontrollers (O) 949.789.4308 | (M) 949.922.3161 Avnet Electronics Marketing | 430 Exchange, Ste. 100 | Irvine, CA 92602

Hi Peter, thanks! That "Wi-Go" module really looks interesting. I checked the web site, but it looks like that board is only available in the Americas? How can I get one in Europe? I would love to get my hands on it. Best regards, Erich

Not available in Americas either. Says zero stock.
Pingback: The Freedom Robot | MCU on Eclipse

Very interested in this project. I have a similar motor shield from Pololu that I want to try. I have imported the project source from the link. After resolving some issues with missing PE components and different versions I was able to get the PE code to generate. Now when I build the project it is hanging on 'Undefined reference to __copy_rom_sections_to_ram' in __arm_start.c line 231. Additionally, there are errors on __init_ccp, __init_registers and __init_user, all in __arm_start.c. Is this a project property setting or a CodeWarrior setting? How can I resolve the errors? Any help will be appreciated. Thanks!

Hi Mike, that sounds more like a project (library?) setting problem. I have uploaded my MCU10.3 project to git (), in case this makes a difference. Are you using the 10.3 beta or final? I'm using 'final', and I know that there might be a difference in the EWL library settings.

Pingback: Pololu Line Following Robot with Freedom Board | MCU on Eclipse

Hi Erich, have you tried using the Embedded Software and Motor Control Libraries () on the KL25Z Freedom Board? There are some issues when importing the library into CodeWarrior v10.3.

Issue 1: I create a New Bareboard Project for Kinetis KL25 (without PE), drag & drop all the necessary files into "Project_Headers" (linking), and add all the PATHs into "Includes" and "Libraries" under project properties (every step provided in the User Guide). Two problems occur when I try to build the project.

Problem 1:

    #if defined(__CWCC__)

Supposedly __CWCC__ should be defined automatically by the compiler. Therefore, in order to mute the error temporarily, I simply put everything outside the #if statement. Then the second problem occurs.

Problem 2: intrinsic_cw.h - error - expected '(' before '{' token. The error occurs everywhere in the header file where the line contains "asm{ }".

Issue 2: So I try to dig deeper into the problem by revisiting the User Guide again. This time I create a New Project for the Kinetis K40 family, which is shown exactly in the UG. Then I realize that I am able to choose the "ARM Build Tools" in the wizard (GCC or Freescale), which was not available back then for the Kinetis KL25. If I choose GCC the same problems occur, and hence I select Freescale. Next, I just follow all the steps (as described earlier) and compile the project. Surprisingly, no errors are encountered this time. (The problem still occurs with the #if defined(__CWCC__), so it is bypassed temporarily.) Please advise. Thanks.

Hi Chiasyan, no, I have not used the library. But what I can tell you is that __CWCC__ identifies the proprietary Freescale ARM compiler (and not the GNU gcc, which is used for the Kinetis-L). So to me the problems you report indicate that this library has not been written/ported for gcc.

Hi Erich, if I wish to use the math library in GCC, should I just add "math.h"? Thank you.

Yes, that should do it.

Pingback: 5V Generation from V_IN on the Freedom Board RevE | MCU on Eclipse
http://mcuoneclipse.com/2012/12/14/frdm-kl25z-with-the-arduino-motor-shield/
D programs segfault when vararg functions are used

Bug Description

Binary package hint: gdc-4.2

Programs that are compiled with the Ubuntu build of GDC give a segmentation fault whenever vararg functions, such as writef and writefln, are used. The following hello-world program, for instance, will crash:

    import std.stdio;

    void main() {
        writefln("Hello %s!", "world");
    }

This program compiles without errors, and the segfault happens when running the program. I am running Ubuntu 8.04 with GDC build 0.25-4. This issue has been discussed on the official D forum, and it seems only the Ubuntu version has this problem: http://

Have the same thing, also AMD64. It's really annoying since I can't use any of the vararg functions in the std library.

This bug is Confirmed. Any vararg function fails with a segfault on x86-64. This bug is CRITICAL as it renders GDC almost unusable on x86-64.

Bug confirmed:

    goshawk@earth:/tmp$ cat test.d
    import std.stdio;
    void main() {
        writefln(
    }
    goshawk@earth:/tmp$ gdc -v
    Using built-in specs.
    Target: x86_64-linux-gnu
    Configured with: ../src/configure -v --enable-
    Thread model: posix
    gcc version 4.1.3 20070831 (prerelease gdc 0.25, using dmd 1.021) (Ubuntu 0.25-4.
    goshawk@earth:/tmp$ gdc test.d
    goshawk@earth:/tmp$ ./a.out
    Hello world!
    goshawk@earth:/tmp$ gdc -v
    Using built-in specs.
    Target: x86_64-linux-gnu
    Configured with: ../src/configure -v --enable-
    Thread model: posix
    gcc version 4.2.3 20080225 (prerelease gdc 0.25 20071215, using dmd 1.022) (Ubuntu 0.25-4.
    goshawk@earth:/tmp$ gdc test.d
    goshawk@earth:/tmp$ ./a.out
    Segmentation fault

I can also confirm this bug. A quick workaround is to use gdc-4.1.

Any program compiled with gdc seems to have memory leaks.

    $ echo "void main(){}" > test.d ; gdc test.d
    $ valgrind ./a.out
    ==24118== Memcheck, a memory error detector.
    ==24118== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward et al.
    ==24118== Using LibVEX rev 1804, a library for dynamic binary translation.
    ==24118== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP.
    ==24118== Using valgrind-
    ==24118== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward et al.
    ==24118== For more details, rerun with: -v
    ==24118==
    ==24118== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 17 from 1)
    ==24118== malloc/free: in use at exit: 8,248 bytes in 3 blocks.
    ==24118== malloc/free: 13 allocs, 10 frees, 41,652 bytes allocated.
    ==24118== For counts of detected errors, rerun with: -v
    ==24118== searching for pointers to 3 not-freed blocks.
    ==24118== checked 117,644 bytes.
    ==24118==
    ==24118== LEAK SUMMARY:
    ==24118==    definitely lost: 8,200 bytes in 1 blocks.
    ==24118==    possibly lost: 0 bytes in 0 blocks.
    ==24118==    still reachable: 48 bytes in 2 blocks.
    ==24118==    suppressed: 0 bytes in 0 blocks.
    ==24118== Rerun with --leak-check=full to see details of leaked memory.

I get the same results with gdc-4.1.

I have looked into this bug and reduced it to this:

    int main() {
        int x = 0;
        void test() {
            void test2() {
                x = 1; // Segfault because &x = 0x0
            }
            test2();
        }
        test();
        return 0;
    }

Somehow gdc-4.2 compiles the test function to

    movq  -24(%rbp), %rax
    movq  %rax, -16(%rbp)
    movq  %rdi, -24(%rbp)
    movq  -24(%rbp), %rax

when it should be (as compiled with gdc-4.1)

    movq  %rdi, %rax
    movq  %rax, -16(%rbp)

This makes %rax correct for test() but it sends the value -24(%rbp) to test2(). I hope someone with more technical insight into gdc can fix this bug. In the meantime you can use gdc-4.1 by default by running

    $ sudo ln -s /usr/bin/gdc-4.1 /usr/bin/gdc

Could confirm on Ubuntu 8.10 with gdc-4.2

    $ LANG=C)

Yeah, confirmed

    goshawk@earth:~$)
    goshawk@earth:~$ nano test.d
    goshawk@earth:~$ gdc test.d
    goshawk@earth:~$ ./a.out
    Segmentation fault
    goshawk@earth:~$

Confirmed for PowerPC (32bit). The 3rd example of the "learning D" webpage with many writef/ln crashes. Rewriting to import C libs' printf is an ugly workaround.

Yeah, this bug is a year and a half old, occurs only on Ubuntu, but is kindly repeated in each release! Great job! Just like the gd php lib, which has been buggy since 7.04 (at least). I just can't understand what the goal is.

I forgot to mention: I am using the AMD64 version of Hardy.
https://bugs.launchpad.net/ubuntu/+source/gdc-4.2/+bug/235955
ProductController: List, Index

There were a couple of rules in place for this to happen: [ControllerAction]. System.Object supplies ToString(), GetHashCode(), GetType() and Equals(). The solution here is conceptually easy: we only look at public methods on classes that derive from our Controller class. In other words, we ignore methods on Controller and on Object, along with compiler-generated members such as get_PropertyName(), set_PropertyName() and .ctor().

IronRubyViewEngine

...

In my last post I set the stage for this post by discussing some of my personal opinions around integrating a dynamic language into a .NET application. Using a DSL written in a dynamic language, such as IronRuby, to set up configuration for a .NET application is an interesting approach to application configuration. With that in mind, I was playing around with some IronRuby interop with the CLR recently.

Ruby has this concept called Monkey Patching. You can read the definition in the Wikipedia link I provided, but in short, it is a way to modify the behavior of a class or an instance of a class at runtime without changing the source of that class or instance. Kind of like extension methods in C#, but more powerful. Let me provide a demonstration.

I want to pass a C# object instance that happens to have an indexer to a Ruby script via IronRuby. In C#, you can access an indexer property using square brackets like so:

    object value = indexer["key"];

Being able to use braces to access this property is merely syntactic sugar provided by the C# language. Under the hood, this gets compiled to IL as a method named get_Item.

So when passing this object to IronRuby, I need to do the following:

    value = $indexer.get_Item("key");

That's not soooo bad (ok, maybe it is), but we're not taking advantage of any of the power of Ruby. So what I did was monkey patch the method_missing method onto my object and used the method name as the key to the dictionary. This method allows you to handle unknown method calls on an object instance.
You can read this post for a nice brief explanation.

So this allows me now to access the indexer from within Ruby as if it were a simple property access like so:

    value = $indexer.key

The code for doing this is the following, based on the latest IronRuby code in RubyForge.

    ScriptRuntime runtime = IronRuby.CreateRuntime();
    ScriptEngine rubyengine = IronRuby.GetEngine(runtime);
    RubyExecutionContext ctx = IronRuby.GetExecutionContext(runtime);

    ctx.DefineGlobalVariable("indexer", new Indexer());

    string requires = @"require 'My.NameSpace, Version=1.0.0.0, Culture=neutral, PublicKeyToken=...'

    def $indexer.method_missing(methodname)
      $indexer.get_Item(methodname.to_s)
    end
    ";

    //pretend we got the ruby script I really want to run from somewhere else
    string rubyScript = GetRubyCode();
    string script = requires + rubyScript;

    ScriptSource source = rubyengine.CreateScriptSourceFromString(script);
    runtime.ExecuteSourceUnit(source);

What's going on here is that we instantiate the IronRuby runtime, script engine and context (I still need to learn exactly what each of these things is responsible for apart from the others). I then set a global variable and set it to an instance of a CLR object written in C#.

After that, I start constructing a string that contains the beginning of the Ruby script I want to execute. I will pre-append this beginning section with the actual script I want to run. The beginning of the Ruby script imports the .NET namespace that contains my CLR type to IronRuby (I believe that by default you don't need to import mscorlib and System). I then added a method_missing method to that CLR instance within the Ruby code via this snippet:

    def $indexer.method_missing(methodname); $indexer.get_Item(methodname.to_s) end

At that point, when I execute the rest of the Ruby script, any calls from within Ruby to this CLR object can take advantage of this new method we patched onto the instance. Pretty nifty, eh?
In my next post, I will show you the concrete instance of using this and supply source code.
http://haacked.com/Default.aspx
Extract separate objects from polygon-selection tags

On 27/05/2014 at 05:19, xxxxxxxx wrote:

Hi there, I'm looking for a script like that: is there a possibility to extract separate objects from the polygon-selection tags of an imported object which contains many of those polygon-selection tags? It used to be a multi-hierarchy object, but this structure has been collapsed into one object. Anyone have an idea? :) It would be very nice to get a script or anything else for those issues. Thx

On 28/05/2014 at 03:06, xxxxxxxx wrote:

Try this in the Script Manager (select your Polygon Object):

    import c4d

    is_polyselection = lambda x: x.CheckType(c4d.Tpolygonselection)

    def main():
        if not op or not op.CheckType(c4d.Opolygon):
            c4d.gui.MessageDialog('Please select a Polygon Object')
            return
        selections = filter(is_polyselection, op.GetTags())
        if not selections:
            return

        result = []
        points = op.GetAllPoints()
        polys = op.GetAllPolygons()
        for tag in selections:
            sel = tag.GetBaseSelect()
            new_polys = []
            for i, v in enumerate(sel.GetAll(len(polys))):
                if not v:
                    continue
                new_polys.append(polys[i])

            obj = c4d.PolygonObject(len(points), len(new_polys))
            obj.SetAllPoints(points)
            for i, p in enumerate(new_polys):
                obj.SetPolygon(i, p)
            obj.SetName('%s - %s' % (op.GetName(), tag.GetName()))
            c4d.utils.SendModelingCommand(c4d.MCOMMAND_OPTIMIZE, [obj])
            result.append(obj)

        root = None
        if len(result) == 1:
            root = result[0]
        else:
            root = c4d.BaseObject(c4d.Onull)
            root.SetName(op.GetName())
            for obj in result:
                obj.InsertUnderLast(root)

        doc.StartUndo()
        doc.InsertObject(root)
        doc.AddUndo(c4d.UNDOTYPE_NEW, root)
        doc.EndUndo()
        c4d.EventAdd()

    if __name__ == '__main__':
        main()

Note that it uses the Optimize command to remove the points it will not need.

-Niklas

On 29/05/2014 at 04:57, xxxxxxxx wrote:

Jihaaaaaa!!!! It works! I put your code into the Python console and the result was fine. Thanks for that amazing script. (Is there a chance to tune it up with an extra feature to center the axis of all the new objects?)

PS: Where in Germany are you from? The bridge looks familiar. :)

On 29/05/2014 at 06:01, xxxxxxxx wrote:

Ahh sorry, now I see... it is necessary to optimize/kill all the needless points first. After that I can center the axes. This I can do in one shot for multiple objects. (I have to think before I write.) :)

Regards from Cologne

On 30/05/2014 at 09:40, xxxxxxxx wrote:

You're welcome. I'm from Bavaria :) But the Sydney Harbour Bridge is surely anything but near your location ;)

Cheers,
-Niklas
https://plugincafe.maxon.net/topic/7905/10272_extract-separat-objects-from-polgonselctiontags/1
Changes between Initial Version and Version 1 of Ticket #3100

Ticket #3100

- Property Difficulty changed from (empty) to Unknown

Ticket #3100 – Description

Can you give me a self-contained repro case? I'm much more likely to fix this if I can do it without downloading all of HAppS. Usually it's just a case of figuring out what the imports are, and replacing import Wurble by

    f :: (?x::Int) => Int -> Int
    f = error "urk"

For what it's worth, the only way I can see that this crash can arise is if you are reifying the type of an instance declaration. For example, from the decl

    class Foo a where
      op1 :: a -> a
      op2 :: a -> Int

    instance Foo a => Foo [a] where ...

GHC generates a data type from the class decl, and a function from the instance thus:

    data TFoo a = MkTFoo (a->a) (a->Int)

    dfEqList :: forall a. TFoo a -> TFoo [a]

In trying to convert this back into TH syntax in 'reifyType' it can convert the TFoo on the left of the arrow to Foo =>, but it gets stuck with the TFoo on the right. Crashing is unhelpful, I grant you. Maybe I should just return TFoo (it's actually more like :TFoo, so as to avoid name clashes with your other data types)? But I'm still a bit unsure how you managed to provoke this case.

More info needed pls, on (a) repro case, (b) desired result.

Simon

Thanks, Simon
https://ghc.haskell.org/trac/ghc/ticket/3100?action=diff&version=1
Getting started with managed C++ isn't as easy as it should be because the documentation is often not up-to-date. Our introduction uses VC++ 2010 and you can follow it using the free Express edition. Find out just how easy, and surprising, it all is.

C++ is often thought of as a "difficult" language, partly because of the way it has been built up from C, and partly because it has a history of being used in conjunction with sophisticated and complex technology like the MFC, ATL, CORBA or COM. Indeed, C and later C++ were the original ways of building Windows applications using the raw API. You can't really blame C++ for seeming complex and difficult when it's so often used in conjunction with complex and difficult APIs.

What comes as something of a surprise - especially if you have any familiarity with, say, the MFC - is how simple application development with C++ can be. Visual Studio supports a very easy way to work with Visual C++: Managed C++. It turns out to be a good way to see just how easy C++ is and how much like other object-oriented languages it is. If you know C#, Java, Visual Basic or any object-oriented language you will be able to follow this article and get started with C++. You don't even need a full copy of C++, just download and install Visual C++ 2010 Express.

One difficulty is that the syntax of many parts of the language introduced to extend it to "managed" classes changed with VC++ 2005, and while this is some time ago, much of the documentation, and most of the examples, still use the old syntax. So let's start with a fresh copy of 2010 and see how it all works.

Managed C++ makes use of the framework, i.e. the .NET class library and the common runtime system, to do everything. I'm not going to go into the details of mixed managed and unmanaged C++, nor am I going to go into considerations of migrating code - both are for future articles.
At the moment what is important is to get a clear idea of what using the .NET framework from C++ is all about.

If you start a new Windows Forms project (Managed C++ doesn't support WPF at the moment), then after the project has been created you will see the forms designer, much like you would in any .NET language. You can drag-and-drop a button from the Toolbox and double click on it to create a click event handler. If you view the generated code you should quickly follow what is happening. The event handler looks fairly obvious:

    private: System::Void button1_Click(
        System::Object^ sender,
        System::EventArgs^ e) {
    }

with both parameters as they would be in any .NET language except that they are passed as C++ "pointers" to classes of the appropriate type. To be more accurate, the ^ operator works like a pointer, but in fact it denotes a handle to the managed heap. If you think about handles to the managed heap as managed pointers you won't be too far from the truth.

Note: the ^ operator replaces the original __gc* reference pointer type.

To create a Hello World program all you have to do is add the line:

    Diagnostics::Debug::WriteLine(L"Hello World");

The output will appear in the Debug Output window when you run the program.

If you try this out you will notice that there are some differences between using, say, C# and C++ under Visual Studio. The most obvious is that there is no Intellisense prompting - and you really notice its loss. There is also no interactive error checking. The first you know about an error is when you try to run the program. The errors are reported in the Output window and you need to scroll to the right to read the relevant detail. If you are used to C#, the C++ environment will seem very primitive and barely adequate.
If you are a C++ programmer you won't notice the primitive environment because you will still be in a state of shock that you can create something as sophisticated as a window, with a button and an event handler, in a few seconds, with so little code and in such a straightforward manner - compare the experience to when you first learned the MFC, say.

If you look at the generated code you will quickly understand the structure of the program and the way it works with the framework. You will discover that a constructor Form1 and a destructor ~Form1 have been generated. The constructor calls InitializeComponent and the destructor uses the delete instruction to remove any components that have been created. Components that you add to the form are declared as pointers to objects of the appropriate type. For example:

    private: System::Windows::Forms::Button^ button1;

Most of the real work is done by the InitializeComponent method, which is 100% generated by the Designer and you shouldn't ever directly modify it. If you read the method through you should be able to see how it creates and initialises each of the components you have added. For example, the button is created using:

    this->button1 = (gcnew System::Windows::Forms::Button());

This allocates space on the garbage-collected, i.e. managed, heap and then creates an instance of the type using the space. Notice how similar this is to the use of new in C#.

Once we have created a component, initialising it is very easy using the pointer returned by gcnew. For example:

    this->button1->Location = System::Drawing::Point(58, 68);
    this->button1->Name = L"button1";
    this->button1->Size = System::Drawing::Size(75, 23);

Even the setting up of the event handler looks very like the same operation in C#:

    this->button1->Click += gcnew System::EventHandler(
        this, &Form1::button1_Click);

Finally the form is initialised in the same way.
The only new action is adding the button object to the form's Controls collection property: this->Controls->Add(this->button1); Even though this generated code is simple it does give us some idea of the differences between raw and managed C++. The first thing to notice is that there aren't many header files. Traditional C++ has lots of header files that define how libraries are used. You can still use header files but managed references and using statements do most of the same job in managed C++. Notice that the layout of the form is stored in the header file Form1.h. It is this that the designer reads and writes to determine the form's layout and it is where the code that you have just been examining lives. This header file is loaded into the main program Forms.cpp in the usual way #include "Form1.h" and the main program gets everything going with the single line: Application::Run(gcnew Form1()); which creates an instance of the form on the managed heap. To move on from this simple example perhaps the first thing to find out about is how to work with a window. We have the basics of how managed C++ creates a user interface. The only thing you really have to adapt to is that managed objects are created on the managed heap, and instead of pointers to memory you work with handles to allocations on the managed heap. Let's see how this works by creating a dialog box in code. After all, when you first wrote a raw Win32 application you saw how to create a window, and there is a sense in which a window is the core of any Windows program. In the case of a managed C++ program, however, the inner workings of the message loop, window and window procedure have all but vanished. The System::WinForms namespace and the System.WinForms.dll library contain classes that enable you to implement a GUI. In this case the fundamental window class is called Form and it is a fairly high-level abstraction of what a C++ programmer generally thinks of as a window. 
For example, it can be a general window, a modal dialog box or a non-modal dialog box. To create an instance of the Form class we simply write: Form^ dialog1=gcnew Form(); The instance now exists and we can make use of the many properties and methods that Form offers to customise its look and behaviour. Once again notice that the variable dialog1 isn't a pointer to the new instance but a handle that we tend to treat as if it was a pointer. For example, to set the caption and display the dialog you would use: dialog1->Text=L"My Dialog"; dialog1->ShowDialog(); When this program is run a window appears and the program pauses until you close it. You can't switch back to the application's main window until the modal dialog box is closed. A dialog box. If you look at the dialog box that we have produced you will see that it isn't much like a standard dialog box. You can resize it and it has all the wrong window menus. However all of these problems can be fixed with a little customisation.
http://i-programmer.info/programming/cc/921-getting-started-with-managed-c.html
Hi, I am embarrassed to admit the photo geotagging plugin has me baffled again. I have a gps trace and a batch of photos I took during a trip I made this summer. Ordinarily, correlating the two is a simple task and I have used this tool and method countless times in the past few years. However, when I move operations to a new timezone, as happens every year when I move to Thailand for the winter or from Thailand back to Alaska, I cannot synchronize them. The photos and GPS trace were done in Alaska in July but I'm in Thailand now. I've tried everything I can think of: I've changed the timezone used by the plugin, changed the timezone back to Alaska Time on my computer, allowed the plugin to do its "best guess", all to no avail. I usually resort to manually synchronizing using a photo taken at a known location whenever this happens but that's tedious. I'm looking for the reason the plugin isn't able to do the deed. Any help or suggestions would be appreciated. Cheers, Dave asked 08 Sep '17, 00:51 AlaskaDave edited 08 Sep '17, 00:53 I correlated the photos manually. I tried scai's suggestion but no matter which photo I used, including the one I shot of my GPS time, I could only get about half of them to synch. But I kept trying the "best guess" button and the more often I did that, the more photos synched. Why it doesn't get the same result each time is a mystery. Perhaps it's supposed to work that way??? Finally, playing with the Timezone on the plugin until the maximum number of photos synched (+9:00 hours) and playing further with the hours and minutes adjustment sliders, all 78 photos magically synched. Then I fine tuned the position further by adjusting one photo to a known location. The correction ultimately turned out to be -86602 seconds, plus the 9 hour timezone setting of course. I do not understand any of this. I admit to being somewhat timezone challenged but the behavior of the plugin is baffling. 
In the future I'll be sure to correlate any photos before flying half way around the world. answered 08 Sep '17, 09:53 GPS Prune can be useful as it can display each trackpoint's time on an osm map, it will auto geo locate jpegs but if that fails the time info helps if you have to go manual. I'm tempted to upgrade to a Garmin gps with a built in camera sometimes, but dedicated cameras take better pics. I managed to synchronize another set of my Alaskan mapping photos using Geosetter although it was anything but an intuitive process. Then, noting that the actual positions were just slightly off I tried using the photo geotagging plugin to fine tune them but as soon as I clicked the manual adjust button and told the plugin to override GPS coords in the EXIF data (put there by Geosetter), the photos that were previously visible along the trace disappeared again and the plugin reported only 1 photo could be synchronized! The plugin is the culprit but I don't know how to even qualitatively characterize what's going on. It's either not reading or ignoring GPS data that's already in the photos. I'll just go back to Geosetter and fine tune the positions there I reckon. Maybe the plugin is broken for large offsets. Yes, I think so. In the end, I was unable to fine tune the positions adequately with either program. I just used what I could and tossed the rest. I might give GPSPrune a try but I used it once before and didn't like it all that much. It might be a good idea to open a bug at and provide a link to this question here (and vice versa). Try to compare the timestamp of the first GPX point and the timestamp of the first photo's EXIF meta information. Then you will get an idea about the time difference. Not sure if changing your computer's time is of any use at all since the synchronization is only done between the track and the photos. 
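That timestamp comparison is easy to script. Below is a minimal stdlib sketch — the timestamps, the AKDT offset of UTC-8, and the variable names are all illustrative assumptions; real EXIF and GPX parsing would need a library such as exifread or gpxpy:

```python
from datetime import datetime, timedelta, timezone

# GPX trackpoint times are UTC; EXIF DateTimeOriginal is naive camera
# local time.  Interpreting the camera clock in the zone where the
# photos were taken (Alaska in July, AKDT = UTC-8 -- an assumption here)
# gives the residual offset to feed the correlator.
gpx_first = datetime(2017, 7, 15, 18, 30, 0, tzinfo=timezone.utc)  # illustrative
exif_first = datetime(2017, 7, 15, 10, 30, 5)                      # illustrative

camera_utc = exif_first.replace(tzinfo=timezone(timedelta(hours=-8)))
offset_seconds = (camera_utc - gpx_first).total_seconds()
print(offset_seconds)  # → 5.0 (camera clock runs 5 s fast of GPS time)
```

If the computed offset comes out near a whole number of hours, the timezone interpretation is what's wrong, not the camera clock.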
answered 08 Sep '17, 07:27
https://help.openstreetmap.org/questions/59479/timestamp-synchronization-problem-with-the-photo-geotagging-plugin
Subject: Re: [boost] compile time parser generator
From: Ábel Sinkovics (abel_at_[hidden])
Date: 2012-01-09 01:31:36

Hi Dave,

> constexpr is somewhat crippled: although a constexpr function executes
> at compile-time, it has to obey the same rules as any other function.
> For example, its return type can depend on the type, but never on the
> *value*, of its arguments, and once you're inside the function, the
> contents of the argument are not treated as compile-time constants.
> So, I think constexpr actually doesn't promise it.

A constexpr function can still be used to access the characters of a string at compile-time. Having access to each character we can build an MPL list of characters. Here is an example of what I'm thinking of:

----
using namespace boost::mpl;

template <int N>
constexpr char nth(const char s[N], int n)
{
  return n >= N ? 0 : s[n];
}

#define S "cool"

typedef push_back<
  push_back<
    push_back<
      push_back<
        push_back<
          push_back<
            string<>,
            char_<nth<sizeof(S)>(S, 0)> >::type,
          char_<nth<sizeof(S)>(S, 1)> >::type,
        char_<nth<sizeof(S)>(S, 2)> >::type,
      char_<nth<sizeof(S)>(S, 3)> >::type,
    char_<nth<sizeof(S)>(S, 4)> >::type,
  char_<nth<sizeof(S)>(S, 5)> >::type str;

int main()
{
  std::cout << c_str<str>::type::value << std::endl;
}
----

The code getting the characters one by one and building the MPL list (or string in the above example) can be generated by a macro - I know it is not that nice and the length of the string will be tricky, but we'll have something. The above code snippet compiles with gcc 4.6.

Regards,
Ábel

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2012/01/189422.php
The new Event Service that is part of version 2.0 of the JMX API is available in the latest snapshot of the JDK 7 platform. The package description for the new javax.management.event package summarizes what it is for and how to use it. The description there starts like this: This package defines the Event Service, which provides extended support for JMX notifications. The Event Service provides greater control over notification handling than the default technique using MBeanServer.addNotificationListener or MBeanServerConnection.addNotificationListener. Here are some reasons you may want to use the Event Service: - To receive notifications from a set of MBeans defined by an ObjectName pattern, such as com.example.config:type=Cache,*. - When the notification-handling behavior of the connector you are using does not match your requirements. For example, with the standard RMI connector you can lose notifications if there are very many of them in the MBean Server you are connected to, even if you are only listening for a small proportion of them. - To change the threading behavior of notification dispatch. - To define a different transport for notifications, for example to arrange for them to be delivered through the Java Message Service (JMS). The Event Service comes with one alternative transport as standard, a "push-mode" RMI transport. - To handle notifications on behalf of MBeans (often virtual) in a namespace. The Event Service is new in version 2.0 of the JMX API, which is the version introduced in version 7 of the Java SE platform. It is not usually possible to use the Event Service when connecting remotely to an MBean Server that is running an earlier version. As with everything else in the new JMX API, we're always keen on hearing feedback, which you can add as a comment here or send to jmx-spec-comments@sun.com. 
The principal designer of the new Event Service is Shanliang Jiang, and he'll probably have some interesting things to say about it, which I'll link to from here.
https://community.oracle.com/blogs/emcmanus/2008/09/01/jmx-event-service-now-available-jdk-7-snapshots
#include <db.h> int DB->set_encrypt(DB *db, const char *passwd, u_int32_t flags); Set the password used by the Berkeley DB library to perform encryption and decryption. Because databases opened within Berkeley DB environments use the password specified to the environment, it is an error to attempt to set a password in a database created within an environment. The DB->set_encrypt() method may not be called after the DB->open() method is called. The DB->set_encrypt() method returns a non-zero error value on failure and 0 on success. The DB->set_encrypt() method may fail and return one of the following non-zero errors: EINVAL - if the method was called after DB->open() was called, or if an invalid flag value or parameter was specified. Database and Related Methods
http://docs.oracle.com/cd/E17076_02/html/api_reference/C/dbset_encrypt.html
3. Given the following, 1. abstract class A { 2. abstract short m1() ; 3. short m2() { return (short) 420; } 4. } 5. 6. abstract class B extends A { 7. // missing code ? 8. short m1() { return (short) 42; } 9. } which three of the following statements are true? (Choose three.) A. The code will compile with no changes. B. Class B must either make an abstract declaration of method m2() or implement method m2() to allow the code to compile. C. It is legal, but not required, for class B to either make an abstract declaration of method m2() or implement method m2() for the code to compile. D. As long as line 8 exists, class A must declare method m1() in some way. E. If line 6 were replaced with "class B extends A {" the code would compile. F. If class A was not abstract and method m1() on line 2 was implemented, the code would not compile.
http://www.coderanch.com/t/242249/java-programmer-SCJP/certification/book
#include "GA_API.h" #include "GA_Types.h" #include <SYS/SYS_Types.h> #include <iosfwd> Go to the source code of this file. Save and load the vertex point-reference array using JSON If the io_ReadHandle should scan for filenames ending with extensions that indicate it is compressed then this method should return true If the io_ReadHandle should scan for filenames starting with "stdin", this method should return true. Definition at line 167 of file GA_IO.h. 
http://www.sidefx.com/docs/hdk/_g_a___i_o_8h.html
Python API (advanced)

In some rare cases, experts may want to create Scheduler and Worker objects explicitly in Python manually. This is often necessary when making tools to automatically deploy Dask in custom settings. However, often it is sufficient to rely on the Dask command line interface.

Scheduler

To start the Scheduler, provide the listening port (defaults to 8786) and Tornado IOLoop (defaults to IOLoop.current()):

from distributed import Scheduler
from tornado.ioloop import IOLoop
from threading import Thread

s = Scheduler()
s.start('tcp://:8786')  # Listen on TCP port 8786

loop = IOLoop.current()
loop.start()

Alternatively, you may want the IOLoop and scheduler to run in a separate thread. In this case, you would replace the loop.start() call with the following:

t = Thread(target=loop.start, daemon=True)
t.start()

Worker

On other nodes, start worker processes that point to the URL of the scheduler.

from distributed import Worker
from tornado.ioloop import IOLoop
from threading import Thread

w = Worker('tcp://127.0.0.1:8786')
w.start()  # choose randomly assigned port

loop = IOLoop.current()
loop.start()

Alternatively, replace Worker with Nanny if you want your workers to be managed in a separate process by a local nanny process. This allows workers to restart themselves in case of failure. Also, it provides some additional monitoring, and is useful when coordinating many workers that should live in different processes in order to avoid the GIL.
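The run-the-loop-in-a-daemon-thread pattern above is general. A stdlib-only sketch of the same idea (no dask or tornado required; serve here is a made-up stand-in for real scheduler/worker activity, not part of the distributed API):

```python
import asyncio
import threading

# Event loop running in a daemon thread, leaving the main thread free --
# the same shape as Thread(target=loop.start, daemon=True) above.
loop = asyncio.new_event_loop()
t = threading.Thread(target=loop.run_forever, daemon=True)
t.start()

async def serve():  # stand-in for scheduler/worker work
    await asyncio.sleep(0.01)
    return "served"

# Submit work to the background loop and wait for its result.
future = asyncio.run_coroutine_threadsafe(serve(), loop)
result = future.result(timeout=5)
print(result)  # → served

# Stop the loop from outside its thread, then join.
loop.call_soon_threadsafe(loop.stop)
t.join(timeout=5)
```

Marking the thread as a daemon, as in the docs above, means a forgotten loop won't keep the interpreter alive at exit.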
http://docs.dask.org/en/latest/setup/python-advanced.html
I’m implementing the semantic analysis of dynamic expressions in Roslyn this week, so I’m fielding a lot of questions within the team on the design of the dynamic feature of C# 4. A question I get fairly frequently in this space is as follows: public class Alpha { public int Foo(string x) { ... } } ... dynamic d = whatever; Alpha alpha = MakeAlpha(); var result = alpha.Foo(d); How is this analyzed? More specifically, what’s the type of local result? If the receiver (that is, alpha) of the call were of type dynamic then there would be little we could do at compile time. We’d analyze the compile-time types of the arguments and emit a dynamic call site that caused the semantic analysis to be performed at runtime, using the runtime type of the dynamic expression. But that’s not the case here. We know at compile time what the type of the receiver is. One of the design principles of the C# dynamic feature is that if we have a type that is known at compile time, then at runtime the type analysis honours that. In other words, we only use the runtime type of the things that were actually dynamic; everything else we use the compile-time type. If MakeAlpha() returns a class derived from Alpha, and that derived class has more overloads of Foo, we don’t care. Because we know that we’re going to be doing overload resolution on a method called Foo on an instance of type Alpha, we can do a “sanity check” at compile time to determine if we know that for sure, this is going to fail at runtime. So we do overload resolution, but instead of doing the full overload resolution algorithm (eliminate inapplicable candidates, determine the unique best applicable candidate, perform final validation of that candidate), we do a partial overload resolution algorithm. We get as far as eliminating the inapplicable candidates, and if that leaves one or more candidates then the call is bound dynamically. 
If it leaves zero candidates then we report an error at compile time, because we know that nothing is going to work at runtime. Now, a seemingly reasonable question to ask at this point is: overload resolution in this case could determine that there is exactly one applicable candidate in the method group, and therefore we can determine statically that the type of result is int, so why do we instead say that the type of result is dynamic? That appears to be a reasonable question, but think about it a bit more. If you and I and the compiler know that overload resolution is going to choose a particular method then why are we making a dynamic call in the first place? Why haven’t we cast d to string? This situation is rare, unlikely, and has an easy workaround by inserting casts appropriately (either casting the call expression to int or the argument to string). Situations that are rare, unlikely and easily worked around are poor candidates for compiler optimizations. You asked for a dynamic call, so you’re going to get a dynamic call. That’s reason enough to not do the proposed feature, but let’s think about it a bit more deeply by exploring a variation on this scenario that I glossed over above. Eta Corporation produces: public class Eta {} and Zeta Corporation extends this code: public class Zeta : Eta { public int Foo(string x){ ... } } ... dynamic d = whatever; Zeta zeta = new Zeta(); var result = zeta.Foo(d); Suppose we say that the type of result is int because the method group has only one member. Now suppose that in the next version, Eta Corporation supplies a new method: public class Eta { public string Foo(double x){...} } Zeta corporation recompiles their code, and hey presto, suddenly result is of type dynamic! Why should Eta Corporation’s change to the base class cause the semantic analysis of code that uses a derived class to change? This seems unexpected. 
C# has been carefully designed to avoid these sorts of "Brittle Base Class" failures; see my other articles on that subject for examples of how we do that. We can make a bad situation even worse. Suppose Eta's change is instead: public class Eta { protected string Foo(double x){...} } Now what happens? Should we say that the type of result is int when the code appears outside of class Zeta, because overload resolution produces a single applicable candidate, but dynamic when it appears inside, because overload resolution produces two such candidates? That would be quite bizarre indeed. The proposal is simply too much cleverness in pursuit of too little value. We've been asked to perform a dynamic binding, and so we're going to perform a dynamic binding; the result should in turn be dynamic.
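As a loose analogy — not C# semantics, just an illustration of dispatch driven by the runtime type of one argument — Python's functools.singledispatch picks an implementation from the runtime type of its first argument, much as the C# runtime binder re-runs overload resolution using the runtime type of the dynamic argument:

```python
from functools import singledispatch

# Dispatch happens on the runtime type of the first argument,
# loosely mirroring the runtime binder described in the article.
@singledispatch
def foo(x):
    # No applicable candidate: fail at call time, like a runtime
    # binding failure.
    raise TypeError(f"no applicable candidate for {type(x).__name__}")

@foo.register
def _(x: str) -> int:
    return 42           # like Foo(string x)

@foo.register
def _(x: float) -> str:
    return "forty-two"  # like Foo(double x)

d = "hello"             # runtime type decides which "overload" runs
print(foo(d))           # → 42
print(foo(2.5))         # → forty-two
```

The analogy also shows why the result is "dynamic": until the call executes, nothing constrains which implementation — and hence which return type — will be chosen.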
https://ericlippert.com/2012/10/22/a-method-group-of-one/
Use F&H's DP style algorithm to search for global solutions to model match. More... #include <fhs_searcher.h> Use F&H's DP style algorithm to search for global solutions to model match. Model consists of a set of features, together with a tree of neighbour relationships of the form pos(j) = pos(i) + (N(mx,var_x),N(my,var_y)) where N(m,var) is a gaussian with mean m and variance var. The aim is to find a set of points {p(i)} which minimise sum_i F_i(p(i)) + sum_k shape_cost(arc(k)) where k indexes the set of arcs defining neighbour relationships, and shape_cost(arc) = dx*dx/arc.var_x + dy*dy/arc.var_y (dx=p(arc_j).x()-p(arc_i).x()-arc.mx) This is achieved using a combination of a quadratic distance function applied to each feature response image F(i), and a dynamic programming approach to combining the data. Algorithm based on papers by Felzenszwalb and Huttenlocher on Pictorial Structure Matching. Definition at line 28 of file fhs_searcher.h. Default constructor. Definition at line 18 of file fhs_searcher.cxx. Compute optimal position of all points. Assumes search() has been called first Returns cost at optimal position Assumes search() has been called first Definition at line 239 of file fhs_searcher.cxx. Combine responses for image im_index, given supplied feature_response for that node. Combine responses for image im_index. Definition at line 48 of file fhs_searcher.cxx. Number of points represented. Definition at line 68 of file fhs_searcher.h. Compute optimal position of all points given position of root. Assumes search() has been called first Definition at line 192 of file fhs_searcher.cxx. Return final total cost image for root. Definition at line 250 of file fhs_searcher.cxx. Index of root node (set by last call to set_tree()). Definition at line 41 of file fhs_searcher.cxx. Perform global search. Images of feature response supplied. The transformation (world2im()) for each image can be used to indicate regions which don't necessarily overlap. 
However, effective displacements are assumed to be in pixel sized steps. After calling search(), results can be obtained using points() and best_points() etc Definition at line 147 of file fhs_searcher.cxx. Set tree defining relationships between features. Input arcs define neighbour relationships in any order. root_node defines which feature to be used as the root Definition at line 25 of file fhs_searcher.cxx. Arcs defining neighbour relationships between features. Ordered so that parents precede children Definition at line 33 of file fhs_searcher.h. arc_to_j_[j] gives index of arc ending at given j. Definition at line 36 of file fhs_searcher.h. children_[i] gives list of child nodes of node i in tree. Definition at line 39 of file fhs_searcher.h. Workspace for sum of responses, transformed by distance function. Definition at line 45 of file fhs_searcher.h. pos_[i](x,y,0),pos_[i](x,y,1) is position of best response for (x,y). Result is in image co-ordinates. Definition at line 49 of file fhs_searcher.h. Workspace for accumulated sum of responses. Definition at line 42 of file fhs_searcher.h.
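The quadratic distance function at the heart of this algorithm is Felzenszwalb and Huttenlocher's generalized distance transform, D(p) = min_q ( f(q) + w*(p-q)^2 ). A brute-force 1-D Python sketch — illustrative only, not the library's implementation (which uses a linear-time lower-envelope-of-parabolas sweep), and the function name is made up:

```python
def quadratic_distance_transform(f, weight=1.0):
    """Brute-force O(n^2) generalized distance transform:
    D[p] = min over q of (f[q] + weight * (p - q)**2).
    Transforming each feature-response image this way lets the DP
    combine a child's best cost at *any* displacement in one pass."""
    n = len(f)
    return [min(f[q] + weight * (p - q) ** 2 for q in range(n))
            for p in range(n)]

costs = [4.0, 0.0, 3.0, 6.0]
print(quadratic_distance_transform(costs))  # → [1.0, 0.0, 1.0, 4.0]
```

In the searcher, weight plays the role of 1/var from the arc's Gaussian: a tight variance makes far displacements expensive, a loose one lets the minimum travel further.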
http://public.kitware.com/vxl/doc/release/contrib/mul/fhs/html/classfhs__searcher.html
Indian mobile numbers details to get mobile phone number from websiterator (.ai) file. Please read my comments total price for all of them? * How long ago the records an Android app. I already have a design for it, I just need it to be built. Looking for company databe lists with adresses, phone numbers and email (2017-2018) We are looking for the countries: USA Europe UK United Emirates Others I need someone to call 300 phone numbers in Brazil (I'll provide the list and a specific script) and confirm if the phone is valid - if yes, to confirm the contact details of the owner of the phone. I need a website . Only the best should apply . Work details will be told in the have various kind or writing related project. I am looking for English native writer. I want to work on ongoing process. I ensure you that I will give you lots of writing project. Create an app to extract product details from an ecommerce website and have live sync to our website... And designs for every part of your life - Invitations - Cards - Posters - Logos - Flyers - Blog banners - Email headers - Photo collages I need a new website. I need you to design and build to view URL] Quick Stats, and Overview. I've attached screenshot.... The App will use a .. windows around the table. on top of the Map. .. started. Need someone import all product details (variations, pricing, quantity) from aliexpress using curl php I need to be able to use your excel VBA, or SQL, or oth...a solution that allows me to do so at a click of the mouse, whenever I want. I am not a programmer so you might know other better ways to do it. Please message me with details of how you intend to do so. I prefer it to be like a website that allows me to choose the fields I want to download. i need application like playing 11 link -[login to view URL] ...the same time and they can text me back if they are available. I require a Windows.. 
Hi, a debugging project, 4 hours, bid now; the rest in a personal message. Details: a project for coding of C plus plus. If you are an expert, bid; price is 2000-3000. Only bid if you can do it.
https://www.freelancer.com/work/indian-mobile-numbers-details/
Thanks! On Mon, Mar 12, 2018 at 2:02 PM, Kirk Lund <kl...@apache.org> wrote: > The jdbc-1.0.xsd is now online at: > > > On Wed, Mar 7, 2018 at 10:13 AM, Anthony Baker <aba...@pivotal.io> wrote: > > > You need to add it to the geode-site repo on the asf-site branch: > > > > ~/code/geode-site (asf-site)$ find . -name *.xsd > > ./schema/lucene/lucene-1.0.xsd > > ./schema/cache/cache-1.0.xsd > > > > > > Anthony > > > > > > > On Mar 7, 2018, at 10:08 AM, Kirk Lund <kl...@apache.org> wrote: > > > > > > Yep, I believe it should exist. Any ideas where and how to copy it from > > > Geode src so that it appears at? > > > > > > On Wed, Mar 7, 2018 at 9:59 AM, Jinmei Liao <jil...@pivotal.io> wrote: > > > > > >> I am looking at some xml that specifies jdbc connector, the namespace > is > > >> pointing to " > > >> > > >>", but that url is missing on > > >> apache website. Is it supposed to be there? > > >> > > >> > > >> <cache > > >> > >> xmlns: > >> xmlns: > >> xsi: > >> > > >> > > >> > > >> -- > > >> Cheers > > >> > > >> Jinmei > > >> > > > > > -- Cheers Jinmei
https://www.mail-archive.com/dev@geode.apache.org/msg18292.html
Difference between revisions of "Chatlog 2013-02-25" From Linked Data Platform Latest revision as of 19:00, 25 February 2013 See original RRSAgent log or preview nicely formatted version. Please justify/explain all edits to this page, in your "edit summary" text. 14:58:37 <RRSAgent> RRSAgent has joined #ldp 14:58:37 <RRSAgent> logging to 14:58:39 <trackbot> RRSAgent, make logs public 14:58:39 <Zakim> Zakim has joined #ldp 14:58:41 <trackbot> Zakim, this will be LDP 14:58:41 <Zakim> ok, trackbot, I see SW_LDP()10:00AM already started 14:58:42 <trackbot> Meeting: Linked Data Platform (LDP) Working Group Teleconference 14:58:42 <trackbot> Date: 25 February 2013 14:59:03 <Zakim> +JohnArwe 14:59:26 <Zakim> +cygri 14:59:27 <Zakim> + +1.214.537.aaaa 14:59:33 <Zakim> +SteveBattle 15:00:00 <Zakim> +Arnaud 15:00:21 <Arnaud> zakim, who's here? 15:00:22 <Zakim> On the phone I see [IPcaller], JohnArwe, cygri, +1.214.537.aaaa, SteveBattle, Arnaud 15:00:22 :00:23 <Zakim> +??P24 15:00:29 <dret> zakim, IPcaller is me 15:00:30 <Zakim> +dret; got it 15:00:31 <Zakim> +[OpenLink] 15:00:35 <cody> (1 214 537.aaaa is Cody, who hasn't learned to change Zakim's prompt from phone # to name) 15:00:38 <svillata> Zakim, ??P24 is me 15:00:38 <Zakim> +svillata; got it 15:00:43 <TallTed> Zakim, [OpenLink] is OpenLink_Software 15:00:43 <Zakim> +OpenLink_Software; got it 15:00:47 <TallTed> Zakim, OpenLink_Software is temporarily me 15:00:47 <Zakim> +TallTed; got it 15:00:49 <TallTed> Zakim, mute me 15:00:49 <Zakim> TallTed should now be muted 15:01:21 <Zakim> +bblfish 15:01:35 <Kalpa> Kalpa has joined #ldp 15:01:43 <bblfish> hi, in train from Paris to Amsterdam 15:02:23 <Kalpa> Kalpa has left #ldp 15:02:23 <Arnaud> zakim, who's here? 15:02:24 <Zakim> On the phone I see dret, JohnArwe, cygri, +1.214.537.aaaa, SteveBattle, Arnaud, svillata, TallTed (muted), bblfish 15:02:25 :02:30 <JohnArwe> zakim, aaaa is cody 15:02:30 <Zakim> +cody; got it 15:02:35 <bblfish> afternoon! 
15:02:59 <Arnaud> chair: Arnaud 15:03:07 <Arnaud> scribe: svillata 15:03:08 <svillata> scribe: svillata 15:03:15 <bblfish> svillata: you can use this: 15:03:16 <Kalpa> Kalpa has joined #ldp 15:03:28 <svillata> thanks bblfish 15:03:34 <nmihindu> nmihindu has joined #ldp 15:03:37 <dret> +1 15:03:46 <svillata> Topic: Approving minutes Feb 18 15:03:54 <svillata> Resolved: Minutes of Feb 18 approved 15:04:05 <SteveS> SteveS has joined #ldp 15:04:11 <Zakim> +[IBM] 15:04:12 <Kalpa> zakim, who is on the phone 15:04:12 <Zakim> I don't understand 'who is on the phone', Kalpa 15:04:27 <SteveS> zakim, [IBM] is me 15:04:27 <Zakim> +SteveS; got it 15:04:28 <JohnArwe> zakim, who is on the phone? 15:04:28 <Zakim> On the phone I see dret, JohnArwe, cygri, cody, SteveBattle, Arnaud, svillata, TallTed (muted), bblfish, SteveS 15:04:32 <svillata> Arnaud: F2F is coming up 15:04:37 <stevebattle> I'll be travelling 15:05:08 <stevebattle> ..on the monday before the F2F 15:05:09 <svillata> Arnaud: indicate your participation to F2F meeting <svillata> Topic: Tracking of issues and actions 15:05:53 <svillata> subtopic: Pending review ISSUE-47 15:05:56 <bblfish> Issue-47? 15:05:56 <trackbot> ISSUE-47 -- publish ontology -- pending review 15:05:56 <trackbot> 15:06:07 <Zakim> +??P31 15:06:33 <krp> krp has joined #ldp 15:06:44 <nmihindu> Zakim, ??P31 is me 15:06:44 <Zakim> +nmihindu; got it 15:06:48 <svillata> Arnaud: do we want to close ISSUE-47? 15:06:53 <stevebattle> q+ 15:06:55 <Zakim> -bblfish 15:07:00 <svillata> q? 15:07:13 <bblfish> makes sense to close it if the actions are taken. ( I can't hear much breaks up a lot in the train ) 15:07:15 <Zakim> +roger 15:07:31 <Zakim> +Sandro 15:07:59 <Arnaud> ack stevebattle 15:08:11 <cody> Should it not have a date pattern in the URL like most W3C published schemas? How to handle new versions? 
<svillata> stevebattle: afraid publishing the ontology as linked data with hyperlinked classnames etc is overkilling 15:08:23 <JohnArwe> arnaud: we now have a turtle document in the cvs ... that seems like linked data "enough" 15:08:53 <JohnArwe> ...expect editors to update ontology based on future resolutions of issues 15:09:07 <TallTed> cody - those date patterns are associated with the start of the WGs, not the schemas 15:08:21 <svillata> Resolved: Close ISSUE-47 15:08:21 <trackbot> Closed ISSUE-47 publish ontology. 15:09:13 <svillata> Topic: LDP specification and publishing a second draft 15:09:39 <cody> thx 15:09:43 <roger> roger has joined #ldp 15:10:07 <svillata> Arnaud: we have to discuss what we think we need to do for publishing the second draft 15:10:21 <TallTed> TallTed has changed the topic to: Linked Data Platform WG -- -- current agenda: 15:10:30 <JohnArwe> q+ 15:10:31 <svillata> ... what do the editors need to publish a second draft? 15:10:41 <svillata> q? 15:11:07 <Kalpa> Kalpa has joined #ldp 15:11:24 <svillata> SteveS: pretty good shape wrt the resolved issues 15:11:39 <Zakim> -nmihindu 15:11:41 <Zakim> +??P29 15:11:56 <krp> zakim, ??P29 is me 15:11:56 <Zakim> +krp; got it 15:12:07 <Zakim> +??P31 15:12:17 <svillata> Arnaud: how are we doing with regard to linking all the issues from the spec? <svillata> steves: as of last week the spec was up to date so that shouldn't be a problem 15:13:19 <dret> dret has joined #LDP 15:13:46 <Zakim> -??P31 15:13:50 <bblfish> concerning draft is the relative urls resolved? 15:14:09 <svillata> Arnaud: would be good to have a week to review the spec? 15:14:13 <SteveS> bblfish: it is an open action, minor update we can do 15:14:16 <Zakim> +??P31 15:14:38 <stevebattle> I'm happy to be transparent and publish internally and externally simultaneously. 15:14:49 <svillata> ... start review, and for March 11 decide whether to publish it 15:15:05 <Arnaud> q? 
15:15:12 <Arnaud> ack john 15:15:13 <SteveS> q+ 15:15:17 <nmihindu> Zakim, ??P31 is me 15:15:17 <Zakim> +nmihindu; got it 15:15:20 <Arnaud> ack steve 15:17:17 <svillata> Arnaud: maybe next week spec will be in a good shape, and we can decide then whether to publish it #15:18:55 <svillata> which issue are we discussing? #15:19:34 <svillata> ok, thanks 15:19:44 <stevebattle> q+ <svillata> Topic: Open Issues 15:20:29 <svillata> subtopic: Composition vs Aggregation ontology (related to ISSUE-34) #15:20:59 <Zakim> +Sandro.a 15:21:03 <svillata> JohnArwe: the ontology itself is subject to change 15:21:06 <Zakim> -Sandro 15:21:11 <SteveS> Think this is more narrowly issue-32 and somewhat a part of it 15:21:18 <Arnaud> ack stevebattle 15:21:59 <svillata> stevebattle: issue-34 brings to an ontology about aggregation and composition 15:22:30 <Zakim> -nmihindu 15:23:00 <Zakim> +??P28 15:23:21 <JohnArwe> ashok's email: item 2 15:23:34 <nmihindu> Zakim, ??P28 is me 15:23:34 <Zakim> +nmihindu; got it 15:23:49 <svillata> Arnaud: proposal now is to have two subclasses for composition and aggregation 15:24:46 <svillata> ... container is a useful notion independently from aggregation/composition 15:25:03 <SteveS> q+ 15:25:28 <svillata> ... we are discussing how many classes to define, which properties 15:25:29 <Arnaud> ack steves 15:26:19 <roger> q+ 15:26:21 <stevebattle> q+ 15:26:27 <Arnaud> ack roger 15:26:49 <svillata> ISSUE-34? 15:26:49 <trackbot> ISSUE-34 -- Adding and removing arcs in weak aggregation -- closed 15:26:49 <trackbot> 15:27:07 <Arnaud> ack stevebattle 15:27:28 <svillata> stevebattle: important to make a distinction in the ontology 15:28:33 <cygri> cygri has joined #ldp 15:28:50 <Arnaud> 15:29:09 <roger> It would be good to get feedback from Richard about issue 34 (because he originally raised the issue). 
15:29:16 <svillata> Arnaud: email JohnArwe sent out on Friday with a proposal 15:29:50 <JohnArwe> SteveB: as long as real behavioral difference, happy to have different classes in ontology 15:29:52 <SteveS> roger: I believe cygri opened on behalf of us at F2F1…but would be good to get feedback, not arguing that 15:30:56 <svillata> Proposed: adopting ontology proposed by JohnArwe () 15:30:57 <Zakim> +bblfish 15:31:06 <stevebattle> +1 15:31:15 <bblfish> bblfish has joined #ldp 15:31:24 <SteveS> +1 15:32:06 <stevebattle> No - they have different deletion behaviour. 15:32:21 <svillata> cygri: reading the ontology I have no idea of what the difference is 15:32:52 <JohnArwe> @cygri: the example in the email ontology is (as resolved in 34) currently the only difference between them. 15:32:52 <TallTed> I'd suggest changing :Aggregation to :aggregateContainer and :Composition to :compositeContainer 15:33:17 <stevebattle> That sounds a bit verbose to me. 15:33:21 <svillata> Arnaud: when you delete the container, different behaviors about the deletion of the resources it contains 15:33:27 <stevebattle> It's going to be used a lot 15:33:41 <TallTed> but otherwise I'm OK with the suggested change *as a start* ... I agree with cygri that the specific differences in behavior must be explicitly noted. 15:34:27 <Zakim> +??P33 15:34:52 <bblfish> back in new train 15:34:56 <svillata> cygri: having two subclasses which differ only for a sentence does not make sense, my feeling is that just using the super-class would be sufficient 15:35:19 <nmihindu> Zakim, ??P33 is me 15:35:19 <Zakim> +nmihindu; got it 15:35:33 <svillata> Arnaud: think richard is suggesting parent is aggregation and the subclass is the composition 15:35:54 <bblfish> the question I would have is what happens when something is changed from an Aggregation to a Container, especially concerning the members. 15:35:59 <svillata> cygri: members may continue to exist is not a constraint 15:36:15 <svillata> ... 
it doesn't commit the server 15:36:20 <Arnaud> q? 15:36:22 <svillata> q? 15:36:25 <bblfish> q+ 15:36:25 <TallTed> q+ 15:36:31 <svillata> q? 15:36:34 <bblfish> please see my question above: 15:36:35 <TallTed> Zakim, unmute me 15:36:35 <Zakim> TallTed should no longer be muted 15:37:06 <svillata> Arnaud: how do we insert this aggregation concept? 15:37:17 <Arnaud> q? 15:37:19 <bblfish> please see above 15:37:23 <bblfish> the question I would have is what happens when something is changed from an Aggregation to a Container, especially concerning the members. 15:37:45 <stevebattle> q+ 15:38:08 <bblfish> ack me 15:38:20 <JohnArwe> I don't know if we'd allow a change in container behavior dynamically... new conversation? 15:38:24 <Arnaud> ack TallTed 15:38:26 <roger> that (in my opinion) is a very dodgy thing 15:38:40 <svillata> SteveS: we can open an issue and address the question of bblfish 15:39:20 <svillata> q? 15:39:29 <bblfish> my guess is that this will only work if you add a :contains relation 15:39:48 <svillata> Arnaud: we have to make concrete proposals 15:39:50 <Arnaud> ack stevebattle 15:39:57 <JohnArwe> Ted: if (in the end) there is no behavioral difference between Container and AggregateContainer, would you like cygri want to collapse them? 15:40:12 <svillata> stevebattle: cygri's proposal appealing 15:40:23 <JohnArwe> s/Ted:/Question for Ted:/ 15:40:54 <svillata> Arnaud: changing container to something else change the spec quite a lot, John's proposal is trying to minimize the change 15:41:05 <stevebattle> In OOD, composition is not (typically) a subclass of aggregation. They're commonly subclasses of association. 15:41:16 <Arnaud> q? 15:41:20 <svillata> s/change /changes 15:42:00 <svillata> q? 15:42:18 <stevebattle> Isn't Container an abstract superclass that is useful for property definitions?
#15:42:28 <Zakim> +Sandro.aa #15:42:32 <Zakim> -Sandro.a 15:42:46 <svillata> TallTed: propose to use aggregate containers and composite containers 15:43:07 <svillata> ... superclass Container 15:43:17 <sandro> q+ to ask a naive question (can't we just use URLs?) 15:43:18 <SteveS> stevebattle: agree, we can multi-type if we even wanted to say it is a ldp:Container and a ldp:Aggregation 15:43:29 <SteveS> q+ 15:43:33 <stevebattle> Yes - agreed that Aggregation and Composition are mutually exclusive classes. 15:43:35 <Arnaud> q? 15:43:44 <svillata> TallTed: proposal to change aggregation VS composition into aggregate containers/composite containers 15:43:44 <Arnaud> ack sandro 15:43:44 <Zakim> sandro, you wanted to ask a naive question (can't we just use URLs?) <svillata> sandro: after weeks of discussion we still don't seem to have a resolution, so why not instead rely on the structure of the URLs to determine whether member resources should be deleted or not? 15:44:25 <stevebattle> I proposed that at the last F2F and got voted down :) 15:44:28 <bblfish> I think it is an interesting idea 15:44:31 <Arnaud> ack steves <svillata> steves: this would go against the opacity principle 15:44:51 <bblfish> I was going to propose that urls ending in / are LDPCs 15:45:09 <Ruben> mmm, I don't like "urls ending in" 15:45:16 <Ruben> should be opaque 15:45:18 <bblfish> we spoke about this at the last F2F, but since then I have changed my mind. 15:46:02 <bblfish> Ruben, URLs are opaque as far as emantics goes, but in fact the URI spec does give / a special significance 15:46:09 <cygri> q+ 15:46:14 <bblfish> s/emantics/semantics/ 15:46:22 <Arnaud> ack cygri 15:46:44 <svillata> cygri: think one issue that was discussed at F2F1 and that led us to where we are was the idea of using the url structure to indicate composition 15:47:22 <svillata> ... 
can't give any special semantics to the relations to keep the implementation really simple 15:47:56 <stevebattle> q+ 15:47:59 <sandro> I see that, but I don't find that compelling, giving the simplicity provided. 15:48:09 <svillata> q? 15:48:41 <Arnaud> ack stevebattle 15:49:26 <sandro> I probably voted against stevebattle at the F2F, but now that I see how long we've spent trying to figure this out, I lean more toward simplicity. 15:49:43 <bblfish> I can make a proposal 15:49:44 <svillata> stevebattle: is it possible to re-open the issue? 15:49:52 <sandro> q+ 15:50:02 <SteveS> q+ 15:50:14 <svillata> Arnaud: possible but better to re-open issues when new information comes 15:50:14 <bblfish> stevebattle: I have an idea on how to do this in a way that is uncontroversial 15:50:19 <Arnaud> ack sandro 15:50:19 <sandro> q- 15:50:26 <Arnaud> ack steves 15:50:28 <bblfish> ro was that Sandro 15:50:58 <stevebattle> An aggregate could generate URIs at the same level at the aggregation. 15:51:15 <sandro> sandro: I think it might be new information that this is so hard to us to figure out. 15:51:21 <stevebattle> They wouldn't be nested below the Aggregation 15:51:42 <stevebattle> ..In the URI structure 15:51:53 <JohnArwe> I think Sandro was proposing that "if the URL is structured ..., then the client Knows the behavior is delete (or not) members." 15:52:07 <SteveS> I think we are arguing over minor details of class hierarchy and not fundamental behavioral difference 15:52:09 <bblfish> sandro, we should get together on this. 
15:52:19 <sandro> yes, JohnArwe 15:52:23 <SteveS> s/difference/differences/ 15:52:39 <Arnaud> proposed: use John's proposed ontology with Aggregation renamed as AggregateContainer, Composition as CompositeContainer, and better documentation 15:52:45 <svillata> Arnaud: TallTed's proposal derives from JohnArwe's proposal 15:52:50 <sandro> in fact -- I probably shouldn't be in the lead or critical path for this 15:53:07 <stevebattle> +0 (not convinced about the long names) 15:53:18 <svillata> Arnaud: how do we feel about TallTed's proposal? 15:53:20 <TallTed> +1 15:53:21 <JohnArwe> When we talk about URL structures yielding client assumptions, we'd be making it harder for any existing implementations to comply. 15:53:30 <SteveS> +0 (I go back to my +1 for JohnArwe's proposal) 15:53:40 <roger> +0 15:53:48 <sandro> +0 15:53:51 <JohnArwe> +1 (rename things at will - I hate arguing over them, you'll win all the time ) 15:53:59 <cody> +0 15:54:01 <svillata> +1 15:54:08 <cygri> -0 not convinced that aggregate is needed. ted's names are an improvement 15:54:23 <nmihindu> +0 15:54:36 <stevebattle> vote on the original proposal? 15:54:39 <svillata> Arnaud: we don't seem to have consensus 15:54:56 <dret> +/-0 <svillata> TallTed: I think we do, nobody has voted against it 15:54:59 <Zakim> -bblfish 15:55:17 <svillata> Arnaud: JohnArwe proposal? 15:55:49 <stevebattle> +1 (use namespaces for disambiguation) 15:56:52 <stevebattle> I prefer the shorter local names - we don't need to append 'Container' 15:56:56 <svillata> TallTed: what do you mean, stevebattle, by using namespaces for disambiguation?
15:57:24 <stevebattle> yez 15:57:52 <stevebattle> s/z/s/ 15:58:56 <Arnaud> resolved: Go with John's proposal amended by Ted 15:58:21 <svillata> subTopic: LDP model section 16:00:59 <svillata> Arnaud: maybe we should leave to the editors to choose among the two proposals 16:01:23 <Zakim> -cygri 16:01:23 <Kalpa> Kalpa has left #ldp 16:01:30 <stevebattle> q+ 16:01:39 <Arnaud> ack stevebattle 16:02:03 <svillata> stevebattle: the two proposals are materially the same, but I prefer Henry's proposal 16:02:22 <dret> yeah, that was just a proposal. 16:02:36 <svillata> Arnaud: do we have any text to put in the second draft of the spec? 16:02:38 <dret> no complete text yet, but i can take an action for that. 16:03:48 <SteveS> agree that editors can take the pen, using the feedback that is there now 16:03:55 <svillata> dret: we can write a complete section 16:04:12 <dret> in that case, can i have an action? 16:04:41 <Zakim> -SteveS 16:04:41 <svillata> ACTION: dret to create complete section 16:04:41 <trackbot> Created ACTION-38 - Create complete section [on Erik Wilde - due 2013-03-04]. <svillata> Arnaud: Meeting adjourned 16:04:43 <Zakim> -roger 16:04:45 <stevebattle> Thanks, bye. 16:04:49 <dret> thanks everybody! 16:04:52 <Zakim> -cody 16:04:53 <Zakim> -TallTed 16:04:53 <Zakim> -SteveBattle 16:04:54 <Zakim> -Arnaud #16:04:54 <Zakim> -Sandro.aa 16:04:56 <Zakim> -svillata 16:04:56 <Zakim> -dret 16:04:56 <cody> One question 16:04:57 <Zakim> -krp 16:04:57 <Zakim> -JohnArwe 16:05:03 <cody> regarding the face to face coming up 16:05:17 <Ruben> Ruben has left #ldp 16:05:20 <JohnArwe> what's your q cody? #16:05:31 <Zakim> -nmihindu.a 16:05:34 <cody> The line opens at 2:00 AM - 12:00 PM Boston time. 16:05:44 <cody> Is this because of overseas participation? 16:05:55 <cody> And is that the actual meeting start/end time? 16:06:02 <JohnArwe> probably - and probably copied from F2F1 16:06:34 <JohnArwe> ...when it was in France. Usually they run 8 (or later) to 5 (or later) local time. 
16:07:19 <cody> Just seems like a face to face hosted in the U.S. would require the overseas participants to join at the odd times. 16:07:26 <JohnArwe> Eric P one of the staff contacts made the arrangements - suggest email the list so he'll see your q and respond. 16:07:59 <cody> OK. Thx. 16:08:19 <JohnArwe> the assumption is most participants will be local, so local time is "it". I can attest to the effect you describe (I was in NY during the Lyon F2F) 16:09:32 <JohnArwe> ...local time also tends to dictate when rooms can be booked, when meals are available (espec in a case like F2F2 when it appears there will be no sponsors so lunch is a "go out and get it" thing) 16:10:20 <cody> I still think I am confused. 2:00 AM to start a meeting in the U.S.? 16:10:31 <Zakim> disconnecting the lone participant, nmihindu, in SW_LDP()10:00AM 16:10:32 <Zakim> SW_LDP()10:00AM has ended 16:10:32 <Zakim> Attendees were JohnArwe, cygri, +1.214.537.aaaa, SteveBattle, Arnaud, dret, svillata, TallTed, bblfish, cody, SteveS, nmihindu, roger, Sandro, krp 16:10:43 <TallTed> TallTed has joined #ldp 16:11:07 <Arnaud> hmm, I wish I knew who was 1.214.537.aaaa 16:11:17 <cody> That is Cody 16:11:23 <Arnaud> ah, thanks 16:11:32 <cody> I do not know yet how to tell Zakim to use my name 16:11:39 <Arnaud> zakim is supposed to learn over time 16:12:00 <Arnaud> zakim, aaaa is cody 16:12:00 <Zakim> sorry, Arnaud, I do not recognize a party named 'aaaa' 16:12:15 <sandro> 214 537 is appears to be Richardson, TX 16:12:17 <sandro> dunno if that helps. 16:12:41 <cody> Someone already said "zakim aaaa is cody", so maybe that is why the statement no longer works 16:12:43 <Arnaud> cody is saying it's him 16:13:02 <sandro> ah. i'm slow. 
16:13:39 <Arnaud> I think it's because the call is over 16:13:51 <Arnaud> zakim, +aaaa is cody 16:13:51 <Zakim> sorry, Arnaud, I do not recognize a party named '+aaaa' 16:13:55 <Arnaud> right 16:14:21 <Arnaud> it's ok I can fix the minutes to reflect it anyway 16:14:39 <cody> Thx. 16:15:29 <JohnArwe> arnaud your transcript should show that we attributed aaaa to cody in zakim Very Shortly after he joined. he said he did not know how to do so, so I did it. 16:15:50 <Arnaud> ok 16:16:10 <JohnArwe> remember that zakim for attendees unions them all together. I forget if the minuting script collapsing resolved aliases or not. 16:18:06 <JohnArwe> cody, wrt to the 0200 start that is Very Likely wrong, copied from Lyon (where 0800 CET would be 0200 ET) 16:19:01 <JohnArwe> ...hence: email to list on it. EricP presumably will then check whatever he booked at MIT and make Zakim's times align, then reflect that on the page (correctly) 16:19:12 <cody> OK 16:19:28 <cody> Is there a private list email? I seem to only have the public-ldp@ 16:20:38 <sandro> The charter says the group will work in public, so that's the main list. There is also member-ldp-wg for confidentail stuff like phone numbers, but that's rarely used. 16:20:39 <Arnaud> there are two lists: public-ldp and public-ldp-wg 16:20:50 <JohnArwe> all our emails are public. there is another list (public) for non-members to append to if needed. 16:20:56 <sandro> (and you are on member-ldp.wg too.) 16:21:00 <cody> Ok- got it. Thanks! 16:21:59 <Arnaud> as a member you can post to either list 16:22:03 <JohnArwe> cody: you in vegas next week? 16:22:14 <Arnaud> non members can subscribe to both but only post to public-ldp 16:24:44 <cody> No. I'm in Dallas/Fort Worth next week. Was unaware of Vegas. (Sorry, I am just really, really green at this). 16:25:25 <cody> What is going on in Las Vegas? IBM conf? 16:25:29 <JohnArwe> cody: (2) I also see you posed a question in IRC that may have been missed. 
Short answer on dates is that the month/year gets added very close to the end, because they are taken from the date it hits Rec. Until then all ns values we own should be thought of as provisional. 16:26:02 <JohnArwe> cody: (1) yeah Pulse Conf. if you were going to be there would be an opp for F2F meeting was the thought. NP. 16:27:13 <cody> Got it on the URL. Thanks. And enjoy the conference! 16:27:36 <JohnArwe> cody: (2) ...also the email contents were an excerpt; in the ttl file in mercurial the ns we're using for now is <>. 16:27:48 <gavinc> gavinc has joined #ldp 16:52:06 <jmv> jmv has joined #ldp 17:37:33 <bblfish> bblfish has joined #ldp 17:52:41 <cygri> cygri has joined #ldp # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000369
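The aggregate-vs-composite debate minuted above turns on a single behavioral difference (per the ISSUE-34 resolution and John's proposed ontology as amended by Ted): deleting a CompositeContainer also deletes its member resources, while deleting an AggregateContainer leaves them in place. A minimal sketch of that distinction follows; the class and URI names are hypothetical illustrations, not taken from the LDP specification:

```python
# Toy model of the deletion semantics discussed for ISSUE-34.
# A "composite" container cascades deletion to its members; an
# "aggregate" container does not. All names here are hypothetical.

class ResourceStore:
    """A toy store mapping URIs to resource representations."""

    def __init__(self):
        self.resources = {}    # URI -> representation
        self.membership = {}   # container URI -> set of member URIs

    def create(self, uri, body=""):
        self.resources[uri] = body

    def add_member(self, container, member):
        self.membership.setdefault(container, set()).add(member)

    def delete_container(self, container, composite):
        """Delete a container; cascade to members only if composite."""
        for member in self.membership.pop(container, set()):
            if composite:
                self.resources.pop(member, None)
        self.resources.pop(container, None)


store = ResourceStore()
for c, _kind in [("/agg/", "aggregate"), ("/comp/", "composite")]:
    store.create(c)
    store.create(c + "r1")
    store.add_member(c, c + "r1")

store.delete_container("/agg/", composite=False)
store.delete_container("/comp/", composite=True)
print("/agg/r1" in store.resources)   # True: aggregate members survive
print("/comp/r1" in store.resources)  # False: composite members are deleted
```

This mirrors the point made in the minutes that, apart from deletion behavior, the two subclasses are otherwise identical, which is why cygri questioned whether the superclass alone would suffice.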