Are there any compatibility issues using a USB-C connector with a PSoC 5?
Dear all,
I am writing a PSoC 3 based program in which I have to control the period of a waveform using data obtained from UART. I have created a program where the data is received and converted to an integer value. The problem is that I'm not able to use this value outside the "if" structure where I'm parsing the command. For example, if I have to maintain an analog value on the output of the VDAC for a certain duration of time that was specified through UART, this delay is not reflected during execution. Only the initial delay is maintained; the change is not made. I'm new to UART interrupt based programming. I have attached a project file here; it might be from an old version of Creator. Please help me.
Hi all,
I am working on a device that has two PSoC 5LP MCUs on it. One manages the main controls and the other one manages battery level and input current.
The control PSoC acts as an I2C master to the battery PSoC. The Master PSoC sends a command (0x01) to which the slave PSoC should respond with the battery level, input current, and motor current.
My problem is that whenever the Master requests info from the slave, the data that I see in the master read buffer (which should be 4 bytes: 3 data bytes and a checksum) is just the battery level repeated in all 4 positions of the read buffer.
Would anyone know the cause for this?
I am attaching the projects to this post. The I2C processes are within the .c files with "i2c" in their names.
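An editor's aside: the post describes a simple reply format (3 data bytes plus a checksum). Below is a minimal Python sketch of that framing, assuming a plain 8-bit additive checksum; the actual checksum algorithm is not specified in the post, and the real project is C code running on the PSoC, so treat every name here as illustrative only.

```python
# Hypothetical sketch: pack battery level, input current, and motor
# current into a 4-byte I2C reply with a simple additive checksum.
# The 8-bit sum is only an assumption for illustration.

def pack_reply(battery, input_current, motor_current):
    data = [battery, input_current, motor_current]
    checksum = sum(data) & 0xFF          # keep the low byte of the sum
    return bytes(data + [checksum])

def verify_reply(reply):
    *data, checksum = reply
    return (sum(data) & 0xFF) == checksum

reply = pack_reply(0x64, 0x12, 0x08)     # 100% battery, example currents
print(reply.hex())                        # 6412087e
print(verify_reply(reply))                # True
```

If the master sees the same byte in all four positions, comparing the received bytes against a frame built like this can at least confirm whether the slave is advancing through its write buffer.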
I have a unique 9-bit UART application. The 9th bit is only set when the host wants me to know that the byte is the start of a packet (to re-synchronize). The problem is that when debugging the project, I don't ever see the RxStatus change when I send a byte with a Mark vs a byte with a Space. I am stopping the code in the UART_A_INT.c file just after it reads the byte from the register (status previously read and saved in a local var).
The UART configuration is set to Address Byte = None. I can't set it to anything else because my code will no longer fit (I have used up too many UDBs). I prefer not to upload the project as it is very large and proprietary. Here is a code snippet:
The following is from the UART_A_INT.c file. This is mostly auto-generated by building the project. I have added the last bit after the data is read from the UART data register. I put a breakpoint after the read data register and inspect it and the status. The data is always correct but I never see the status register change when I send a space vs a mark! I am using a Terminal program to send the data and it allows me to configure it to send a space or mark with the character (8 bit). I have verified with a scope that it is working.
/* Read receiver status register */
readStatus = UART_A_RXSTATUS_REG;

/* Copy the same status to the readData variable for backward compatibility
 * support of the user code in the `UART_A_RXISR_ERROR` section. */
readData = readStatus;

if((readStatus & (UART_A_RX_STS_BREAK | UART_A_RX_STS_PAR_ERROR |
                  UART_A_RX_STS_STOP_ERROR | UART_A_RX_STS_OVERRUN)) != 0u)
{
    /* ERROR handling. */
    UART_A_errorStatus |= readStatus & (UART_A_RX_STS_BREAK |
                                        UART_A_RX_STS_PAR_ERROR |
                                        UART_A_RX_STS_STOP_ERROR |
                                        UART_A_RX_STS_OVERRUN);
    /* `#START UART_A_RXISR_ERROR` */

    /* `#END` */

#ifdef UART_A_RXISR_ERROR_CALLBACK
    UART_A_RXISR_ERROR_Callback();
#endif /* UART_A_RXISR_ERROR_CALLBACK */
}

if((readStatus & UART_A_RX_STS_FIFO_NOTEMPTY) != 0u)
{
    /* Read data from the RX data register */
    readData = UART_A_RXDATA_REG;

#define HW9thBitA
    /* Handle 9th bit set - HW system wants this to be the first byte in a packet (command). */
    if (readStatus & UART_A_RX_STS_MRKSPC)  /* Is this a Mark? */
    {
        /* If so, clear the buffer as this new byte will be the first one in a packet. */
        UART_A_rxBufferRead = 0u;
        UART_A_rxBufferWrite = 0u;
        UART_A_rxBufferLoopDetect = 0u;
        UART_A_rxBufferOverflow = 0u;
        NinethBitSetA = 1;  /* Set flag for Callback routine. */
    }
I found this in the component datasheet:
Does this mean that I'm hosed? Seems dumb that we can't get at the Mark/Space bit without implementing an Address mode.
Mike.
Hi,
Over the years I've done my share of hardware programming, e.g. CPLDs, FPGAs, etc. But it's been a long time; as the power of microcontrollers has improved, I haven't had the need for a new home-grown hardware component.
In my search for a good quadrature decoder module I came across the Cypress QuadDec and realized that it's a solution that requires hardware programming. I would appreciate a basic characterization of what it would take for me to implement this kind of solution. I would like to get a sense of the following issues:
My goal is to determine the ramp-up effort and cost of getting back into hardware programming, and how much I might consider it for other projects.
Thanks
Hello?
Hello,
I tried to write a simple project to see if I can use the RTC component on the PSoC 5LP - 059.
I attached my project here; can someone please help me and tell me why the RTC data (min, sec, ...) stays the same?
*** I used UART to see the data on my computer screen.
Thanks!
Dear all,
We put a PSoC 5LP into a device. We found a code example for CDC, and then we implemented CDC successfully over USBFS (virtual COM port).
I found this old topic about mass storage.
I'm looking for a solution to implement both USB features in the same USBFS component. It seems possible; the number of endpoints is sufficient.
However, I'm facing some issues:
1) The snippets of code for mass storage use an "old" version of USBFS (v2.8).
2) Now the USBFS configurator helps us create an MSC component.
Porting the old code to the new USBFS seems more complicated than expected.
CDC can be found in the code examples, but that's not the case for MSC. Cypress examples would be a great help...
Have you found any example of a mass storage component with a recent USBFS (v3.2)?
regards,
Robin.
Is anyone aware of examples of how to read input from an IR remote with the 5LP?
Thank you,
Steven
I'm improving the default bootloader to calculate a CRC over the entire application instead of the 8-bit checksum.
I was successful in inserting the CRC into the cyacd file and implementing it in the bootloader.
My problem is that inserting it into the hex file is not enough: I get the following error when trying to program the hex file with PSoC Programmer: "FAILED! Hex File parsing failure. Checksum of Main Flash does not match Hex Checksum record". I'm assuming that aside from the row checksums there is also a checksum of the entire file.
How is this checksum calculated and where is it stored?
Math: BigDecimal arithmetic by default
Floating point number literals are BigDecimals by default. So when you type 3.14, Groovy won't create a double or a float, but will instead create a BigDecimal. This might lead people into believing that Groovy is slow at arithmetic!
If you really want to use floats or doubles, be sure to define such numeric variables with their float or double types, for example: float f = 3.14f.
Or else, you can also use the type suffixes directly on the literal, for example: 3.14f (float) or 3.14d (double).
See also our section on Math with Groovy.
Default imports
All these packages and classes are imported by default, i.e. you do not have to use an explicit
import statement to use them:
- java.io.*
- java.lang.*
- java.math.BigDecimal
- java.math.BigInteger
- java.net.*
- java.util.*
- groovy.lang.*
- groovy.util.*
Common gotchas
Here we list the common things you might trip over if you're a Java developer starting to use Groovy.
- == means equals on all types. If you really need identity, you can use the method "is", like foo.is(bar). This does not work on null, but you can still use == there: foo == null.
- in is a keyword. So don't use it as a variable name.
When declaring an array you can't write: int[] a = { 1, 2, 3 }
You need to write: int[] a = [ 1, 2, 3 ]
If you are used to writing a for loop that looks like: for (int i = 0; i < len; i++) { ... }
in Groovy you can use that too, but you can use only one count variable. Alternatives to this are:
for (i in 0..len-1) { ... }
or: len.times { ... } or: 0.upto(len - 1) { ... }
Things to be aware of
- Semicolons are optional. Use them if you like, though you must use them to put several statements on one line.
Uncommon Gotchas
Java programmers are used to semicolons terminating statements and not having closures. Also there are instance initializers in class definitions. So you might see something like:
Many Groovy programmers eschew the use of semicolons as distracting and redundant (though others use them all the time - it's a matter of coding style). A situation that leads to difficulties is writing the above in Groovy as:
This will throw a
MissingMethodException!
The issue here is that in this situation the newline is not a statement terminator so the following block is treated as a closure, passed as an argument to the
Thing constructor. Bizarre to many, but true. If you want to use instance initializers in this sort of way, it is effectively mandatory to have a semicolon:
This way the block following the initialized definition is clearly an instance initializer.
Another document lists some pitfalls you should be aware of and gives some advice on best practices to avoid those pitfalls.
- safe navigation using the ?. operator, e.g. "variable?.field" and "variable?.method()" - no more nested ifs to check for null clogging up your code
02 December 2011 15:05 [Source: ICIS news]
WASHINGTON (ICIS)--The US economy added a net 120,000 jobs in November, while the unemployment rate fell to 8.6%, the Labor Department said.
In its monthly report, the department said that private sector employers added some 140,000 workers last month, but this was partly offset by 20,000 job losses at state and local governments.
The drop in the unemployment rate marks the lowest level since March 2009 when the
The 120,000 net advance in jobs growth in November also represents an improvement from the 100,000 overall new hires reported for October, revised upward from the initial October jobs report of 80,000 new hires.
However, that pace of employment expansion is still below the 150,000 net new jobs the
And if there is to be any significant progress in generating enough jobs for the nearly 14m unemployed Americans, net monthly job growth should be running at 300,000 or better.
In normal economic times,
The example I am testing out is the one from Gordons Projects.
The code is as follows:
When I run it, ALL the LEDs blink, not only one.
#include <stdio.h>
#include <wiringPi.h>

// LED Pin - wiringPi pin 0 is BCM_GPIO 17.
#define LED 0

int main (void)
{
  printf ("Raspberry Pi - Gertboard Blink\n") ;
  wiringPiSetup () ;
  pinMode (LED, OUTPUT) ;

  for (;;)
  {
    digitalWrite (LED, 1) ;  // On
    delay (500) ;            // mS
    digitalWrite (LED, 0) ;  // Off
    delay (500) ;
  }
  return 0 ;
}
I have wired up GPIO17 to B1 as described.
I have the preassembled Gertboard.
Is something going wrong here, or am I just doing something wrong?
In Python, if you ever need to deal with codebases that perform various calls to other APIs, there may be situations where you may receive a string in a list-like format, but still not explicitly a list. In situations like these, you may want to convert the string into a list.
In this article, we will look at some ways of achieving this in Python.
Converting List-type strings
A list-type string is a string that has the opening and closing square brackets of a list and comma-separated characters for the list elements. The only difference between it and a list is the surrounding quotes, which signify that it is a string.
Example:
str_inp = '["Hello", "from", "AskPython"]'
Let us look at how we can convert these types of strings to a list.
Method 1: Using the ast module
Python’s
ast (Abstract Syntax Tree) module is a handy tool that can be used to deal with strings like this, dealing with the contents of the given string accordingly.
We can use
ast.literal_eval() to evaluate the literal and convert it into a list.
import ast

str_inp = '["Hello", "from", "AskPython"]'
print(str_inp)
op = ast.literal_eval(str_inp)
print(op)
Output
["Hello", "from", "AskPython"]
['Hello', 'from', 'AskPython']
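A note worth adding here: ast.literal_eval() is not limited to lists of strings. It safely evaluates any Python literal, including nested lists and numbers, while refusing arbitrary expressions, which is exactly what makes it safer than plain eval(). A small sketch:

```python
import ast

# literal_eval handles nested literals of mixed types...
op = ast.literal_eval('[1, 2.5, "three", [4, 5]]')
print(op)  # [1, 2.5, 'three', [4, 5]]

# ...but raises ValueError for anything that is not a pure literal.
try:
    ast.literal_eval('__import__("os").getcwd()')
except ValueError:
    print("rejected non-literal input")
```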
Method 2: Using the json module
Python’s
json module also provides us with methods that can manipulate strings.
In particular, the
json.loads() method is used to decode JSON-type strings and returns a list, which we can then use accordingly.
import json

str_inp = '["Hello", "from", "AskPython"]'
print(str_inp)
op = json.loads(str_inp)
print(op)
The output remains the same as before.
Method 3: Using str.replace() and str.split()
We can use Python’s in-built
str.replace() method and manually iterate through the input string.
We can remove the opening and closing parenthesis while adding elements to our newly formed list using
str.split(","), parsing the list-type string manually.
str_inp = '["Hello", "from", "AskPython"]'
str1 = str_inp.replace(']', '').replace('[', '')
op = str1.replace('"', '').split(",")
print(op)
Output:
['Hello', ' from', ' AskPython']
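Note the leading spaces in ' from' and ' AskPython' above, left over from the ", " separators in the input. If you want clean elements, a small variation (an editor's sketch, not part of the original article) strips each piece after splitting:

```python
str_inp = '["Hello", "from", "AskPython"]'
str1 = str_inp.replace(']', '').replace('[', '')

# strip() removes the whitespace that survives the comma split
op = [s.strip() for s in str1.replace('"', '').split(",")]
print(op)  # ['Hello', 'from', 'AskPython']
```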
Converting Comma separated Strings
A comma-separated string is a string that has a sequence of characters, separated by a comma, and enclosed in Python’s string quotations.
Example:
str_inp = "Hello,from,AskPython"
To convert these types of strings to a list of elements, we have some other ways of performing the task.
Method 1: Using str.split(‘,’)
We can directly convert it into a list by separating out the commas using
str.split(',').
str_inp = "Hello,from,AskPython"
op = str_inp.split(",")
print(op)
Output:
['Hello', 'from', 'AskPython']
Method 2: Using eval()
If the input string is trusted, we can spin up an interactive shell and directly evaluate the string using
eval().
However, this is NOT recommended, and should rather be avoided, due to security hazards of running potentially untrusted code.
Even so, if you still want to use this, go ahead. We warned you!
str_inp = "potentially,untrusted,code"

# Convert to a quoted string so that
# we can use eval() to convert it into
# a normal string
str_inp = "'" + str_inp + "'"
str_eval = ''

# Enclose every comma within single quotes
# so that eval() can separate them
for i in str_inp:
    if i == ',':
        i = "','"
    str_eval += i

op = eval('[' + str_eval + ']')
print(op)
The output will be a list, since the string has been evaluated and square brackets have been inserted to signify that op is now a list.
Output
['potentially', 'untrusted', 'code']
This is quite long and is not recommended for parsing out comma-separated strings. Using
str.split(',') is the obvious choice in this case.
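One more standard-library option, not covered in the original article: when the comma-separated string may itself contain quoted fields with embedded commas, Python's csv module handles the quoting rules that a plain split(',') would get wrong:

```python
import csv
import io

str_inp = 'Hello,from,"AskPython, with a comma"'

# csv.reader understands quoted fields, so the embedded comma
# does not split the last element.
op = next(csv.reader(io.StringIO(str_inp)))
print(op)  # ['Hello', 'from', 'AskPython, with a comma']
```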
Conclusion
In this article, we learned some ways of converting a string into a list. We dealt with list-type strings and comma-separated strings and converted them into Python lists.
“Microsoft’s new C# programming language is gaining in popularity, with usage nearly doubling in the last six months, a new study shows. C#.” Read the rest of the report at News.com.
C# Striking a Chord with Programmers
2002-05-04 General Development 45 Comments
…that these new C# programmers are mostly VB programmers and people who are commanded by their employer to do so.
I think that C# is a nice language, but every time you hear reports like this, they either directly or indirectly say that C# is stealing away Java programmers (since C# basically is Java and was made to squash Java). I do not believe that such is the case.
Having used both, I find web programming in Java very straight forward. I program in Java and use HTML to present what I have created (whether it be applets, JSP or Servlets). With C#, it isn’t quite that clean.
As I said, C# is a nice language. C# is an outstanding replacement for VB, but not a really good one for Java in my opinion.
This sounds like the bull we hear about the XBox.
MS’s .Net strategy is failing. The consumer half is already dead. The business half certainly isn’t dead, but the days of jumping on the latest MS ‘bandwagon’ because it is ‘gaining’ momentum’ are over.
Businesses are in no mood to sign up for something that is little more than an attempt by MS to save their shrinking guaranteed revenue streams from their desktop and office suite monopolies.
Unless you are a diehard MS fanatic or still run your business with the assumption that MS is always the safe bet, C# and .Net are worthless.
The future lies in open standards that run on MacOS X/Linux/Windows..
Although the language looks nice and has a bit more pragmatism than Java (i.e. native code), I can't take it seriously till it is available from several other vendors for several other OSs. I don't want to waste my life being an MS slave; been there, done that. Again, as with Java, it's not just the language, it's the potential infinity of the class libraries. I lost interest in Java after 10,000 APIs were added; I have no interest in C# outside the language and a basic cross-platform kit I could use outside MS.
Personally, I very much like C# as in “programming language”, not necessarily as in “Ms bashing ideology”.
Nothing else to add. Thank you for reading this.
If you think that C# is basically Java, then you have another thing coming. C# is similar to Java but fixes the mistakes that the Java creators made. It really is an excellent language and is only dismissed by those who do not want to give Microsoft any credit.
As for the comment about web development with C# not being “clean”, I guess you have never researched “ASP .NET”. It is very clean and expands a lot on Java/JSP.
-G
I imagine most of these new C# programmers are former Win32 programmers who used to use C++ and VB. Big deal. I suppose it’s better that those programming for Windows move away from travesties such as C++ and VB and move to C#.
I don’t know what all the fuss is about C#. Or Java for that matter. .NET itself it cool, yes. But as languages, both C# and Java are rehashes of very old ideas with more and more money wasted on hyping them.
Personally, there’s nothing in C# and Java that I need that I don’t already have in Smalltalk and Common Lisp. A lot of just “average programmers” (so called by MS and Sun in their language visions) find C#’s and Java’s resemblance to C/C++ to be the most important feature that they have over Smalltalk and Common Lisp. Fortunately for me, the damage done by C/C++ syntax is reversible.
But as long as other people like C#, to each her own.
JJ, there is already another compiler vendor. See the GNU Mono [1] project, which has already done significant work towards creating an open-source .NET runtime and C# compiler. The C# compiler is largely done, and works on Linux and Win32, and compiles itself. Go play with it, and have fun.
[1] Mono project:
And I think that Tao/AmigaDE is better than the CLI, just because it is something released earlier that is already working on most platforms without many problems.
Shinnable languages are already something you can find in gcc…
Java is getting better and is mature
And I just like C; it is OK for my needs (and Java, Python, Perl, Ruby, etc. seem a bit more free than C# or .NET).
Are they kidding? 24%? Of programmers? There’s no way. Period. End of story.
Even a beautiful, compelling language couldn’t grab 24% of programmers away from whatever they’re entrenched with now. This is baloney, pure and simple.
How did they arrive at this “sample”? I’d like to know.
-Netanyahu
Employee: here it is:
1) It’s better overall than Java.
2) It’s a Microsoft product.
Boss: I said a list !
Employee: here it is:
1) It’s better overall than Java.
2) It’s a Microsoft product.
3) It’s better overall than Java.
4) It’s a Microsoft product.
5) It’s better overall than Java.
6) It’s a Microsoft product.
[snip]
I love C#. I don’t give a fzuck one way or the other about Microsoft. I’ve met plenty of knee-jerk Microsoft lovers — mostly low-rent VB hacks — and God knows there are knee-jerk Microsoft bashers around everywhere.
Whether you love or hate Microsoft has NOTHING — and I mean absolutely NOTHING — to do with an objective evaluation of C# as a language.
I am a coding fool. You’ll have to take my word for it, but I consider myself an expert Java, Delphi, C++, C, C#, and Delphi programmer. I write code on Windows, Linux, and Mac OS X. (Never on any other Mac OS, though.)
I used to love Java, and I still think it’s a great language, but its only technical merit over C# is that I can have a single codebase run on most platforms with a decent JVM. Yes, C# ties you to Windows, and depending upon your requirements, that might be a very bad thing. If being tied to Windows is a problem, then hey José, *don’t* use C#. Use Java. Other than that, C# is a better Java than Java.
Some reasons why:
C# has a single unified type system like Smalltalk. There is no awkward division between primitive types and objects. (This rules. Really it does.) Performance is maintained through a mechanism called boxing. Look into it.
C# has value types called structs, which are like classes except that they live on the stack and die immediately when they go out of scope instead of waiting for garbage collection.
C# has superior iteration mechanisms using interfaces like ICollection, etc. and the foreach construct. (Although it’s still not as good as the internal iteration in Smalltalk.)
C#'s assembly/namespace "mechanism" is much less of a pain in the ass than Java's packages. Take it from someone who's worked extensively with both, but if you don't believe me, take C# for a test drive and see.
C# has properties as a first-level language construct. You may think this is not necessary, but when your language plugs into an IDE it can be extremely useful.
If you are targeting the Windows platform and have no cross-platfrom needs, C# is a better choice than Java. If you need cross-platform capabalities, Java is still a great choice. Remember: I didn’t say Java sucks. I just like C# a lot better..
Cheers.
I just noticed I listed Delphi twice. I used to really love Delphi.
reziThat’s excactly what I thought when I saw that
It’s funny what you can do with numbers, depending on what you want to express.
& should be imprisoned for his ill eagle hostage taking
I went to the launch of .NET in my country. I find it funny that whenever they mention that the .NET framework will be cross-platform, they always point out that it will work on Windows, Pocket PCs, and mobile phones. All of it to promote their own MS platforms (yes, MS makes cell phone software now). Sun also promoted Java as cross-platform (or was the word multi-platform?), meaning it will work on Windows, Linux, the BSDs, Macs, Palm, Pocket PC, mobile phones, etc. I will point out that the .NET framework is being ported to Linux (Mono) and to the Mac (by MS, but they didn't say when), but I doubt it will work well on those platforms. And MS says only development on MS OSs will be supported.
I'm sure most Java programmers would like to try out C#, but that's about it: just a try-out. I've talked with a lot of them about the cross-platform thing, and they say "what the #@$$?"
Just goes to show how Microsoft believes they are the only software company in the world.
Although VB sucks, it still does the job of being a language that makes creating reasonable Windows apps fairly easy and it did create a hugely successful component industry that will still be around for a long time to come (long after VB is dead).
Like I said before, once MS Office .NET comes out (which will completely integrate the Common Language Runtime) VB and VBA will be history and you will be able to do everything in C#, J#, Eiffel, Ada or anything you want.
I have heard some Linux people pine for a development tool that would let them create X Windows GUI applications as easily as VB does for Windows (of course they want a proper, clean implementation with a real language though). Does Kylix fill this role now? I don't really know and I haven't kept up with this issue on Linux, but the last time I checked there didn't seem to be any kind of RAD tool available.
If you think that C# is basically Java, then you have another thing coming.
No, I’m not mistaken. I worked at Microsoft when C# was being created. It was really funny. Before MS lost the lawsuit with Sun, J++ was going to be the main new centerpiece to VisualStudio. There were presentations floating around that claimed VB would take a back seat (or disappear all together) and that J++ (or rather Java) would be the premier coding tool of VisualStudio.
After the lawsuit was lost, however, a Microsoft-owned Java clone was designed and created. It ended up being called C#. Sure, they made improvements on Java, but so has Sun with its latest Java SDK.
C# is a great language for certain things; I have stated that. Its greatness comes from two places: Java and Borland's Delphi and C++Builder tools. If I am going to write a desktop application, I will do it with C#. But web programming with it sucks.
As for the comment about web development with C# not being “clean”, I guess you have never researched “ASP .NET”. It is very clean and expands a lot on Java/JSP.
-G
On the contrary. I have created several custom controls for the company I work for and have dealt a great deal with ASP.NET.
That is exactly what I’m talking about. Why the hell should I have to use ASP.NET in order to make a web app in C#? Java doesn’t have that requirement.
I will admit that the VB style environment for writing ASP.NET apps is a cool RAD environment, but I thought this discussion was about C#. Not ASP.NET.
As I’ve said before. I like C#, but only for Windows desktop application programming. Not for web programming.
If I create a very simple web page using C#, I also have to deal with ASP.NET and either JScript or VBScript, I am mostly tied to Windows, and I am tied to expensive web server technologies. I can do the same things with Java, only it's free, I am not limited to any platform, and I can use a wide variety of web servers. This is why I think Java is better: it is more flexible.
For a beginner who wants to write programs for the web, it is much easier to learn one language (Java) than to have to learn three or four (C#, ASP.NET, VBScript, JScript) to get the same job done.
Yes, I have.
Yes, it is very easy to fork over several thousand dollars for development tools to "give C#/.NET a try". If you want to actually put C#/.NET to use on the web, it's a lot more costly than that.
By the way, what features are missing from Java exactly?
When I attended an MSDN conference last year the presenter said that VBScript was completely dead in ASP.NET and that if you did anything in it you would be using pure VB.NET, C#, J# or whatever for your entire application because ASP.NET uses the Common Language Runtime (which is almost the whole point of it, right?). JScript is still supported as a core .NET language, but not VBScript by any means. I distinctly remember the speaker making this point very clearly. I am not hallucinating.
Sooo…is it actually plain old vanilla ASP that you are talking about, or what? What you are saying about ASP.NET in this regard isn’t making any sense to me at all.
An example of using C# directly within ASP.NET:
Camel, you have some explaining to do.
This is a better code example that shows both C# and VB.NET side-by-side, to run within ASP.NET:
“Yes, it is very easy to fork over several thousand dollars for development tools to “give C#/.NET a try”.”
Microsoft isn’t the only one to play with FUD. Several thousand dollars is a complete lie. Here, download the .Net SDK for *free* from Microsoft:…
This includes the full framework, documentation, examples, and the command-line compiler. Or, if you'd rather use an IDE with a GUI builder, you can buy the standard edition of C# for a measly $90:…
If you don’t like C# and you don’t want to use it, fine, I’m not one to say what tools you must use. However, if you’re a professional developer, you’re only doing yourself a disservice by burying your head in the sand and chanting “C# is evil!”
A good developer is always educating himself about new ideas and technologies. Considering C# and .Net are a major focus of the biggest software company in the world, it would be irresponsible for a professional to not at least learn about it.
C# has a single unified type system like Smalltalk. There is no awkward division between primitive types and objects. (This rules. Really it does.) Performance is maintained through a mechanism called boxing. Look into it.
I always hear that argument from the "Java isn't object oriented" camp, but it is ridiculous. If I want to store an int in memory and nothing more, why would I want to generate the overhead of creating an Integer object? If I do want an Integer object, Java provides one for me. If, however, I don't need an object but only a primitive type, Java allows me to better utilize my resources by declaring a primitive.
Java was originally designed to be used where memory is a tight commodity. Frivolous use of memory is fine on a computer with a gig of RAM, but is ridiculous in other applications. Frankly, I'm glad that the creators of Java had the foresight to include both primitive types and class wrappers of these types. That provides me, the programmer, the option to use whichever I prefer.
I couldn’t agree with you more on VB. I had to program in it for over a year before. What a nightmare.
I disagree with your comparisons of Java and C#. Like I have said before. C# is a great replacement for VB (then again, so is a rancid fart). C# is good for Windows programming. No, it is great for Windows programming. I just don’t think it comes anywhere close to what Java offers in the realm of internet programming and cross-platform support.
Whether you love or hate Microsoft has NOTHING — and I mean absolutely NOTHING — to do with an objective evaluation of C# as a language.
True, if a language were separate from its implementation, and if C# weren't about its implementation and its future.
” A good developer is always educating himself about new ideas and technologies. Considering C# and .Net are a major focus of the biggest software company in the world, it would be irresponsible for a professional to not at least learn about it. ”
WTF? New technologies? C#???
C# is absolutely worthless. I'm not in business to prop up MS's shrinking profit margins. And I'm not in business to tie my entire business software to Windows, IIS, IE, Exchange, Outlook, Media Player, and every other piece of garbage MS product.
And the fact that some clown thinks C# is just keen changes nothing for me or the vast majority of business out there.
Camel,
C# has primitive types in the “classic” sense. It’s just that they are semantically equivalent to objects. For instance:
int i = 3;
i++;
These are primitive “value” types — no objects here. But I can say:
int i = 3;
MessageBox.Items.Add(i.ToString());
When the compiler sees i.ToString(), it “boxes” the variable i into its “pure object” equivalent.
Boxing is what allows primitive types to remain efficient while at the same time remaining inside a single type hierarchy. I don’t have to write special case code for my primitive types to store them in collections, etc., as I would with Java.
“WTF? New technologies? C#???
C# is absolutely worthless. I’m not in business to prop up MS’s shrinking profit margins.”
Relax, pal. I’m not claiming C# is the best thing since sliced bread. I’m not saying you should be using it. And I’m certainly not saying that because it comes from Microsoft it’s great.
When I say “new technology”, I don’t mean “new” the way fire or the wheel were new in their time. But as a way of developing Windows apps, this is in fact a new way to do it. If you don’t want to develop Windows apps, fine, don’t.
My point is that as a professional, you’re being negligent if you don’t even take a look at it. Many anti-C# posters are basing their arguments purely on the fact that it comes from Microsoft. I don’t personally like Microsoft, but I do like to reach the 95% of the world running Windows at home, and if C# makes my job easier or my products better, then I’d be stupid to ignore it.
I’d be much happier working with someone (or buying a product or service from them) who tried C# and .Net and decided it didn’t fit their needs, rather than somebody who just turned away screaming, “Bill is Satan!”.
Educate yourself, that’s all I’m saying. Many of you have read up on or even tried C#, and decided it wasn’t for you. Great – I can totally understand and respect that.
“Yes, C# ties you to Windows, and depending upon your requirements, that might be a very bad thing.”
How can anyone read that sentence with out laughing out loud?
Unless you’re just a Window’s home coder who wants to play with the latest MS toy, the days of entrusting your entire company to just one vendor are over.
The consumer half of .Net is already dead in Hailstorm.
The biz half isn’t dead but has been met with lukeware reaction from the IT world thanks to the multitude of left over 1990s era “no one ever got fired for choosing MS” IT folk. Thankfully they’re a dying breed.
<SERMON>
Wildly emotional rants aren’t necessary Tuttle, really. Please calm down and maybe give us some technical reasons why C# is so absolutely worthless, if you can.
If you can’t back up your opinion with facts and just resort to idiotic name-calling then your opinion is worthless and you are not doing anything to contribute to the quality of this forum. Just because Microsoft has a lot of popular products does not alone constitute proof that they are garbage. If they are then tell me exactly why and don’t ask me trust in your sage wisdom, OK?
BTW, I am a Microsoft platform developer but I am doing what I can to learn more about the Java world because I accept the fact that Java is so entrenched that it is always going to be fairly popular and I may just have to write some code that will run on a Nokia cell phone some day. Who knows? If you think that .NET is going to fail and that no one wants it in the face of all these reports to the contrary then you are plainly in denial, IMO.
When someone gets dogmatically religious about technology then something has already gone terribly wrong. Forget the dogma, OK? We don’t need it. Give us facts.
</SERMON>
is an evil multi-headed hydra that eats babies, writes buggy software, and steals your blanket. Come on, folks. Again, I’m no Microserf, but I swear it seems like some of you actually take the position that it’s impossible for Microsoft to create anything worthwhile at all. Microsoft is the devil, according to your religion. The source of all evil. People, this is irrational.
We’ve been highly productive at our company with C# and ASP.NET. We love the language, the IDE, and the software we’ve created with it, and so do our customers.
I am a professional programmer. Why was I not asked what language I am using? These type of stats are always inaccurate and can not be trusted.
Besides, what is the point of these stats? To hear the C# is growing in use should not be the a factor in what language a programmer should be using for any given project.
Ignoring Gil Bates name calling response.
What is bizarre about responses like Phuqker’s is even MS themselves don’t make such claims. MS’s publicly stated goal of the whole .Net/C# Java ripoff is to turn the Internet into a giant tollbooth controlled by themselves. They want a tax on every financial transaction that takes place on the net, banking, music, business to business, anything. All running exclusively on Windows servers and Windows clients.
So excuse me for finding no use in a single platform proprietary language that is nothing more than a tweaked ripoff of Java.
Thank god a half of the .Net nightmare has a already failed with Hailstorm and the XBox.
If it doesn’t run on Linux, MacOS X, (insert platform), it is useless. Name calling doesn’t change that fact.
All I see in your little rant are lots of opinions with virtually nothing in the way of facts to back them up, Tuttle. I think we would all appreciate it if you would stop trying to sell us your dogmatic opinion as fact. It is not fact and it is not even worth engaging your unsupported opinions in argument because there is nothing to be gained as you have no prestige worth taking whatsoever, IMO.
For our sake, please try to learn the difference between the two, OK?
“we would all”
“our sake”
???
C# appears to be a divisive issue, simply because it’s a Microsoft product. My personal philosophy is to use (or recommend to those who must make the decision) the best tool for the job at hand after evaluating all the options. When the decision has been mine, I have sometimes chosen tools from Microsoft and sometimes not. When it came to Windows-specific development, I used to be a foaming-at-the-mouth, fanatical Delphi developer. Now I think Visual Studio.NET/C# are superior to Delphi for Windows development. (Odd. It’s the first time I ever thought a Microsoft development tool was the best tool for creating Windows applications.) The company I’m working at now is a Microsoft shop. That’s their decision, not mine. The only non-Microsoft tool we use is Python, which all the developers love. For my own personal web development, I used Apache/PHP/MySQL.
My point is to show that discarding Microsoft simply because it’s Microsoft is absurd. The company I’m at hasn’t been hexed by going the Microsoft route, although I’ll admit it’s expensive. I probably wouldn’t have done it were it up to me. But if someone else is paying the bill, Microsoft’s development tools are a joy to work with.
IBM and Nokia. These are staunch Java supporters, so I think Java has still a long lease on life.
“If it doesn’t run on Linux, MacOS X, (insert platform), it is useless. Name calling doesn’t change that fact.”
Did Java run on Linux, MacOS, etc. from day one? I don’t actually know, but find it hard to believe it ran on every platform from the very beginning. I do know that the early MacOS implementations were so entirely crappy that “cross-platform” GUI development was pure fantasy.
Ximian is working on Mono, a .Net implementation for Linux (). No doubt Mono will eventually be available on the BSDs too, and possibly OS X. There’s been talk about Microsoft doing an OS X version themselves (they do develop software for that platform).
Again, C# may not be right for you, but given a bit of time it may be more cross-platform than you expect.
I can’t actually get anything out of these comments other than a bunch of lame slashdot posturing. It seems that someone let the linux zealots rule the roost, and now anything that isn’t a *nix product gets wasted in the forums.
Too bad really, since OSNEWS USED TO BE A GREAT PLACE TO GET GOOD INFORMATION FROM PEOPLE THAT KNEW WHAT THE *&@% THEY WERE TAKING ABOUT.
I’m not going to post an opinion on this, but for those who may be interested:…
“The company I’m at hasn’t been hexed by going the Microsoft route, although I’ll admit it’s expensive. I probably wouldn’t have done it were it up to me.”
And now they’re locked into to one vendor for their entire IT infrastructure. I guess any requirement for basic business sense isn’t a requirement for the decision makers at that company.
I keep on looking at C#, knowing deep down it’ll help when I next look for a job.
But each time I seea public variable with should be private with get/set methods, I back away a little.
But as I’ve only ever use it to create a little bug that crashed for no good reason (well maybe a good reason, but I’ll be buggered if I can work it out) on that funky game MS released I can not really give a fair analisys(sp)
Ohh Java supported just Solaris and Windows at the begining (just like .net only supports Windows & BSD
, but now runs on everything from a Nokia phone to err big stuff with JVM’s
.
mlk
mlk,
each time I seea public variable with should be private with get/set methods, I back away a little.
d00d, what are you talking about? Properties? Properties are not public variables. They are syntactic sugar for Java’s canonical get/set methods.
Java:
public class Watusi
{
private int i = 0;
public int getI()
{
return i;
}
public setI(int i)
{
this.i = i;
}
}
Watusi w = new Watusi();
w.setI(5);
C#:
public class Watusi
{
private int i = 0;
public virtual int I
{
get
{
return i;
}
set
{
i = value;
}
}
}
Watusi w = new Watusi();
w.I = 5;
These are functionally equivalent classes, even in terms of encapsulation. Both have a private variable i which in Java is accessed through the get/set methods and in C# through the property I. Please explain to me how this breaks encapsulation. | https://www.osnews.com/story/1032/c-striking-a-chord-with-programmers/ | CC-MAIN-2022-33 | refinedweb | 4,840 | 74.39 |
It feels like everything we touch is carefully designed: websites, phones, subway maps, and so on. Even the things we used to take for granted: thermostats, smoke detectors and car dashboards now get a careful user experience treatment.
Design isn't just about the look and feel: it's about considering every way a user needs to interact with our device/tool/screen/object.
This applies to programming, too.
(Un)designed programming
Programming languages are big, complicated worlds. Even PHP, which plenty of programming snobs think is too "easy", is actually a pretty complicated mix of functions and classes that behave in very inconsistent ways.
The syntax, methods and naming have evolved over many years across millions of different users and applications. Most tend to reflect the underlying construction of the internals — not necessarily how you would want to use it.
Great moments in API design: jQuery
When I started writing JavaScript back in 2006 or so, it was a mess. Here's how I would find a tag with a certain class and move it around the DOM back then:
var uls = getElementsByTagName("ul"); var classToSearch = "foods"; for (var i = 0; i < uls.length; i++) { var classes = uls[i].getClasses(); for (var j = 0; j < classes.length; j++){ if (classes[j] == classToSearch){ myUL = uls[i]; } } } var $li = document.createElement('li'); $li.innerHTML = 'Steak'; myUL.innerHTML += $li;
Done!
jQuery made JavaScript fun again. In the late 2000s, the effect was so dramatic that I remember my dad asking me about "some jkwery thing" he read about in the Wall Street Journal. But despite its great effect, jQuery didn't add any "new features" to JavaScript. It just took the things developers had to do and broke it down into really clear patterns.
Rather than re-invent how to find stuff on the page, they leveraged what people already knew: CSS selectors. Then it was just a matter of collecting a lot of the common actions and organizing them into a few dozen functions. Let's try the the prior example again, now with jQuery:
var $li = $('<li>Steak</li>'); $("ul.foods").append($li);
In 2006, I bought a 680 page book on Ajax. With jQuery's great API, that was pretty much replaced by this:
The WordPress API
Though API has come to signify "third-party service" it simply means the programming interface to talk to a system. Just like there is a Twitter API or a Facebook API there is a WordPress API. You don't do raw database queries to create a post, right? You use
wp_insert_post.
But lots of design holes plague the WordPress API. You might use
get_the_title but
get_the_permalink generates an error, you use
get_permalink. Hey, when you have a decades-long open-source project involving thousands of people's code and millions of users: you're gonna get some quirks.
You can save yourself lots of time by masking these quirks and writing to the habits and behaviors of the programmer you're writing for (which may be you). This is where you can design the right interface to program the plugins and themes that you do everyday.
The Solution
To speed up our work and cut down on repetitive tasks, I've created libraries to handle the commands and customizations I need all the time.
1. Shortcuts for Common Tasks
Take, for instance, grabbing the source of a post's thumbnail. Turns out there's no built-in WordPress function to grab a thumbnail based on a post's ID (only the attachment ID).
Which means I often find myself doing this:
$thumb_id = get_post_thumbnail_id( get_the_ID() ); $src = wp_get_attachment_thumb_url( $thumb_id ); echo '<img alt="" src="' . $src . '" />';
But there's got to be a better way!
function get_thumbnail_src( $post ){ $thumb_id = get_post_thumbnail_id( $post ); $src = wp_get_attachment_thumb_url( $thumb_id ); return $src; } echo '<img alt="" src="' . get_thumbnail_src( get_the_ID() ) . '" />';
2: Unpredictable Inputs, Predictable Output
Much better! In fact you find yourself using it all the time and then sharing with other developers at your company.
Your friend is having trouble with it, so he calls you over to debug and you see:
echo '<img src="' . get_thumbnail_src( get_post() ) . '">';
So it looks like he accidentally used
get_post instead of
get_the_ID. You yell at him. But wait a second, why not make it more accepting?
Maybe we can adjust our function so that it can take a
WP_Post object and still give the user what they're expecting. Let's go back to that function:
function get_thumbnail_src( $post ){ if ( is_object( $post ) && isset( $post->ID ) ){ $post = $post->ID; } else if ( is_array( $post ) && isset( $post['ID'] ) ) { $post = $post['ID']; } $thumb_id = get_post_thumbnail_id( $post ); $src = wp_get_attachment_thumb_url( $thumb_id ); return $src; }
So if they send a
WP_Post object or an array, your function will still help them get what they need. This is a huge part of a successful API: hiding the messy guts. You could make separate functions for
get_thumbnail_src_by_post_id and
get_thumbnail_src_by_wp_post_object.
In fact, for more complicated transformations it might be preferable, but you can simplify the interface by having a single function route to the correct subroutine. No matter what the user sends, the function consistently returns a string for the image source.
Let's keep going: What if they send nothing?
3. Sensible Defaults; }
We've simplified yet again so the user doesn't have to send a post or even a post ID. When in the loop, all that's needed is:
echo '<img src="'.get_thumbnail_src().'" />';
Our function will default to the current post's ID. This is turning into a really valuable function. To make sure this will play nicely, let's wrap it inside a class so it doesn't pollute the global namespace.
/* Plugin Name: JaredTools Description: My toolbox for WordPress themes. Author: Jared Novack Version: 0.1 Author URI: */ class JaredsTools { public static; } }
And please don't prefix your class with
WP. I'm making this a public static function because I want it accessible everywhere, and it doesn't change: the input or execution doesn't change the function or object.
The final call to this function is:
echo '<img src="'.JaredsTools::get_thumbnail_src().'">';
Design First, Build Later
Let's move on a more complicated need. When I write plugins I find I always need to generate different types of error and/or update messages.
But the event-based syntax has always bugged me:
add_action( 'admin_notices', 'show_my_notice'); functon show_my_notice(){ echo '<div class="updated"><p>Your thing has been updated</p></div>'; }
There are lots of good reasons that WordPress follows this event-based architecture. But it's not intuitive, unless you want to sit around and memorize different filters and actions.
Let's make this match the simplest use-case: I need to show an admin notice. I like to design this API-first: where I figure out the best way to refer to the function in my code. I'd like it to read like this:
function thing_that_happens_in_my_plugin($post_id, $value){ $updated = update_post_meta($post_id, $value); if ($updated){ JaredsTools::show_admin_notice("Your thing has been updated") } else { JaredsTools::show_admin_notice("Error updating your thing", "error"); } }
Once I have the end-point designed, I can fulfil the design requirement:
class JaredsTools { public static function show_admin_notice($message, $class = 'updated'){ add_action('admin_notices', function() use ($message, $class){ echo '<div class="'.$class.'"><p>'.$message.'</p></div>'; }); } }
Way better! Now I don't need to create all these extra functions or remember crazy hook names. Here I'm using PHP anonymous functions (also called "closures") that lets us tie a function directly to an action or filter.
This saves you from having a bunch of extra functions floating around your files. The
use command lets us pass arguments from the parent function into the child closure.
Be Intuitive
Now another co-worker calls you over. She doesn't know why her admin notice isn't turning red:
JaredsTools::show_admin_notice("Error updating your thing", "red");
It's because she's sending "red" (which she would expect would turn the box red) when in fact she should be sending the name of the class that triggers red. But why not make it easier?
public static function show_notice( $message, $class = 'updated' ) { $class = trim( strtolower( $class ) ); if ( 'yellow' == $class ) { $class = 'updated'; } if ('red' == $class ) { $class = 'error'; } add_action( 'admin_notices', function() use ( $text, $class ) { echo '<div class="'.$class.'"><p>' . $text . '</p></div>'; }); }
We've now accepted for more user tolerance that will make it easier to share and for us when we come back to use it months from now.
Conclusion
After building a number of these, here are some of the principles I've learned to make these really useful for my team and me.
1. Design first and let the function's build match how people want to use it.
2. Save your keyboard! Make shortcuts for common tasks.
3. Provide sensible defaults.
4. Be minimal. Let your library handle the processing.
5. Be forgiving on input, but precise on output.
6. That said use as few function arguments as possible, four is a good max. After that, you should make it an options array.
7. Organize your library into separate classes to cover different areas (admin, images, custom posts, etc.).
8. Document with example code.
At Upstatement, our libraries for Timber make it easier to build themes and Jigsaw provides time-saving shortcuts to customize each install.
The time savings these tools provide lets us spend more time building the new and innovative parts of each site or app. By taking the commands that are otherwise esoteric (like adding a column to the admin post tables) and making simple interfaces: any designer or developer at our company can fully customize each site with the same power as a pro WordPress developer.
| http://code.tutsplus.com/articles/writing-better-apis-and-libraries-for-wordpress--wp-33601 | CC-MAIN-2014-42 | refinedweb | 1,605 | 63.39 |
This is your resource to discuss support topics with your peers, and learn from each other.
01-22-2013 10:34 AM
in Flash Pro. I have library where i store items how this is achieved in "flash builder"?
I really thought these simple things gonna be sraightforward.
Im moving from Flash Pro so , Flash Builder is new to me
you embed them (like in my code snippet) or you load them (like jtegen said). embedding is faster, with loading you can load from web etc. as well
you don't have to use ANE for something this simple
01-22-2013 10:58 AM
01-22-2013 11:00 AM
01-22-2013 05:53 PM - edited 01-22-2013 05:55 PM
Thanks for answers!
OK so Im getting somewhere - i want to use pure ActionScript approach and try using embeding first.
(maybe latter will try loading)
But still things dont work...
thats what I do:
1. Created folder assets, right mouse click --> Import-->General-->File System, import image.png, hit Refresh.
2. Write code:
package { import flash.display.Bitmap; import flash.display.Sprite; [SWF(width="1024", height="600", backgroundColor="#404040", frameRate="60")] public class kkk extends Sprite { [Embed(source='assets/image.png')] public static const MyImage : Class; public function kkk() { initBGR(); } private function initBGR():void { var myImg:Bitmap = new MyImage(); var mySprite:Sprite = new Sprite(); mySprite.addChild(myImg); mySprite.cacheAsBitmap = true; addChild(mySprite); } } }
the line [Embed(source='assets/image.png')] is underlined in red and i ger error:
Could not find Embed source 'assets/image.png'. Searched 'C:\Documents and Settings\mmm\grybasssssssssss\kkk\src\assets\image
What I am doing wrong?
P.S. I can see image.png inside folder assets so its really there
01-22-2013 06:09 PM
Yeees!
I created new workspace, and created folder assets INSIDE folder src and it works perfectly!
Thanks to all who helped me out,
Cheers! | https://supportforums.blackberry.com/t5/Adobe-AIR-Development/Help-how-to-put-bitmap-on-screen/m-p/2113769 | CC-MAIN-2017-09 | refinedweb | 319 | 68.36 |
Howdy,
I am messing with while loops and can not figure out how to get the following while statments to work.
______Code_____
//---------------------------------------------------------------------------
/* Divide 2 numbers 1) Show the decimal value and
2) Show fractional value with remainder*/
#include <vcl.h>
#include <iostream.h>
#include <conio.h>
#include <stdio.h>
int a, b, c, d, e; //declare variables
float divide (int c, int d); //function prototype
int main(int argc, char* argv[])
{
divide (c, d); //call divide function
getchar();
return 0;
}
float divide (int c, int d)
{
double f, g;
float h;
int remainder;
cout << "\nEnter a value for C: ";
cin >> c;
while (c<1) {
cout << "must be more than 0!! \n";
cout <<" Enter a value for C: ";
cin >>c;
}
cout << "\nEnter a value for D: ";
cin >>d;
while (d<1) {
cout << "must be more than 0!! \n";
cout <<" Enter a value for D: ";
cin >>d;
}
e=c/d;
f =c; //change c to double???
g =d; //change d to double???
h=f/g; //get value as double to print as decimal
cout <<"\nDecimal value: " << h<<"\n";
if (e<1) {cout<< c<<"/"<<d<<" is less than 1: \n";
return 0;}
remainder =c%d;
if (e<1 && remainder<1){cout<<"Rem is less than 1: \n";
return 0;}
else;
cout <<c<<"/"<<d<<"="<<e<<" with a remainder of " <<remainder;
return 0;
}
clearly when a decimal value is entered for variable c or d i get a infinite loop. my question is how can i test for values less than 0 for example .01 or .23 and get the result i'm looking for, "Tell the user to enter a larger value"?
thanks for any help
M.R. | http://cboard.cprogramming.com/cplusplus-programming/5216-while-some-value-loops-printable-thread.html | CC-MAIN-2015-35 | refinedweb | 274 | 77.27 |
I have an issue trying to draw images to the screen while using Farseer.
I have everything set up, with a rectangle drawing at the same size as I set my rectangle fixture.
However, when I test collisions, the drawing is WAY off: I fall through the drawn rectangle, but collide when I'm slightly to the side and above it.
Can anyone give me a hand with drawing at the right position using the Farseer engine? I don't really want to use the debug draw thing included with the samples.
Take a look at the HelloWorld sample in the source control. It uses the Spritebatch for drawing textures.
You need to scale your physics world to use the MKS (Meter-Kilogram-Second) system. When you draw textures using the spritebatch, you need to convert the world units (meters) to screen units (pixels) in order for it to draw correctly.
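For instance, a minimal sketch of that conversion (the 64 px-per-meter ratio and the helper names here are just an illustration, not Farseer's actual API; Farseer 3.x ships a similar ConvertUnits class):

```csharp
// Illustrative only: pick one ratio and use it everywhere.
const float PixelsPerMeter = 64f;

// meters (physics world) -> pixels (screen)
static Vector2 ToDisplayUnits(Vector2 simUnits)
{
    return simUnits * PixelsPerMeter;
}

// pixels (screen) -> meters (physics world)
static Vector2 ToSimUnits(Vector2 displayUnits)
{
    return displayUnits / PixelsPerMeter;
}
```

You then size the fixture in meters from the texture's pixel size (e.g. `texture.Width / PixelsPerMeter`) and convert the body position back to pixels when you call `spriteBatch.Draw`.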
Alright, I'll have a look, but doesn't that use the debug view thing? Or can I set it up to work with my 2D camera's matrix instead? The debug view thing is what really puts me off of messing with it.
The one in the source control uses the Spritebatch with a view matrix. Remember, if you need a camera, you can find one inside the DemoBaseXNA project - it is designed to use matrices for both view and projection - to use it with the spritebatch, you simply
need to use Camera2D.ConvertWorldToScreen() to convert the body position (in meters) to texture position (in pixels). You also need to scale the texture correctly.
Edit: Oh, and it does use the debug view, but you can simply remove it if you don't need it.
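Something like this sketch, for example (names like `_camera`, `_fixture` and `_widthMeters` are mine, and you should check the actual Camera2D signature in your version):

```csharp
// Body position is in meters; convert to pixels for the SpriteBatch.
Vector2 screenPos = _camera.ConvertWorldToScreen(_fixture.Body.Position);

// Scale so the texture's pixel size matches the fixture's size in meters.
// Assumes the fixture is _widthMeters wide and a ratio of 64 px per meter.
float scale = (_widthMeters * 64f) / _texture.Width;

_spriteBatch.Begin();
_spriteBatch.Draw(_texture, screenPos, null, Color.White, _fixture.Body.Rotation,
                  new Vector2(_texture.Width / 2f, _texture.Height / 2f), // center origin
                  scale, SpriteEffects.None, 0f);
_spriteBatch.End();
```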
I just imported the Camera 2D into my project, and I'll see how it works. What sort of scaling factor do you recommend? If the texture is the same size in pixels as it was in meters, will it appear smaller?
I ran into this problem of aligning textures with physics shapes.
Here is a link to a thread I started, not too long ago:
My basic approach was: GameObjects and everything else are in the world coordinate system using Farseer's MKS. I use the physics body's position to draw textures.
I set up a BasicEffect and ask SpriteBatch to use it for rendering. The BasicEffect has its view and orthographic projection matrices set up in a way that overrides the default system. The default system assumes the top-left corner is (0,0), but in the Farseer coordinate system (0,0) would be in the center of the screen. You need to account for that one way or another.
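Roughly, that setup might look like this sketch (the numbers are made up; the point is that the projection is sized in meters, with the origin at the screen center):

```csharp
// Projection sized in world units (meters), origin at the screen center.
// 12.8 x 7.2 m corresponds to 800 x 450 px at 62.5 px per meter, for example.
var effect = new BasicEffect(GraphicsDevice)
{
    TextureEnabled = true,
    VertexColorEnabled = true,
    View = Matrix.Identity,
    Projection = Matrix.CreateOrthographic(12.8f, 7.2f, 0f, 1f)
};

// Hand the effect to SpriteBatch; positions passed to Draw() are then
// interpreted in meters rather than pixels.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, effect);
```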
Once I tell SpriteBatch to use those projection matrices, everything is in MKS, so I need to scale the textures accordingly, otherwise things would be out of proportion. For example: your rectangle shape is 6 m in width, but your texture is 32 x 32 px. If you were to place it as-is, the texture would be huge, since that implies the texture is 32 m x 32 m. Thus, 6/32 = 0.1875f is the scale factor that aligns the texture with that rectangle, pixel perfect.
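In code, that works out to (a pure-arithmetic sketch, using the 6 m / 32 px numbers from the example above):

```csharp
float shapeWidthMeters = 6f;    // fixture width in the physics world
float textureWidthPixels = 32f; // texture width in pixels
// With a meter-based projection, an unscaled 32 px texture would render
// 32 m wide, so shrink it until 32 px covers exactly 6 m:
float scale = shapeWidthMeters / textureWidthPixels; // 0.1875f
```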
Hope it helped
I'm having the same issue. I've spent hours reading the forum and googling, and none of the many solutions I've found have worked. (I used FP 2.x with no issues.)
Can we please get a simple example along the lines of...
public class Game1
{
    Fixture Fixture;
    Texture2D Texture;

    protected override void LoadContent()
    {
        Texture = Content.Load<Texture2D>("MyTexture");
        Fixture = FixtureFactory.CreateRectangle(Screen.World, Texture.Width, Texture.Height, 1.0f);
        Fixture.Body.BodyType = BodyType.Dynamic;
    }

    protected override void Update(GameTime gameTime)
    {
        // We update the world
        _world.Step((float)gameTime.ElapsedGameTime.TotalMilliseconds * 0.001f);
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        sb.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null, Camera2D.Projection);
        // This data is mostly from my animations, I was too lazy to write something else for this example..
        sb.Draw(Texture,
                Camera2D.ConvertWorldToScreen(Body.Body.Position + a.Frames[CFrame].Offset),
                a.Frames[CFrame].FrameData, Color.White, 0.0f,
                new Vector2(a.Frames[CFrame].FrameData.Width / 2, a.Frames[CFrame].FrameData.Height / 2),
                Scale, SpriteEffects.None, 0);
        sb.End();
        base.Draw(gameTime);
    }
}
---
Just something to show me how to use the Camera2D class you provided with the SpriteBatch, and make it actually draw my "custom" texture where I need it to be. (If it exists in either the HelloWorld sample or SimpleSamples, etc., I could not find it; I used Find and F3'd the whole project.)
I've tried everything. I even ripped my engine's guts out and spliced yours in (SimpleSamples), hoping that would work, but it didn't. (Though your stuff worked fine like that, i.e. Demo1-9.)
PS: I'm not trying to sound rude or anything, ppl tend to get that impression from me, I'm just annoyed, and haven't slept right, in forever(like 4 hours a night.. for months..), plus I'm about 20 hours in today.... (I swear, I'm going to end up like that
dude from Fight Club..)
The updated HelloWorld XNA sample uses the SpriteBatch, and its use is not buried at all. It is, however, only in source control and not yet in the downloads section, which has been stated in numerous discussions here. I guess we should change that
soonish... :/
Usage of the Camera2D class will not work without adapting it a bit in most cases. HelloWorld XNA provides minimal camera functionality without it. A new camera class will most likely be present in the 3.3 release. The SimpleSamples use primitives
instead of spritebatch for rendering. That will also change, but for now it may not be the best example if you start out with 3.x
Judging by the number of these topics, it would probably be good to get that sample up, or else I see no end to these kinds of questions and topics.
So how bout my suggestion, a simple example in the format I suggested, using a matrix setup, should be fine, I can build the camera class myself, once I have an idea of what I'm trying to build.. (Simply fill the above game class with the basics of a single
rectangle fixture, and get it drawing using a matrix\spritebatch..)
Anyways, thanks for answering me, and for working on this project, etc,.. :) It would be greatly appreciated if I could get that example soon, my engine is pretty much on hold, because everything is going to use Farseer, my animation editor(for attaching
bodies to frames, it should be easier to handle this way..), my level editor(for previews, etc,..), and my game engine itself. (I have some little misc stuff I can do, but nothing I can't rip through in a few hours...)
(Again, I hope I didn't come off as rude, it's not my intention...)
That sample you suggest already exists. Just browse to "Source Code" at the top. Go to "Latest Version -> Browse" on the left. From there, go to "Samples -> HelloWorldXNA".
Either download the whole project or just have a look at Game1.cs. It contains exactly what you are looking for and it won't get any simpler than that.
If you want to compile the new sample you probably have to specify the correct path to the DebugView, Farseer, etc., but that is just a matter of opening it in VS 20xx and setting the paths / references to your local files.
Thanks, that looks like it does exactly what I needed, hopefully, I can get it integrated into my game\etc today, I'm anxious to get it working... :)
(I had a fully completed engine using FP 2.x, but it was lost during an HD failure, so really I'm just rushing to put it back together while it's all still fresh in my mind. Lots to do and to remember, etc.)
That didn't work, it was further off than my own attempts, I'll try some more, but I don't think I did anything wrong, seeing as I mainly copied\pasted the stuff in....
I still can't get this to work. I downloaded the source control version, and it compiles and runs fine, so can anyone see what I'm doing wrong here? I've looked this over several times, and it looks the same as your version... This is built on top of your sample stuff (ScreenManager, etc.). The only thing I can think of is that something else is interfering with it...
(I added a DisplayUnits field to ConvertUnits; it stores the value set in ConvertUnits.SetDisplayUnitToSimUnitRatio, so that's why that's there.)
using System.Text;
using FarseerPhysics.DebugViews;
using FarseerPhysics.DemoBaseXNA;
using FarseerPhysics.DemoBaseXNA.DemoShare;
using FarseerPhysics.DemoBaseXNA.ScreenSystem;
using FarseerPhysics.Dynamics;
using FarseerPhysics.Factories;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace GameEngine
{
    internal class TestScreen : PhysicsGameScreen, IDemoScreen
    {
        #region IDemoScreen Members

        public string GetTitle()
        {
            return "Test Screen";
        }

        public string GetDetails()
        {
            StringBuilder sb = new StringBuilder();
            sb.AppendLine("Test screen... For testing....");
            return sb.ToString();
        }

        #endregion

        private Fixture _groundFixture;
        private Texture2D _groundSprite;
        private Matrix _projection;
        private Matrix _view;
        private Matrix _viewDebug;
        private Vector2 _cameraPosition;
        private Vector2 _screenCenter;
        private float _cameraRotation;

        public override void LoadContent()
        {
            ConvertUnits.SetDisplayUnitToSimUnitRatio(64);
            World = new World(new Vector2(0, -20));

            base.LoadContent();

            // The DebugView needs a projection matrix with the screen size in meters
            _projection = Matrix.CreateOrthographicOffCenter(0f,
                ScreenManager.GraphicsDevice.Viewport.Width / ConvertUnits.DisplayUnits,
                ScreenManager.GraphicsDevice.Viewport.Height / ConvertUnits.DisplayUnits,
                0f, 0f, 1f);

            // Initialize camera controls
            _view = Matrix.Identity;
            _viewDebug = Matrix.Identity;
            _cameraPosition = Vector2.Zero;
            _cameraRotation = 0f;
            _screenCenter = new Vector2(ScreenManager.GraphicsDevice.Viewport.Width / 2f,
                                        ScreenManager.GraphicsDevice.Viewport.Height / 2f);

            _groundSprite = ScreenManager.ContentManager.Load<Texture2D>("Materials/Waves"); // 512px x 64px => 8m x 1m

            /* Ground Fixture: */
            Vector2 groundPosition = _screenCenter / ConvertUnits.DisplayUnits + Vector2.UnitY * 1.25f;

            // Create the ground fixture
            _groundFixture = FixtureFactory.CreateRectangle(World,
                _groundSprite.Width / ConvertUnits.DisplayUnits,
                _groundSprite.Height / ConvertUnits.DisplayUnits,
                1f, groundPosition);
            _groundFixture.Body.IsStatic = true;
            _groundFixture.Restitution = 0.3f;
            _groundFixture.Friction = 0.5f;
        }

        public override void HandleGamePadInput(InputHelper input)
        {
            base.HandleGamePadInput(input);
        }

        public override void HandleKeyboardInput(InputHelper input)
        {
            base.HandleKeyboardInput(input);
        }

        public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen)
        {
            _view = Matrix.CreateTranslation(new Vector3(_cameraPosition - _screenCenter, 0f)) *
                    Matrix.CreateRotationZ(_cameraRotation) *
                    Matrix.CreateTranslation(new Vector3(_screenCenter, 0f));

            base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);
        }

        public override void Draw(GameTime gameTime)
        {
            /* Ground position and origin */
            Vector2 groundPos = _groundFixture.Body.Position * ConvertUnits.DisplayUnits;
            Vector2 groundOrigin = new Vector2(_groundSprite.Width / 2f, _groundSprite.Height / 2f);

            ScreenManager.SpriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, _view);
            ScreenManager.SpriteBatch.Draw(_groundSprite, groundPos, null, Color.White, 0, groundOrigin, 1, SpriteEffects.None, 0);
            ScreenManager.SpriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
----
Edit: I went through and debugged this; all the values seem to be correct (screen size, DisplayUnits) and all the events fired properly, so something is definitely not right. Whether I'm overlooking something stupid, or it's a bug, I have no clue... I also tried that same texture in the Hello World sample, to verify it wasn't something like that; it wasn't, the texture doesn't matter.
The code looks correct, so my guess would be that your problem is somewhere else. If it is not too big and you could somehow send me your project, I will have a look at it.
What parts of the ScreenManager are you using?
The ScreenManager does some fancy render target magic for screen transitions. You have to be careful about the drawing order of your screens, since everything drawn prior to a render target change is lost: the graphics card just discards it.
I'm using the entire simple samples project for my engine until I get everything sorted with FPE3..
Since I wasn't clear: it is rendering, it's just rendering in the wrong place (and the scaling is wrong), while the Hello World sample using the same texture worked as expected. (So I don't think it's platform specific, i.e. Win7 x64.)
I could send it to you, however, if you add that class into the simple samples as a new screen it should give you the same setup as me...(I have the one from the downloads screen..) You just need to make this change to ConvertUnits..
private static float _displayUnitsToSimUnitsRatio = 100;
public static float DisplayUnits
{
get { return _displayUnitsToSimUnitsRatio; }
}
--
Edit: Here in case you need it.. It's only 2.42mb..
I'm heading to bed, I hope you have better luck than me.. Been about to tear my hair out on this one..
Your code is perfectly fine and your object is created exactly where you want it to be. What you don't do, however, is draw the debug view properly. The PhysicsGameScreen is tuned for the samples. The ScreenManager has its own Camera which uses its own custom coordinate system in which the DebugView is drawn. Unfortunately this custom coordinate system has nothing to do with the one you use for drawing your stuff, and that is why they don't align properly. Either you set up the ScreenManager's camera to use the same projection you use for drawing your stuff... or you draw the DebugView yourself like it is done in the samples, as you don't use the ScreenManager's camera anyway at the moment.
Farseer 3.3 will have new samples with a camera that automatically adapts to your projection... for now it doesn't and you can't expect the DebugView to align itself to your drawing :/
On a side note: if you mess around with the camera in its current state, you will also have to adapt the Border object. The samples work with an orthographic projection that is resolution independent and always has the same scale, that is, objects always cover the same portion of the screen but may therefore vary in pixel size. I guess you want objects with a fixed pixel size, which get smaller, in terms of screen space covered, if you crank up the resolution.
Ok, that makes sense, I was going crazy over here, because I knew my code was right, as far as drawing my stuff went..
I didn't even think of the possibility that the debug object was being drawn in the wrong place.. :)
Hopefully I can get it working now, thanks for the help.
I made a number of changes to the FarseerLib itself, to make using the MKS system less of a hassle... I don't know if you can do this in the main project without causing issues, but I figured I would list the changes..
In the world class, I did this..
// MKS Units
public static float Units;
public World(Vector2 gravity, float units = 0)
{
World.Units = units;
Snip....
This also lets me call World.Units anytime I need my unit number.. Then, I went into the FixtureFactory, and added this...
// MKS System Added
public static Fixture CreateRectangle(World world, float width, float height, float density)
{
if (World.Units != 0)
return CreateRectangle(world, width / World.Units, height / World.Units, density, null);
else
return CreateRectangle(world, width, height, density, null);
}
Then I went into the body class and did this..
public Vector2 Position
{
get
{
if (World.Units != 0)
return Xf.Position * World.Units;
else
return Xf.Position;
}
set { SetTransform(ref value, Rotation); }
}
Now when I create my world I can just set the C# 4.0 optional parameter "units", and if I do, it will take care of the math behind the scenes for me. I still have to use it in several places myself, but if I keep applying this method, I'm sure I can cut it down to where the MKS system is nearly transparent.
Anyways, I got my project going now, thanks for all the help.
Run commands/scripts on a Raspberry Pi when a voice command is given to Alexa.
I wanted the ability to be able to run commands on my Raspberry Pi by issuing voice commands to my Alexa Dots in my house. Also I wanted to schedule tasks via Alexa as opposed to having to add scheduled tasks individually to the several Pis I have around the house.
In order to do this, we need to set up:
- Log in to AWS and the Amazon developer site to confirm you can access both
- AWS IAM: Create a user and a group that the RPi script will use to log in to AWS
- AWS SQS (message queue).
- AWS Lambda function
- Amazon Alexa Skill
An Alexa skill points to a Lambda function; based on which voice command is issued, a corresponding message is added to the SQS queue. The Raspberry Pi runs a script to check the SQS queue and, depending on the message, it initiates jobs or scripts on the RPi.
AWS and Amazon dev accounts
If you do not already have an AWS account you will need to set one up. There is no reason why this project should fall outside of the AWS 'free tier', which allows 1 million free requests a month to Lambda and 1 million requests to the SQS service. In order to go past the free tier you would need to make more than roughly 1 request every 3 seconds (based on a 30-day month). Once set up, I set the script to run every 15 seconds, but of course if I had 10 RPis each making requests then that would need to be factored in.
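A quick back-of-envelope check of those numbers (just a sketch; it assumes the 1-million-request free tier quoted above, which may change, so check current AWS pricing):

```python
# Rough free-tier budget check for one device polling SQS.
seconds_per_month = 30 * 24 * 3600   # 2,592,000 seconds in a 30-day month
free_requests = 1000000              # assumed free-tier request allowance

# Minimum interval between requests to stay inside the tier:
min_interval = float(seconds_per_month) / free_requests
print(min_interval)                  # ~2.59 seconds, hence "1 request every 3 seconds"

# Polling every 15 seconds from a single Pi:
requests_per_month = seconds_per_month / 15.0
print(requests_per_month)            # 172800.0, well inside the tier
```

With 10 Pis polling every 15 seconds you would still only use about 1.7 million requests combined, which is why the polling interval matters once you add devices.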
AWS - Create a user and group
Login to AWS and go to the IAM Management console and add a user:
As the permissions are set at the group level we need to create a new group for this user; if we create any more users in the future they can also use the same group. This user will log in from the Raspberry Pi and so needs to be able to see and update the queue in AWS SQS.
For this we can use an Amazon predefined policy (preset permissions): search for 'sqs' and select the policy 'AWSLambdaSQSQueueExecutionRole', followed by clicking the 'Create group' button (this will allow the user to read and write to the SQS queue).
This will take you back to the 'Add user to group' screen and the new group will be selected to add the new user to, move forward by clicking the 'Next: Tags' button.
On the 'Add tags' screen just move on by clicking the 'Next: Review' button.
On the next screen, review the details for the new user and click 'Create user'
On the following screen make sure you 'Download.csv' as this will be your ONLY opportunity to get the credentials for this user.
The credentials in the CSV will be added to scripts on the RPi so it can log in to access the SQS queue.
NOTE: As of June 2019, in order for an Alexa skill to run a Lambda function, the Lambda code must be hosted in one of the following regions:
- Asia Pacific (Tokyo)
- EU (Ireland)
- US East (N. Virginia)
- US West (Oregon)
As I live in England (UK) I am going to host my SQS queue and Lambda function in the Ireland region.
Find the SQS page and create a new Simple Queue Service (SQS) queue:
Enter a new name and create a 'Standard Queue' using the 'Quick-Create Queue' option:
Click on the new queue and take a note of the ARN and the URL that we will need later:
In AWS, in 'Services', search for 'Lambda' and create a new function:
Select 'Author from scratch', add a name and change the 'runtime' to 'Python 2.7'. For the execution role select 'Create a new role from AWS policy templates', select the 'Amazon SQS poller permissions' policy, give the role a name and create the function.
For the configuration of the function, when you click on something in the 'designer' panel the configuration settings are seen in the bottom half of the screen.
From the left add the 'Alexa Skills Kit' trigger, configuration is required to be able to save this function. Scroll to the bottom, as we do not have an Alexa Skill ID yet (we have not created a skill yet) we will just 'disable' this for now and come back to it later.
Select the Lambda function icon (for me it is 'RPi-LED-Function') and at the bottom we need to provide our custom Python code; this code will provide feedback to the Alexa skill and also update the SQS queue.
I have added some comments into the code below on what will need to be changed.
Create the custom Alexa skill
import boto3
# Below you need to add in your access key, access secret, region and SQS queue URL
access_key = "This can be found in the downloaded .csv file"
access_secret = "This can be found in the downloaded .csv file"
region ="eu-west-1"
queue_url = "This can be found when looking at the SQS queue, https://..."
# You should not need to change the following unless you know what you're doing.
# Standard Alexa response helpers used by lambda_handler below.
def build_speechlet_response(title, output, reprompt_text, should_end_session):
    return {
        'outputSpeech': {'type': 'PlainText', 'text': output},
        'card': {'type': 'Simple', 'title': title, 'content': output},
        'reprompt': {'outputSpeech': {'type': 'PlainText', 'text': reprompt_text}},
        'shouldEndSession': should_end_session
    }
def build_response(session_attributes, speechlet_response):
    return {
        'version': '1.0',
        'sessionAttributes': session_attributes,
        'response': speechlet_response
    }
def post_message(client, message_body, url):
response = client.send_message(QueueUrl = url, MessageBody= message_body)
def lambda_handler(event, context):
client = boto3.client('sqs', aws_access_key_id = access_key, aws_secret_access_key = access_secret, region_name = region)
intent_name = event['request']['intent']['name']
# The following needs to be customised
# The intent names shown below are linked with intents created in the custom Alexa Skill.
# The 'post_message' relates to the SQS queue
# The 'message' line is the message/response that Alexa will speak back to you
if intent_name == "LightsOn":
post_message(client, 'on', queue_url)
message = "Lounge Lights will now turn on"
elif intent_name == "LightsOff":
post_message(client, 'off', queue_url)
message = "Lounge Lights will now turn off"
elif intent_name == "LightsRed":
post_message(client, 'red', queue_url)
message = "Lounge Lights will change to red"
elif intent_name == "LightsGreen":
post_message(client, 'green', queue_url)
message = "Lounge Lights will now change to green"
elif intent_name == "LightsBlue":
post_message(client, 'blue', queue_url)
message = "Lounge Lights will now change to blue"
elif intent_name == "LightsTest":
post_message(client, 'test', queue_url)
message = "Lounge Lights will now run a test sequence"
else:
message = "Sorry but I do not understand that request"
speechlet = build_speechlet_response("Mirror Status", message, "", "true")
return build_response({}, speechlet)
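For a quick local sanity check of the intent dispatch above, here is a minimal sketch of the only part of the incoming Alexa event that lambda_handler reads (the field names follow the Alexa request format; the event value itself is made up):

```python
# Minimal sketch of an Alexa IntentRequest event: only the fields that
# lambda_handler above actually reads (a real event carries many more).
sample_event = {
    "request": {
        "type": "IntentRequest",
        "intent": {"name": "LightsOn"},
    }
}

# This is the exact lookup the handler performs.
intent_name = sample_event["request"]["intent"]["name"]
print(intent_name)  # LightsOn
```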
Log in here and create a new skill, select the 'custom' skill and 'Start from scratch:'
Here you need to configure the following:
Invocation Name - This is the name that will be used to trigger the skill, I have called mine 'lounge lights.'
Intents, Samples, and Slots - For this example I have kept it simple and created an 'intent' for each action as shown below; you should also notice that these intents reflect the code I added in Lambda.
Here is an example of one of my intents, you can of course add whatever you want depending on your requirements:
So, for this utterance to work I would say 'Alexa, ask Lounge Lights to turn off.'
Build Model - Once the above is done, save and build the model.
Endpoint - Add an endpoint; this is the ARN of the Lambda function that we have already created. If you get any errors saving this endpoint, make sure the following have been checked:
- Lambda code is located in: Asia Pacific (Tokyo), EU (Ireland), US East (N. Virginia) or US West (Oregon)
- The 'Alexa Skills Kit' trigger has been added to the Lambda function
From the Alexa skill interface, click on the 'test' tab and from here you can test to confirm that 1) your intents are correct and 2) the Lambda function is correctly configured. Once testing is complete you can then go on to configure the Raspberry Pi to read from the SQS queue.
Configure the Raspberry Pi (Linux device)
What you need to do on your RPi depends on your end goal; my first test was to turn the monitor for my Magic Mirror on/off and also to be able to reboot the RPi.
The second was in line with this project writeup: being able to switch on a ws2811/ws2812 LED strip connected to my RPi.
The following code needs to be run; it will log in to AWS and look at the entry in the SQS queue, and depending on the message stored, a corresponding local Python script is run. To drive the LEDs you will need to run the script as 'sudo'.
import boto3
import os
import time
access_key = "Access key from the csv file"
access_secret = "Access seret from the csv file"
region = "the region where the SQS queue is - found in the queue url"
queue_url = "SQS Queue URL,....."
def pop_message(client, url):
response = client.receive_message(QueueUrl = url, MaxNumberOfMessages = 10)
# read the first message returned and delete it from the queue
message = response['Messages'][0]['Body']
receipt = response['Messages'][0]['ReceiptHandle']
client.delete_message(QueueUrl = url, ReceiptHandle = receipt)
return message
client = boto3.client('sqs', aws_access_key_id = access_key, aws_secret_access_key = access_secret, region_name = region)
waittime = 20
client.set_queue_attributes(QueueUrl = queue_url, Attributes = {'ReceiveMessageWaitTimeSeconds': str(waittime)})
time_start = time.time()
while (time.time() - time_start < 30):
print("Checking...")
try:
message = pop_message(client, queue_url)
print(message)
if message == "on":
os.system("python /home/pi/LEDScripts/LED_on.py")
elif message == "off":
os.system("python /home/pi/LEDScripts/LED_off.py")
elif message == "blue":
os.system("python /home/pi/LEDScripts/LED_blue.py")
elif message == "red":
os.system("python /home/pi/LEDScripts/LED_red.py")
elif message == "green":
os.system("python /home/pi/LEDScripts/LED_green.py")
elif message == "test":
os.system("python /home/pi/LEDScripts/LED_test.py")
except:
pass
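For reference, the pop_message helper above relies on this shape of the boto3 receive_message response (key names as documented for SQS; the values here are made up). Note that when the queue is empty SQS omits the 'Messages' key entirely, so the lookup raises KeyError, which the bare try/except in the polling loop swallows before polling again:

```python
# Sketch of the receive_message response shape that pop_message relies on.
response = {
    "Messages": [
        {"Body": "on", "ReceiptHandle": "AQEB-example-handle"},
    ]
}

message = response["Messages"][0]["Body"]           # the command string
receipt = response["Messages"][0]["ReceiptHandle"]  # needed to delete the message
print(message)  # on

# Empty queue: no "Messages" key at all, so the same lookup fails.
empty_response = {}
try:
    empty_response["Messages"][0]["Body"]
except KeyError:
    print("no message this poll")
```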
You'll see above that the package 'boto3' (the AWS SDK for Python) is used; you can install it by running:
python -m pip install boto3
Here is an example of one of the Python scripts which are run:
Setting up the Pi for the LED strips
from neopixel import *
# LED strip configuration:
LED_COUNT = 12 # Number of LED pixels.
LED_PIN = 19 # GPIO pin connected to the pixels (18 uses PWM!).
LED_FREQ_HZ = 800000 # LED signal frequency in hertz (usually 800khz)
LED_DMA = 10 # DMA channel to use for generating signal (try 10)
LED_BRIGHTNESS = 30 # Set to 0 for darkest and 255 for brightest
LED_INVERT = False # True to invert the signal
LED_CHANNEL = 1 # set to '1' for GPIOs 13, 19, 41, 45 or 53
strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS, LED_CHANNEL)
strip.begin()
for i in range(strip.numPixels()):
strip.setPixelColor(i, Color(0, 255, 0))
strip.show()
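As a side note, the Color(r, g, b) helper used above just packs the three channels into a single 24-bit integer; here is a sketch (this is an assumption based on how the rpi_ws281x-style libraries define Color, so verify against the library you actually install):

```python
# Sketch of what Color(red, green, blue) evaluates to: the channels
# packed into one 24-bit integer (white channel omitted).
def color(red, green, blue):
    return (red << 16) | (green << 8) | blue

print(hex(color(0, 255, 0)))  # green -> 0xff00
```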
If you have not run these before, there are different libraries that can be used; however, I chose to use this one:
You'll need to run the following:
sudo apt-get install build-essential python-dev git scons swig
git clone
Then from the 'python' directory in the download package from GitHub:
sudo python setup.py build
sudo python setup.py install | https://amazonwebservices.hackster.io/nathansouthgate/control-raspberry-pi-linux-device-from-alexa-b558ad | CC-MAIN-2021-21 | refinedweb | 1,830 | 60.99 |
Bugs item #1337400, was opened at 2005-10-25 14:38
Message generated for change (Comment added) made by papadopo
You can respond by visiting:
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Interpreter Core
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Submitted By: Dimitri Papadopoulos (papadopo)
Assigned to: Nobody/Anonymous (nobody)
Summary: Python.h should include system headers properly [POSIX]

Initial Comment:
In Python 2.4.2, Python.h looks like this:

#include <limits.h>
[...]
#include <stdio.h>
[...]
#include <string.h>
#include <errno.h>
#include <stdlib.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif

On POSIX platforms <unistd.h> should be included first! Indeed it includes headers such as <sys/feature_tests.h> on Solaris, <standards.h> on Irix, or <features.h> on GNU systems, which define macros that specify the system interfaces to use, possibly depending on compiler options, which in turn may enable/disable/modify parts of other system headers such as <limits.h> or <errno.h>. By including <unistd.h>, you ensure consistent system interfaces are specified in all system headers included by Python sources.
This may seem rather academic, but it actually breaks my Solaris builds: I need to compile Python using Sun's C compiler when building Python for performance, and GNU's C++ compiler when building Python modules written in C++ for compatibility with C++ libraries used by these modules that can't be compiled with Sun's C++ compiler. So the same Python.h is used by Sun's C compiler (which it was created for in the first place) and GNU's C++ compiler. GNU's C++ compiler fails to compile some modules. Unfortunately I can't recall the exact modules and error messages right now, but including <unistd.h> fixes the problem.
----------------------------------------------------------------------
>Comment By: Dimitri Papadopoulos (papadopo)
Date: 2005-11-02 09:42
Message:
Logged In: YES user_id=52414
Ah, I didn't explain myself clearly. I meant to say that <unistd.h> must be included before other system headers such as <limits.h>, <stdio.h>, <string.h>, <errno.h> and <stdlib.h> in this specific case. I totally agree it has to be included after "pyconfig.h". For example if "pyconfig.h" defined _HPUX_SOURCE and <unistd.h> was included before "pyconfig.h", then wrong system APIs may be triggered (or at least system APIs that were not intended to be specified).
Now why should <unistd.h> be included in front of other system headers? This is because:
1) <unistd.h> triggers specific POSIX or Single UNIX Specification APIs
2) most if not all system headers do not include <unistd.h>, so different APIs may be triggered before including <unistd.h> and after including <unistd.h>
I can't provide a section of the POSIX specification that explicitly states that <unistd.h> must be included before <stdlib.h>. This is however implicit: As you can see <unistd.h> may or may not define macros that trigger a specific API (POSIX.1-1988, SUSv1, SUSv2, SUSv3, etc.). I'll investigate what happens in the case of this specific failure and let you know.
----------------------------------------------------------------------
Comment By: Martin v. Löwis (loewis)
Date: 2005-11-02 05:19
Message:
Logged In: YES user_id=21627
Can you please point to the relevant section of the POSIX specification that states that unistd.h must be included before stdlib.h? As for the specific problem: it may be that you are somehow working around the real problem by including unistd.h before Python.h. Python.h *must* be included before any system headers, as pyconfig.h defines certain compilation options which affect the feature tests.
Including unistd.h before can actually break things, as structs may get defined differently depending on whether pyconfig.h was included first or not. So in the specific example, it would help if you could determine why ::btowc is defined in one case but not in the other.
----------------------------------------------------------------------
Comment By: Dimitri Papadopoulos (papadopo)
Date: 2005-10-25 15:57
Message:
Logged In: YES user_id=52414
Oops... Instead of
    including <unistd.h> fixes the problem.
please read
    including <unistd.h> first fixes the problem.
Here is an example to reproduce the problem:

$ cat > foo.cpp
#include <Python.h>
#include <cwchar>
$
$ g++ -I/usr/local/python/include/python2.4 -c foo.cpp
[...]
/usr/local/gcc-4.0.2/lib/gcc/sparc-sun-solaris2.8/4.0.2/../../../../include/c++/4.0.2/cwchar:145: error: '::btowc' has not been declared
[...]
$
$ cat > foo.cpp
#include <unistd.h>
#include <Python.h>
#include <cwchar>
$
$ g++ -I/usr/local/python/include/python2.4 -c foo.cpp
[...]
$
----------------------------------------------------------------------
You can respond by visiting:
form and post as a method to send request parameter?kasq Jan 23, 2012 4:41 AM
Hi all,
I have a question about the best practice in CQ for sending a parameter between two pages using the POST method in a form.
I have a simple form on page.html like:
<form action="page_2.html" method="POST">
<input type="hidden" name="flag" value="false" />
</form>
and using this form the flag parameter must be sent to page_2.html.
I'm quite new to this stuff in CQ, so thanks in advance for any advice.
Regards,
kasq
1. Re: form and post as a method to send request parameter?Willy jojo Jan 23, 2012 7:50 AM (in response to kasq)
Hey Kasq,
I would like the answer to this as well. I have a form that is not authorable, that posts to itself. It is basically a calculator to return values based on form values.
Anyone have ideas?
2. Re: form and post as a method to send request parameter?Sham HC
Jan 23, 2012 9:31 PM (in response to kasq)
Hi Kasq/Willy jojo,
Both of you are looking to process the form without storing any information in the JCR. In order to accomplish that you need to implement a POST servlet that will handle posts to a custom action.
For more details look at /libs/foundation/components/form/actions; there are nice examples there.
- Configure an overlaid custom form action. /apps/foundation/components/form/actions/<cusotmName> and its properties are mentioned at [0].
- Have a dialog node defined with fields redirect (this will be used under "Action Configuration" when you add a CQ "Form" component) [1].
- To handle the request data create a post servlet [2]
- Configure a Form component and select your custom form action from the dropdown on page A
[0] {"hint":"Sends all submitted values to the post gateway.","sling:resourceType":"foundation/components/form/action","jcr:createdBy":"admin ","jcr:title":"Custom","jcr:created":"Mon Jan 23 2012 23:58:14 GMT-0500","jcr:primaryType":"sling:Folder"}
[1]
<dialog jcr:
<redirect jcr:
</dialog>
[2]
import java.io.IOException;
import java.util.*;
import javax.servlet.ServletException;
import org.apache.commons.lang.StringUtils;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.request.RequestParameter;
import org.apache.sling.api.servlets.OptingServlet;
import org.apache.sling.api.servlets.SlingAllMethodsServlet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceUtil;
import org.apache.sling.api.resource.ValueMap;
/**
* This servlet accepts POSTs to a form begin paragraph but only if the
* selector "test" and the extension "html" is used.
*
* @scr.component metatype="false"
* @scr.service interface="javax.servlet.Servlet"
*
* @scr.property name="sling.servlet.resourceTypes"
* value="foundation/components/form/start"
* @scr.property name="sling.servlet.methods" value="POST"
*
* @scr.property name="service.description" value="Test Form Servlet"
*/
public class TestFormServlet extends SlingAllMethodsServlet implements
OptingServlet {
private static final long serialVersionUID = -1346698467552285051L;
protected final Logger log = LoggerFactory.getLogger(getClass());
protected static final String EXTENSION = "html";
/** @scr.property name="sling.servlet.selectors" */
protected static final String SELECTOR = "test";
/**
 * @see org.apache.sling.api.servlets.OptingServlet#accepts(org.apache.sling.api.SlingHttpServletRequest)
*/
public boolean accepts(SlingHttpServletRequest request) {
return EXTENSION.equals(request.getRequestPathInfo().getExtension());
}
/**
 * @see org.apache.sling.api.servlets.SlingSafeMethodsServlet#doGet(org.apache.sling.api.SlingHttpServletRequest,
* org.apache.sling.api.SlingHttpServletResponse)
*/
@Override
protected void doGet(SlingHttpServletRequest request,
SlingHttpServletResponse response) throws ServletException,
IOException {
//this.doPost(request, response);
}
/**
 * @see org.apache.sling.api.servlets.SlingAllMethodsServlet#doPost(org.apache.sling.api.SlingHttpServletRequest,
* org.apache.sling.api.SlingHttpServletResponse)
*/
@Override
protected void doPost(SlingHttpServletRequest request,
SlingHttpServletResponse response) throws ServletException,
IOException {
final ValueMap values = ResourceUtil.getValueMap(request.getResource());
String flag = values.get("flag", String.class);
log.error("Sample test form servlet executed, properties: " + flag);
//Do a redirect. This is just an example, you can do a forward if you like or something else.
//It is up to you how you decide to handle what happens after the post.
String redirectTo = request.getParameter(":redirect");
if (redirectTo != null) {
response.sendRedirect(redirectTo);
return;
}
}
}
3. Re: form and post as a method to send request parameter?4593029388218911 Jan 24, 2012 10:54 AM (in response to kasq)
I'd first ask the question: why are you passing the flag to the second page? Does the page display differently if the flag is passed - for example, without the flag it displays with a right column and if the flag is present there is no right column on the second page? Or is the flag eventually going to get persisted somewhere (either in the JCR or in a database)?
If the purpose of the flag is to cause the second page to display in a different mode then best practice would not be to use a POST with a parameter but rather to use a GET with a selector in the URL. So for example, rather than having the form action be page_2.html, you have the form action be JavaScript that just sends the user to page_2.flagtrue.html.
Then in your component code you call slingRequest.getRequestPathInfo().getSelectors(), which returns an array, and you can iterate over the array and look for your flag. If the flag is present in the array you display one way and if the flag is missing you display the other way. Or if your layout is totally different you can create a JSP in your template named flagtrue.html and you can create a totally different appearance of your page. One advantage of this over using a POST or a GET with parameters is that it's fully cacheable.
This only works well when you have a fairly well defined set of options, so if you have lots of possible values you are passing in the form then a parameter will work better than a selector. Also if you go down this path you need to pay attention to your dispatcher configuration to ensure you are whitelisting the selectors that are permitted through to the app servers (to prevent DDoS attacks).
If you truly are trying to pass form information to the next page - perhaps because you have a multi-step form, a sort of wizard type of thing - then I would suggest that you might find using GETs rather than POSTs to be easier to work with. The combination of how the cq:Page primary type is structured (with the sling resource type being on the jcr:content child node) means that POST parameters are not a good way to pass information from one page to the next. You want to think through why you are passing the information from one page to the next and then consider if there may be a more appropriate design - which may include:
- Using selectors instead of parameters (again only useful when passing a short known list of options - and really only appropriate when trying to control the display of the next page).
- If you have a multi-page form moving the user interaction down to the client and creating the effect of a multi-page form using JavaScript to hide/show the appropriate form elements and then posting the complete set of data to a servlet for persistence
- For something like the calculator mentioned, a best practice would be an AJAX-focused solution where the form posts to a servlet which returns the results - but without actually refreshing the page
- Passing values from page to page in a cookie (may not be appropriate depending on the type of data). This approach requires that you set the cookie and take actions based on the cookie value all on the client side - if you are taking actions based on the cookie server side your pages can't be cached.
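To make the selector mechanic concrete, here is a language-agnostic sketch (written in Python purely for illustration; in a CQ component you would just call getSelectors()) of how selectors sit in a URL like page_2.flagtrue.html:

```python
# Sketch of Sling-style URL decomposition:
# /content/page_2.flagtrue.html -> selectors ["flagtrue"], extension "html"
def parse_selectors(path):
    name = path.rsplit("/", 1)[-1]   # "page_2.flagtrue.html"
    parts = name.split(".")
    return parts[1:-1], parts[-1]    # (selectors, extension)

selectors, extension = parse_selectors("/content/page_2.flagtrue.html")
print(selectors, extension)  # ['flagtrue'] html
```

Because the selector is part of the path rather than a query string, the rendered result for page_2.flagtrue.html can be cached by the dispatcher as its own document, which is the caching advantage described above.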
4. Re: form and post as a method to send request parameter?kasq Jan 25, 2012 12:08 AM (in response to 4593029388218911)
Hi,
First of all, thanks a lot for the answer. In our case we needed this form to send a POST parameter, first of all to avoid sending parameters directly in the URL.
Second, by recognizing this flag we know whether or not we should show the disclaimer for a page, depending on where the page was opened from.
We have something like 2 pages - pageA and pageB. On pageB we have a disclaimer assigned. There are two possible scenarios:
The first scenario is that pageA contains a link to pageB; if the user clicks on this link the disclaimer is opened and, if it is accepted by the user, the browser redirects the user to pageB.
The second scenario is that a user finds pageB in Google, for example; the page is opened and the disclaimer is shown after the page loads.
The issue is that in the first scenario we would like to use on pageA a form with a flag parameter sent via POST that tells pageB not to open the disclaimer. If this parameter in the request is null, it means pageB should open the disclaimer (scenario 2).
We found a workaround which works at the moment. We are using short-lived cookies which are set on pageA when the user accepts the disclaimer and is redirected to pageB.
I hope that now the situation is more clear.
Regards,
kasq
5. Re: form and post as a method to send request parameter?Willy jojo Jan 25, 2012 1:08 PM (in response to Sham HC)
So in this example what might the form action="" be?
6. Re: form and post as a method to send request parameter?aklimets
Jan 26, 2012 2:43 AM (in response to kasq)
Some quick clarifications:
- if you don't modify anything on the server-side (such as the "calculator" mentioned), use GET instead of POST
- use selectors instead of request parameters if you want to benefit from caching (the cq dispatcher won't cache any request including request parameters (?....), but urls with selectors will be cached); so if your calculator is deterministic based on the parameters, use selectors, so the next requests will be served from the dispatcher; if it is e.g. a search and depends on repository or other external data, you don't want caching and use request parameters again
- you'd handle request parameters or selectors (as mentioned) inside the relevant component by simply using request.getParameter() or reqeust.getRequestPathInfo().getSelectors()
- if you don't use the CQ form components, there is no magic involved, but plain HTML forms; the action would be the target page (e.g. "target.html" or "/fixed/patch/target.html"); if you leave the action empty, the browser will get/post to the current page itself
- I agree that an AJAX solution is probably nicer for pure "calculator" style use cases
Cheers,
Alex
7. Re: form and post as a method to send request parameter?vikramca06 Jan 24, 2013 4:29 AM (in response to aklimets)
Hi,
I would like to know how to submit a form with some input fields to another page using sling POST method.
example:
I have two pages.
* one is form page
* second page gets the form data and sends email using jsp and java
Please help me to do this using POST method
Few questions:
What should i give in form action?
do i need to use sling redirect method or resource?
how to get form values from first page to second page?
giving example code will really helpful.
example form:
<form id="commentForm" method="post" name="checkout" enctype="text/plain" action="/en/cart/checkout/order-confirmation.html">
<fieldset>
<p>
<label for="cname">Name (required, at least 5 characters)</label>
<input id="cname" name="name" minlength="5" type="text" required />
<p>
<label for="cemail">E-Mail (required)</label>
<input id="cemail" type="email" name="email" required />
</p>
<p>
<label for="ccomment">Address (required)</label>
<textarea id="ccomment" name="comment" required></textarea>
</p>
<p>
<label for="creditcard">Credit Card No</label>
<input type="text" name="creditcard" size="4" maxlength="4" required<input type="text" maxlength="4" size="4" name="creditcard" required<input maxlength="4" type="text" size="4" name="creditcard" required <input maxlength="4" type="text" size="4" name="creditcard" required
</p>
<p>
<label for="cardccv">CCV No</label>
<input type="text" class="numeric" name="cardccv" size="4" maxlength="3" required />
</p>
<p>
<input class="btn btn-success" type="submit" value="Place Order"/>
</p>
</fieldset>
</form>
Please reply me ASAP.
Anyone can reply for this post.
Thank you. | https://forums.adobe.com/thread/953075 | CC-MAIN-2018-13 | refinedweb | 2,099 | 54.73 |
First you may be asking yourself, what is a UUID - here are some links to explain:
While most Java classes are included in the Sterling B2B Integrator, the UUID cannot simply be called in the map. This is because you cannot instantiate a UUID like you can do with may other Java classes, this can be accomplished by creating a class that has UUID.
Here is the java user exit contents (getUUID.java file) for this example:
package com.ourUUID;
import java.util.UUID;
public class getUUID {
public String result;
public getUUID() {
result = UUID.randomUUID().toString();
}
public String getStr() {
return result;
}
}
Compile the .java file from your Java Home directory, create the .jar file from your Java Home directory, Stop the SI Services, run Install3rdParty.cmd (or .sh), then setupfiles.cmd (or .sh).
The Extended rule to use in your map where you need the UUID value to be mapped would look like this:
object ob;
ob = new("com.ourUUID.getUUID");
#field = ob.getStr();
There are many different possible date formats to represent a date and time in data. MMDD (2 digit month 2 digit day) is one possible date format. You may notice this functions quite well until 0229 is received and it is rejected as invalid. The reason it is invalid in this scenario is if the year is not provided then the default or "epoch" date's year of 1970 is assumed (UNIX epoch date is January 1, 1970). Since 1970 was not a leap year, February 29 is invalid in this case.
A workaround that can be used to ensure such dates are properly validated, is to read the date value as a string type, attach the correct year, and then convert back to a date format with the year before validating.
Crack the code to add more codes...
So, you have an existing TP Code List and you need to add thousands more codes to it. This could be rather time consuming if entered manually. Here is a way that this can be accomplished within a few steps. You will need to have a list of the codes containing a minimal of the following fields - SENDER_ITEM, RECEIVER_ITEM, DESCRIPTION. It can be a comma delimited file or positional. In the example, I used CSV file.
1. Export the Trading Partner Code List via Deployment, Resource Manager, Import/Export, Export, and select your code list (make a back-up copy).
2. From the Deployment, Schemas, check out the SI_IE_Resources.xsd
You will only need CODE_LIST_XREF_ITEM
3. Create your map using the SI_IE_Resources.xsd for the output.
Here is an example map:
4. Run the translation either using Map Test Utility or a Translation Service.
The output should look like this:
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<SI_RESOURCES>
<CODE_LIST_XREF_ITEM>
<SENDER_ITEM>cat</SENDER_ITEM>
<RECEIVER_ITEM>dog</RECEIVER_ITEM>
<TEXT1/>
<TEXT2/>
<TEXT3/>
<TEXT4/>
<DESCRIPTION>pets</DESCRIPTION>
</CODE_LIST_XREF_ITEM>
<CODE_LIST_XREF_ITEM>
<SENDER_ITEM>Hawaii</SENDER_ITEM>
<RECEIVER_ITEM>Alaska</RECEIVER_ITEM>
<TEXT1/>
<TEXT2/>
<TEXT3/>
<TEXT4/>
<DESCRIPTION>states</DESCRIPTION>
</CODE_LIST_XREF_ITEM>
</SI_RESOURCES>
5. Now you will copy from the first <CODE_LIST_XREF_ITEM> through and including the last </CODE_LIST_XREF_ITEM>
6. Open the Export file from step 1 in a text editor and carefully place/paste the output from the translation CODE_LIST_XREF_ITEM into the correct place/order"
7. Import the Export file into IBM Sterling B2B Integrator again using the Deployment, Resource Manager, Import/Export, Import.
8. Go to the Trading Partner, Code List and check your Code List to verify the new codes have been added.
Have you ever run into a situation where your partner is now sending you special characters that are causing translation errors. While the solution can be a simple change of the Input and Output fields/elements from String X to String Free Format, but what if you do not know what fields this could happen in. How can you change all String fields/elements without having to manually edit every field in the map.
Free Format indicates that any characters in the normal ASCII code range are acceptable in the field, and the translator does not check the characters for compliance.
In order to change all fields and elements from String X to String Free Format, first the map must be saved in .mxl format (File, Save As, change the Save As Type at the bottom to .mxl).
Open the .mxl file with a text editor, you will see it is in xml format.
Do a Search, Replace and
Replace <Format>X</Format>
with <Format><Format>
Save the .mxl file, open the updated file in the Map Editor, save, compile, and check in the updated map.
Map test is such a great tool to use for quick testing/tweaking during development. When developing a document extraction map (or splitting the output into separate documents), the map test result outputs all of the documents one after the other. Here is a nifty way to see how the documents are split after running a map test.
An example of a document extraction rule follows:
String[1024] buffer;
readblock(buffer);
writeblock("NEW DOC");
writeblock(buffer);
while readblock(buffer) do
begin
if left(buffer,3) = "HDR" then
begin
unreadblock();
break;
end
writeblock(buffer);
end
Might output:
HDR
NonHDR
NonHDR
HDR
NonHDR
NonHDR
...but you cannot tell where or if this is really splitting the documents.
To see where the file will really split, add a writeblock line with your own string such as:
while readblock(buffer) do
...
Now you can see the individual documents in the output divided by your line:
NEW DOC
HDR
NonHDR
NonHDR
NEW DOC
HDR
NonHDR
NonHDR
NonHDR
After testing just remember to remove the extra writeblock.
Twice a year, some of us go through a change - the Daylight Saving Time change. Once a year, one lost hour can pose a problem. If data is created on a server in a region that is not affected by these time changes, and the receiver is on a system that is, the hour that is lost in the spring does not exist for the receiver's server and errors as an invalid date. (Such as March 8, 2015 2:30 am for the US)
This is not something that comes up often, but there is a way around the problem if necessary. If the date field is created as a string type field, and then converted into a date in an ISO-8601 date type temporary field, the date is then valid.
Here is a link to this internationally accepted and universal date type:
It is often necessary to use readblock until the end of a file. However, if there is a blank line within the lines of data, readblock may think it has finished when there is still data left in the file. Here is a rule you can modify for this type of need. It is all ready to go so you don't have to worry about getting writer's block and drawing a blank - these writeblocks will remove the blanks! I even threw in a nifty optional line to mark the start of each document so that you can tell in one output file during testing (such as an SI map test),
string[250] buffer;
string[3] match;
integer match_len;
// set these next two variables as desired
match = "HDR"; // the tag of the first record in the document
match_len = 3; // the length of the tag
// read the block we're on and write it
readblock(buffer);
writeblock("START" ); //Optional line during testing to see the start of each new document
writeblock(buffer);
//keep reading and writing records until the end of the document
while eof(0) != 1 do
if readblock(buffer) then
begin
if left(buffer, match_len) = match then
begin
unreadblock();
break;
end
writeblock(buffer);
end
So, it isn't over until it's over. Thank you for viewing! OK, now this blog is over - you can stop reading
The translator report is very useful for quickly identifying problems in the map or data. There are many types of errors it can report including invalid date formats, field lengths, and conditional errors to name a few. Normally, the error is pretty straight forward and shows where the problem occurs. You may be surprised, however, if you do not get an error when the number of segments/records in the data exceeds the segment/record Looping Maximum Usage set in the map. This may even cause a Mandatory Missing error instead!? Here's why:
The translator reads the first tag in the data and searches through the map until it matches the tag. It will continue reading lines with that tag in this segment/record until the Looping Maximum Usage is reached. If there are then additional lines in the data with the same tag, it will not error but instead continue searching through the map to find the next place to match this tag. A couple of examples of where this is necessary are if a record is split or promoted
or in a Sterling B2B Integrator document extraction map where the map needs to reach the maximum usage and be re-invoked instead of loop within the record.
So what about the Mandatory Missing error you may see? In the process of trying to find the next tag match, the map may pass another mandatory segment/record which causes it to not receive data. This creates the Mandatory Segment Missing error.
If there is a particular spot where too many iterations must be reported, the segment/record can be followed by a second segment/record with this tag and a cerror used to report if spillover is read into the extra occurrence record.
If you've ever needed to calculate the number of years, months and days between two dates, say to figure out someone's age. Here is a rule that will do just that.
The fields for Birthday and Today need to be defined as Datetime and be populated. I used the system variable on the Today field to get today's date.
The fields for Years, Months and Days need to be defined as integers and do not need to be populated. You can of course substitute variables for the fields and it'll work the same.
integer b_year, t_year, b_month, t_month, b_day, t_day;
b_year = 0;
t_year = 0;
b_month = 0;
t_month = 0;
b_day = 0;
t_day = 0;
b_year = get years(#Birthday);
t_year = get years(#Today);
b_month = get months(#Birthday);
t_month = get months(#Today);
b_day = get days(#Birthday);
t_day = get days(#Today);
//Calculate years
#Years = t_year - b_year;
//If the birthday is later in the year
//subtract a year
If b_month > t_month then
#Years = #Years - 1;
//If it's the same month but later in the month
//subtract a year
If (b_month = t_month) & (b_day > t_day) then
#Years = #Years - 1;
//Calculate months
#Months = t_month - b_month;
//If birthday is before today, need to add 12
If #Months < 0 then
#Months = #Months + 12;
//Calculate days
#Days = t_day - b_day;
//If birthday is after today, it'll be negative
//not sure how to handle it, but adding 30 will be close
//You could add the number of days in the birth month or today's month
If #Days < 0 then
Begin
#Days = #Days + 30;
If #Months = 0 then
#Months = 12;
#Months = #Months - 1;
End
There are a few other ways you could tweak the rule as well, you could use the julian date for the check to see if the Birthday is later in the year for example.
Depending what you need for the days, you could use an user exit to get the number of days in either the current month or birth month.
You can leave out the part for calculating months and days if you only need the years...
It all just depends on what exactly you need.
If anyone has a great ideas for the number of days, please leave a comment.
As always, any and all comments are welcome. Even a quick “thanks”, to let us know that you found this somewhat useful, is appreciated!
Feel free to request any topics or ideas as well.
Thanks for reading!
Pat Frey – IBM Support
Did you ever need to generate a random number and did not want to have to access a table to get a number, and then add one to it for the next time? There is java class that can do all the work for you - as long as you do not require a consecutive number. The class is included in the IBM Sterling B2B Integrator product so there is no need to run the install3rdparty script.
This class is outlined on the Java Sun site.
An instance of the Random.
In order to guarantee this property, particular algorithms are specified for the class Random.
Java implementations must use all the algorithms shown here for the class Random,
for the sake of absolute portability of Java code. However, subclasses of class Random are
permitted to use other algorithms, so long as they adhere to the general contracts for
all the methods. The algorithms implemented by class Random use a protected utility method
that on each invocation can supply up to 32 pseudorandomly generated bits.
This example map calls to the ‘nextInt’ method which will return the next pseudorandom,
uniformly distributed int value from this random number generator's sequence.
The extended rule is located on the UniqueNumber field.
object UniqueNumber; // this is just declaring your user exit
UniqueNumber = new("java.util.Random"); // this line is initializing the mapping variable to the java class
#UniqueNumber = UniqueNumber.nextInt(); //this is getting the next random number that is generated (nextInt being the function of the class)
and assigning it to the UniqueNumber field
****
There are many other java classes - java.lang.string, for example, which has a function to switch from Upper Case to Lower Case and vise-versa along with many others.
... and if you act now we can send you a document that contains numerous examples and links to more information! It is yours for the asking!
You can open an IBM Support Request and ask your friendly mapping analyst for the document containing the 'Mapping mini-series User exits.pdf' file "Copyright 11/27/2012 by theMaskedMapper"
Do you have to include future dates in your outbound documents, but your back end system will not create them in your input file? What can one do to create these dates during translation?
Relax; you can use “Date Math” to solve your problem!
What is this “Date Math” I speak about? Well it is nothing more than the extended rule Operator “>> “ used for Date modification.
Simply by using an extended rule as such:
#requested_date_field = #shipped_date_field << days(n);
you can add any (n) number of days to a date field.
If you wish to subtract days from a date field, the integer inside the parenthesis (n) will be a negative (-n).
So let’s use this in an example.
You currently have only the date your customer’s order will ship, but you also have to include the expected delivery date. You know from previous deliveries it will take 3 days to go from your warehouse to the customer’s distribution center. The two fields in your map which will hold the date are:
#Date_Shipped
#Expected_Delivery_Date
The extended rule then will be:
#Expected_Delivery_Date = #Date_Shipped <<days(3);
If our Date_Shipped was 09/14/2014 our Expected_Delivery_Date would be 09/17/2014.
Now let’s say your partner will only accept delivery on weekdays. So Saturday and Sunday the distribution center will not be open to receive shipments. To make sure you avoid this situation, you will have to first find out the day of the week you are shipping, and then if the third day from when you shipped is either Saturday or Sunday, you will have to add an additional one or two days so the Expected Delivery will be on the following Monday.
string[3] day_of_the_week; //Will return the actual day of the week the date occurs
day_of_the_week = ""; //initialize the variable
//add 3 days
#Expected_Delivery_Date = #Date_Shipped_ << days(3);
//convert the new date to a 3 character string
//SUN, MON, TUE, WED, THU, FRI, or SAT
strdate(#Expected_Delivery_Date,"%a", day_of_the_week);
//Check the value to see if we need to add 1 or 2 days for the weekend
if day_of_the_week = "SAT" then
#Expected_Delivery_Date = #Expected_Delivery_Date << days(2);
if day_of_the_week = "SUN" then
#Expected_Delivery_Date = #Expected_Delivery_Date << days(1);
//If it is neither Saturday or Sunday then the original value for the Expected_Delivery_Date will be retained in the data.
You can do other conversions for dates, such as months and years. You would simply substitute months(n) or years(n) in the rule. So go ahead and ship your goods and take the weekend off!
Creating a map and browsing to the schema (.xsd) to build the XML side causes error:
Out of memory - this DTD is too complex to transform to a map with the amount of memory available
The Map Editor uses a third party library msxml4.dll parser to create and convert any document to DOM "document Object Model". It defines the logical structure of documents and the way a document is accessed and manipulated.
The Map Editor can only use 2 gb (roughly) of RAM since it is a 32 bit application and the DOM returned from the parser is larger than the 2 GB limit that the Map Editor can handle; this is where the problem lies. If it were a 64 bit program, almost 7 GB of RAM could be used. So the answer is not simply throwing more memory at it since by limit the 32-bit application is set to 2 GB and some schemas require more than that to build the map.
In short, the schemas that are being used to create the map would have to be trimmed down so the DOM that is created is under the 2 GB ceiling. Currently, this would be the only resolution that we could support.
As for whether the Map Editor can be converted to a 64-bit application, that would have to be requested through an enhancement request. If you would like to explore that you can create one from your Support page.
Sometimes there is a problem with compiling or translating a map, however it is not known exactly where the fault originates. Perhaps the map will not compile and it gives an unspecific error, or the translator error is not easy to interpret, or the map goes into a loop. It can be a huge task to look through every field in every record within every group of both sides of map to find the issue. It feels like finding a needle in a haystack (or being a bug in the map, we could say a beetle in a coal stack... either way it sounds frustrating and time consuming)!
The method I use to narrow in on the cause is to overwrite the output with a simple saved map .ddf file with one record. Now, if I still have the same difficulty or if it goes away, I have narrowed down the side it is on (if there is only one problem!). Once I know which side of the map contains the source of the trouble, I can continue narrowing down the groups and records on that side until the issue no longer occurs. I call this process the 'Cut and Try' because each time I cut out a section I try again to see if it will work. Once the smallest section causing the error is identified, fewer details will be left to research. This saves a lot of time and effort by quickly narrowing down the area the problem is in. Finally, I open the original map again and fix the suspected cause to verify that it was the only issue and resolves the problem with compiling or translating the map. Note: this can also work with finding a problem in a large amount of data.
Reduce to help deduce!
The most common use of readblock is to loop thru a while do loop reading in all lines of data.
Virtually all examples will have readblock as part of the while do statement:
while readblock(buffer) do
begin
...
end
This will work fine in almost all scenarios, the while loop continues to loop as long as readblock successfully reads a block of data.
As soon as readblock hits a blank line for example, then it will return a false value and cause the while statement to be false and no longer loop thru the rest of the data.
To make sure you're reading all of the lines, you need to use the eof function instead:
while eof(0) do
readblock(buffer);
...
This will loop until eof returns a value of false, which should only be when it hits the eof (hence the name).
Depending on what you want to do with the buffer, you may want to check to make sure it wasn't a blank line that was read in with a simple if:
If buffer != "" then
Any rule that works with using readblock in the while statement should work just as fine with using eof instead.
Hopefully that all makes sense, if not please leave a comment and let me know.
As always, any and all comments are welcome.
Everyone is probably familiar with using standard rules for accumulators.
It's pretty straight forward and there are numerous examples of using them to calculate totals, prices, extended prices etc.
Sometimes you need to reference the accumulator within an extended rule instead. Say you only wanted the extended price of items with specific qualifiers, there's no way within the standard rules to qualify them. You can however do it within an extended rule:
If #qualifier = "ABC" then
#price = accum(2);
In that example, it will only store the value in accumulator 2 into the price field with the qualifier is ABC.
You can specify any currently active accumulator number within the ( ) and it will be the same as referencing them within the standard rule.
Another use is to get the remainder of a division, into the extended rule. Currently there are no extended rules that allow you to get the remainder after a division. However you can use the Modulo function with the accumulator standard rule and then reference that accumulator via the extended rule to get the value.
It's the same way as before just the word accum and then accumulator number with ( ).
If accum(1) = 3 Then
...
Assuming that accumulator 1 has already used the modulo standard rule, this extended rule will only execute if the remainder is 3. | https://www.ibm.com/developerworks/community/blogs/mappingandtranslation?maxresults=15&lang=en | CC-MAIN-2016-44 | refinedweb | 3,768 | 67.79 |
This is a basic tutorial on C++. If this has already been done, sorry. Part of the reason for this tutorial is to ring in my upgraded membership to AO member instead of newbie. The other reason is to educate those who need it. Let's start the tutorial.
A beginners guide to C++ (Written by beginners for beginners)
First off, finding a good compiler. I use MSVC, but I have heard good things about Dev-C++; there are other forum threads which address this question. The most important part of coding is making readable code. I will include some examples that are in good form in another post. I will also talk about form throughout this tutorial. To include text that won't be compiled (meaning the C++ compiler will skip over it) use this /*insert text here that you wish to be ignored. This works for as many lines as are kept between the opening slash-asterisk and the closing asterisk-slash*/ or //this is only good for the current line.
1) The libraries; you must always select a library to use. The only library you will be dealing with right now is "iostream.h". The code for a library looks like this:
#include <iostream.h>
#include must always be used and the library must always be between <>. This is always the first line of code. Next line of code should be:
int main()
Just include this and don't ask why yet. We will hit that later. Remember, int main() must always be in your program, and the line itself sits outside (just before) the brackets.
Let's skip ahead to the fun stuff:
/* Ian A. AKA Dark Star Prometheus 10/19/03 Hello world*/
#include <iostream.h>
int main()
{ //always use this bracket here
    cout<<"Hello World!"; //cout *, "" *!, ;*|
    return(0); //return (0) just tells the computer that the program executed normally
} //need bracket here too.
* cout stands for console out; it just means that the computer is going to display text.
*! "" are just used to denote text that will be displayed.
*| ; this symbol just marks the end of a statement. You will almost always use this (except for if statements and some others; that will come later).
You should indent the lines after the bracket so your code is easier to read.
2) More with cout. cout can be used for more than just printing text, it can also play with numbers. Mathematical operations in C++ are addition "+", subtraction "-", multiplication "*", and division "/". That's it. Yes, that is really it. Oh, so I lied, shoot me. There is also modulus "%". This mods the numbers, example: 22 % 5 = 2. It is the remainder: 5 goes into 22 four times (making 20), and 2 is left over from 22. Anyway, mod is used often, so learn it.
Now on to practical use of this info.
cout<< Oh yeah, I forgot to mention before: you always need "<<"; it signifies the start of a new command.
cout<< 20 * 5;
the above line will not print "20 * 5" when the program is compiled, it will display 100, which is the answer. This is how one creates math problems in C++.
Some code to demonstrate:
#include <iostream.h>
int main()
{
    cout<<"This program will display the area of a circle"<<endl; //endl just skips to the next line
    //when the program is compiled and run.
    cout<<"Radius = "<<10<<endl;
    cout<<"Area = "<<3.14*10*10;
    return(0);
}
When run, this program will display the following lines:
This program will display the area of a circle
Radius = 10
Area = 314
See what it did? It did exactly what I told it to do. It multiplied 3.14 * 10 * 10, the equation for calculating the area of a circle. This is all for now. I will do more tomorrow.
Any questions/comments/flames should be posted or emailed.
If you have an urgent question feel free to E-mail me at dsp2600@bluefrog.com
More on this tomorrow. Bye!
"The wise programmer is told about Tao and follows it. The average programmer is told about Tao and searches for it. The foolish programmer is told about Tao and laughs at it.
If it were not for laughter, there would be no Tao."
I don't know about C, but this tutorial makes me curious and want to know much more. Could you suggest a step-by-step path to learning the C language, a kind of guidance like chapter by chapter in a book or course?
thanks redhawk
Interesting Stuff. Thanks for the tutorial!
Before you write more tutorials, check the search option on this site; you will find out that there are more C++ tutorials here, maybe with even more information. Negative has made a thread with an index of all tutorials on this site.
[shadow]OpenGL rules the game[/shadow]
This is just the start; I am going to post a much better C++ tutorial in a few days (maybe even today). I am still writing it. What the new C++ tutorial will have:
Better format, easier to read
Lots more info, variables, if statements, loops, casting variables, etc.
*Might include a "how to make libraries" minipost*
Downloadable examples of my own code (I wrote it in Notepad; Notepad is great for writing code)
I will try to have this out ASAP. Expect it in two days, but look today.
P.S. Also, cin statements and boolean operators will definitely be discussed.
maybe you could include some links to download the compiler like
click here to hack my computer and delete all my important files
I can't think of the guy's name, but he asked about C. I know a bit of it. Not anything to brag about, but I know some. The same program that he wrote could be done in C.
#include <stdio.h>

int main(void)
{
    printf("Hello World:\n");
    return 0;
}
Now the second one that he did is a little different in C.
#include <stdio.h>
#include <stdlib.h>
int main()
{
    int number1, number2, total;

    printf("What is the first Number:");
    scanf("%d", &number1);
    printf("What is the second Number:");
    scanf("%d", &number2);

    total = number1 + number2;
    printf("Total is %d\n", total);
    system("PAUSE");
    return 0;
}
That is that code in C, now to understand it. Instead of adding on to redhawk's tutorial PM me and I will explain it to you.
Oh and the C version is for DevC++/Bloodshed
Might include a "how to make libraries" minipost
Already done by me... go and search for a tutorial named "The C++ Preprocessor", or just search for "preprocessor" and you will find it.
Also, your tutorial looks very old because you used an old style of programming in C++. Instead of:
Code:
#include <iostream.h>
it should look like this:
Code:
#include <iostream>
using namespace std;
Anyways, using namespace std is a LOT more common in today's C++ code... thought I would point that out, but overall a good tutorial.
Support your right to arm bears.
^^This was the first video game which i played on an old win3.1 box
White_eskimo is correct, that code won't compile properly on g++ and other strict compilers. You should instead write it like so:
Code:
#include <iostream>
int main()
{
    std::cout << "Hello world!" << std::endl;
    return 0;
}
You can just put 'using namespace std' at the top of the code under the header includes, but I personally consider that to be bad style (and it defeats the whole point of namespaces if you're just going to ignore them anyway).
Also, I found the code in this tutorial incredibly difficult to read, partially because it was in a variable-width font (which can be over-ridden using the BB code tags) and I don't see why you have comments like:
Code:
//cout *, "" *!, ;*|
BTW, cout doesn't necessarily mean that some text will be displayed. Yes, in 90% of cases the standard output will be to the console, but it can be redirected to files or other devices. Obviously this doesn't happen in the Hello world type of programs, but it is a point that needs to be emphasised.
Other than those points above, it's a good introductory tutorial to C++.
One of the major benefits of C++ as compared to some other languages such as C is the concept of object-oriented programming. I am sure there is plenty of reference on OO, but if you would like to make your tutorial more effective, you might want to mention this.
Other important things to mention:
(in no particular order, just coming to mind randomly)
makefiles
debugging
using namespace std
cin
variable assignments
pointers
pass by reference
pass by value
if, if-else, while, do-while, for
classes
switches
overloading
arrays
functions
and of course object oriented
On 15 October 2016 at 05:26, Steven Gawroriski <ste...@multiphasicapps.net> wrote:
> On Sat, 15 Oct 2016 05:59:23 +0900
>
> PS: re-send from subscribed address
>
> Hello,
>
> Not a developer of Fossil, but this could have potential compatibility
> issues in the future with unknown options being passed. Say someone
> builds with `--with-butter=salted`. Then later on Fossil adds a switch
> that has the same name `--with-butter=`, but it takes a different kind
> of argument that is completely incompatible. If the behavior is relied
> upon it cannot really be taken back once it is out in the open.

This won't be a problem - Fossil is a fat-free scm. :-p

I guess I better leave a serious response here too... I don't know how other projects handle non-standard build features (apart from rejecting them outright), but a possible solution is for Fossil to agree to never use the --with-my-<feature> namespace, or something similar if they don't like -my-.
In the previous chapter, we used the Label class, which is one of the many widgets that Kivy provides. You can think of widgets as interface blocks that we use to set up a GUI. Kivy has a complete set of widgets – buttons, labels, checkboxes, dropdowns, and many more. You can find them all in the Kivy API, under the kivy package.
We are going to learn the basics of how to create our own personalized widget without affecting the default configuration of Kivy widgets. In order to do that, we will use inheritance to create the widget class in the widgets.py file:
from kivy.app import App


class widgets(App):
    pass


if __name__ == '__main__':
    widgets().run()
widgets.kv file:
BoxLayout:
    Button:
        text: "Hello"
    Button:
        text: "Beautiful"
    Button:
        text: "World"
In this way, we can play around a lot with widgets, labels, and buttons in Kivy. We will continue in the next chapter.
Hi
Maybe I'm missing something here but I can't seem to put abstract classes anywhere but in the same package as the class that extends it.
For example, I create 2 classes like this:
package abstracttest.testpackage2;

public abstract class testAbstractClass {
    abstract void testMe();
}
package abstracttest.testpackage1;

import abstracttest.testpackage2.testAbstractClass;

public class MainClass extends testAbstractClass {

    @Override
    void testMe() {
        throw new UnsupportedOperationException("Not supported yet.");
    }
}
Looks perfectly valid to me, however, Netbeans reports this error "method does not override or implement a method from a supertype". Why?
If I move the abstract class into the same folder as the mainClass it works! What's this all about? | http://www.javaprogrammingforums.com/whats-wrong-my-code/7265-issue-abstract-classes.html | CC-MAIN-2015-11 | refinedweb | 109 | 58.69 |
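A likely cause (my reading, not an answer given in the thread itself): the abstract method has default (package-private) access, so it is invisible outside abstracttest.testpackage2 and therefore cannot be overridden from another package. Declaring it protected (or public) should fix it. A minimal single-package sketch of the widened declaration:

```java
// Sketch: widening the access modifier so subclasses in other
// packages can see, and thus override, the abstract method.
abstract class TestAbstractClass {
    protected abstract String testMe();   // was package-private: abstract void testMe();
}

class MainClass extends TestAbstractClass {
    @Override
    protected String testMe() {
        return "overridden";
    }
}
```

With package-private access, the subclass's method does not actually override anything, which is exactly what the "method does not override or implement a method from a supertype" error reports.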
Anatomy of a robot¶
Note
The following assumes you have some familiarity with python, and is meant as a primer to creating robot code using the python version of wpilib. See our python primer for a brief introduction to the python programming language.
This tutorial will go over the things necessary for very basic robot code that can run on an FRC robot using the python version of WPILib. Code that is written for RobotPy can be run on your PC using various simulation tools that are available.
Create your Robot code¶
Your robot code must start within a file called
robot.py. Your code
can do anything a normal python program can, such as importing other
python modules & packages. Here are the basic things you need to know to
get your robot code working!
Importing necessary modules¶
All of the code that actually interacts with your robot’s hardware is contained in a library called WPILib. This library was originally implemented in C++ and Java. Your robot code must import this library module, and create various objects that can be used to interface with the robot hardware.
To import wpilib, it’s just as simple as this:
import wpilib
Note
Because RobotPy implements the same WPILib as C++/Java, you can learn a lot about how to write robot code from the many C++/Java focused WPILib resources that already exist, including FIRST’s official documentation. Just translate the code into python.
Robot object¶
Every valid robot program must define a robot object that inherits from either
wpilib.IterativeRobot or
wpilib.SampleRobot. These
objects define a number of functions that you need to override, which get
called at various times.
- wpilib.IterativeRobot functions
- wpilib.SampleRobot functions
Note
It is recommended that inexperienced programmers use the IterativeRobot framework, which is what this guide will discuss.
An incomplete version of your robot object might look like this:
class MyRobot(wpilib.IterativeRobot):

    def robotInit(self):
        self.motor = wpilib.Jaguar(1)
The
robotInit function is where you initialize data that needs to be
initialized when your robot first starts. Examples of this data include:
- Variables that are used in multiple functions
- Creating various wpilib objects for devices and sensors
- Creating instances of other objects for your robot
In python, the constructor for an object is the
__init__ function. Instead
of defining a constructor for your main robot object, you can override
robotInit instead. If you do decide that you want to override
__init__, then
you must call
super().__init__() in your
__init__ method, or an
exception will be thrown.
Adding motors and sensors¶
Everything that interacts with the robot hardware directly must use the wpilib
library to do so. Starting in 2015, full documentation for the python version
of WPILib is published online. Check out the API documentation (
wpilib)
for details on all the objects available in WPILib.
Note
You should only create instances of your motors and other WPILib hardware devices (Gyros, Joysticks, Sensors, etc) either during or after robotInit is called on your main robot object. If you don’t, there are a lot of things that will fail.
Creating individual devices¶
Let’s say you wanted to create an object that interacted with a Jaguar motor
controller via PWM. First, you would read through the table (
wpilib) and
see that there is a
Jaguar object. Looking further, you can see that
the constructor takes a single argument that indicates which PWM port to
connect to. You could create the Jaguar object that is using port 4 using the
following python code in your robotInit method:
self.motor = wpilib.Jaguar(4)
Looking through the documentation some more, you would notice that to set
the PWM value of the motor, you need to call the
Jaguar.set() function. The docs
say that the value needs to be between -1.0 and 1.0, so to set the motor
full speed forward you could do this:
self.motor.set(1)
Other motors and sensors have similar conventions.
Robot drivetrain control¶
For standard types of drivetrains (2 or 4 wheel, mecanum, kiwi), you’ll want to use the various included class to control the motors instead of writing your own code to do it. For most standard drivetrains, you’ll want to use one of three classes:
- wpilib.DifferentialDrive for differential drive/skid-steer drive platforms such as 2 or 4 wheel platforms, the Kit of Parts drive base, "tank drive", or West Coast Drive.
- wpilib.KilloughDrive for Killough (Kiwi) triangular drive platforms.
- wpilib.MecanumDrive for mecanum drive platforms.
For example, when you create a
DifferentialDrive object, you can pass in motor controller instances:
l_motor = wpilib.Talon(0)
r_motor = wpilib.Talon(1)
self.robot_drive = wpilib.drive.DifferentialDrive(l_motor, r_motor)
Or you can pass in motor controller groups to use more than one controller per side:
self.frontLeft = wpilib.Spark(1)
self.rearLeft = wpilib.Spark(2)
self.left = wpilib.SpeedControllerGroup(self.frontLeft, self.rearLeft)

self.frontRight = wpilib.Spark(3)
self.rearRight = wpilib.Spark(4)
self.right = wpilib.SpeedControllerGroup(self.frontRight, self.rearRight)

self.drive = wpilib.drive.DifferentialDrive(self.left, self.right)
Once you have one of these objects, it has various methods that you can use to control the robot via joystick, or you can specify the control inputs manually.
Robot Operating Modes (IterativeRobot)¶
During a competition, the robot transitions into various modes depending on the state of the game. During each mode, functions on your robot class are called. The name of the function varies based on which mode the robot is in:
- disabledXXX - Called when robot is disabled
- autonomousXXX - Called when robot is in autonomous mode
- teleopXXX - Called when the robot is in teleoperated mode
- testXXX - Called when the robot is in test mode
Each mode has two functions associated with it. xxxInit is called when the robot first switches over to the mode, and xxxPeriodic is called 50 times a second (approximately – it’s actually called as packets are received from the driver station).
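The calling pattern can be sketched as a small dispatcher. This is an illustration of the pattern only, not wpilib's actual implementation, and the class name here is invented: on a mode transition the matching Init method runs once, then the Periodic method runs for every subsequent driver-station packet.

```python
class ModeDispatcher:
    """Illustrative only: calls <mode>Init once per transition,
    then <mode>Periodic for every packet while in that mode."""

    def __init__(self, robot):
        self.robot = robot
        self.mode = None

    def on_packet(self, mode):
        if mode != self.mode:                      # mode transition
            self.mode = mode
            init = getattr(self.robot, mode + 'Init', None)
            if init is not None:
                init()                             # runs exactly once
        periodic = getattr(self.robot, mode + 'Periodic', None)
        if periodic is not None:                   # ~50 times per second
            periodic()
```

Feeding it two "teleop" packets would call teleopInit once and teleopPeriodic twice, which is exactly the behavior described above.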
For example, a simple robot that just drives the robot using a single joystick might have a teleopPeriodic function that looks like this:
def teleopPeriodic(self):
    self.robot_drive.arcadeDrive(self.stick)
This function gets called over and over again (about 50 times per second) while the robot remains in teleoperated mode.
Warning
When using the IterativeRobot as your Robot class, you should avoid doing the following operations in the xxxPeriodic functions or functions that have xxxPeriodic in the call stack:
- Never use Timer.delay(), as you will momentarily lose control of your robot during the delay, and it will not be as responsive.
- Avoid using loops, as unexpected conditions may cause you to lose control of your robot.
Main block¶
Languages such as Java require you to define a ‘static main’ function. In python, because every .py file is usable from other python programs, you need to define a code block which checks for __main__. Inside your main block, you tell WPILib to launch your robot’s code using the following invocation:
if __name__ == '__main__':
    wpilib.run(MyRobot)
This simple invocation is sufficient for launching your robot code on the robot, and also provides access to various RobotPy-enabled extensions that may be available for testing your robot code, such as pyfrc and robotpy-frcsim.
Putting it all together¶
If you combine all the pieces above, you end up with something like this below, taken from one of the samples in our github repository:
#!/usr/bin/env python3
"""
This is a good foundation to build your robot code on
"""

import wpilib
import wpilib.drive


class MyRobot(wpilib.IterativeRobot):

    def robotInit(self):
        """
        This function is called upon program startup and
        should be used for any initialization code.
        """
        self.left_motor = wpilib.Spark(0)
        self.right_motor = wpilib.Spark(1)
        self.drive = wpilib.drive.DifferentialDrive(self.left_motor, self.right_motor)
        self.stick = wpilib.Joystick(1)
        self.timer = wpilib.Timer()

    def autonomousInit(self):
        """This function is run once each time the robot enters autonomous mode."""
        self.timer.reset()
        self.timer.start()

    def autonomousPeriodic(self):
        """This function is called periodically during autonomous."""

        # Drive for two seconds
        if self.timer.get() < 2.0:
            self.drive.arcadeDrive(-0.5, 0)  # Drive forwards at half speed
        else:
            self.drive.arcadeDrive(0, 0)  # Stop robot

    def teleopPeriodic(self):
        """This function is called periodically during operator control."""
        self.drive.arcadeDrive(self.stick.getY(), self.stick.getX())


if __name__ == "__main__":
    wpilib.run(MyRobot)
There are a few different python-based robot samples available, and you can find them in our github examples repository.
Next Steps¶
This is a good foundation for building your robot; next, you will probably want to read about Running robot code.
Last post 02-09-2007 12:37 PM by Rossoneri. 9 replies.
Hi,
I have this dropdownlist that breaks xhtml w3c validation because it
generates a select tag with a javascript property. this only happens
when autopostback is set to true.
so the validation error that I get is this:
Error
Line 263 column 86:
there is no attribute "language".
...PostBack('dropArchive','')" language="javascript" id="dropArchive">
this is the code.
this control:
<asp:dropdownlist id="dropArchive" runat="server" AutoPostBack="true"></asp:dropdownlist>
generates this html:
<select name="dropArchive" onchange="__doPostBack('dropArchive','')" language="javascript" id="dropArchive">
    <option value="1">Episode 1</option>
    <option value="2">Episode 2</option>
    <option selected="selected" value="3">Episode 3</option>
</select>
any idea how I can stop the dropdownlist from generating the
language="javascript" bit?
would the dropdown work without it? (I guess so?)
Thanks!
have you tried doing something like
droparchive.attributes.remove("language","javascript")
in the page_load or page_init event in your codebehind?
If you take off the autopostback='true' then ASP.Net won't add the onchange='..' to the dropdown but it also won't postback when you change the value so you'd then have to add the javascript your self to do the postback, which seems a little odd just to pass xhtml validation.
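If you go that route, the replacement handler is small. A hypothetical sketch (the control name and the __doPostBack function come from the thread; here the postback function is passed in as a parameter so the wiring can be exercised without ASP.NET):

```javascript
// Attach an onchange handler that performs the postback manually,
// so AutoPostBack, and with it the invalid language attribute, can stay off.
function attachPostBack(selectEl, doPostBack, controlName) {
  selectEl.onchange = function () {
    doPostBack(controlName, '');   // same call ASP.NET would have emitted
  };
}
```

In a real page you would call attachPostBack(document.getElementById('dropArchive'), __doPostBack, 'dropArchive') once the page has loaded.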
thanks, that was a good idea.
I tried that, but it only accepts 1 attribute, so I tried:
dropArchive.Attributes.Remove("javascript");
dropArchive.Attributes.Remove("language");
dropArchive.Attributes.Remove("language=\"javascript\"");
and even
dropArchive.Attributes.Clear();
but it didn't do anything.
any other ideas?
protected override void Render(HtmlTextWriter output)
{
    StringWriter writer1 = new StringWriter();
    HtmlTextWriter writer2 = new HtmlTextWriter(writer1);
    base.Render(writer2);
    writer2.Close();

    this.xHtml = writer1.GetStringBuilder().ToString();
    this.xHtml = this.xHtml.Replace("<script language=\"javascript\">", "<script type=\"text/javascript\">");
    this.xHtml = this.xHtml.Replace("<script language=\"javascript\" type=\"text/javascript\">", "<script type=\"text/javascript\">");
    this.xHtml = this.xHtml.Replace("language=\"javascript\"", string.Empty);

    output.Write(this.xHtml);
}
cool thanks!
so how does it work?
I need to paste this method in my page and then call it from pageload?
also, it looks like the method is expecting an HtmlTextWriter?
You create a new class, call it XHtmlPage for example. You set it up like:

public class XHtmlPage : System.Web.UI.Page
{
    // Paste the Render function above in here...
    public override void Render
    // You can rename the variables as you wish - Reflector gives them generic names as you see above
}

That's it. You will have to add some namespaces to the top of this class, or qualify the StringWriter and HtmlTextWriter with their appropriate namespaces. I also noticed that in the function above, xHtml was declared as a private variable, so you can either do that as well or add string xHtml = string.Empty at the top of the Render method.

Then when you set up a page like Default.aspx, when you view the codebehind you see the class has

_Default : System.Web.UI.Page

You change it to

_Default : XHtmlPage

And of course if the class you created has a different namespace, then you will have to bring that in as well with a using statement at the top or by qualifying it. Does this make sense?
Thanks man, that worked perfectly!
now let's hope valencia beats inter next week in the champions league!
__gc
Note
This topic applies only to version 1 of Managed Extensions for C++. This syntax should only be used to maintain version 1 code. See Classes and Structs (Managed) for information on using the equivalent functionality in the new syntax.
Declares a __gc type.
__gc array-specifier
__gc class-specifier
__gc struct-specifier
__gc interface-specifier
__gc pointer-specifier
__gc new
Remarks
A __gc type is a C++ language extension that simplifies .NET Framework programming by providing features such as interoperability and garbage collection.
Note
Every member function of an abstract __gc class must be defined unless the member function is pure virtual.
In Managed Extensions for C++, the equivalents to a C# class and a C# struct are as follows:
Example
In the following example, a managed class (X) is declared with a public data member, which is manipulated through a managed pointer:
// keyword__gc.cpp
// compile with: /clr:oldSyntax
#using <mscorlib.dll>
using namespace System;

__gc class X {
public:
   int i;
   int ReturnInt() { return 5; }
};

int main() {
   // X is a __gc class, so px is a __gc pointer
   X* px;
   px = new X;   // creates a managed object of type X
   Console::WriteLine(px->i);

   px->i = 4;   // modifies X::i through px
   Console::WriteLine(px->i);

   int n = px->ReturnInt();   // calls X::ReturnInt through px
   Console::WriteLine(n);
}
Output
0
4
5
#include <QuadNodeCartesianEuclid.h>
Add a point at polar coordinates (angle, R) with content input.
May split node if capacity is full
If the query point is not within the quadnode, the distance minimum is on the border. Need to check whether the extremum is in between. (Maybe not necessary due to copy elision)
Safe to call in parallel.
Check whether the region managed by this node lies outside of a Euclidean circle.
Remove content at coordinate pos.
May cause coarsening of the quadtree
Shrink all vectors in this subtree to fit the content.
Call after quadtree construction is complete, causes better memory usage and cache efficiency | https://networkit.iti.kit.edu/api/doxyhtml/class_networ_kit_1_1_quad_node_cartesian_euclid.html | CC-MAIN-2018-09 | refinedweb | 104 | 66.84 |
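The "may split node if capacity is full" behaviour can be sketched as follows. This is a simplified illustration, not NetworKit's actual QuadNodeCartesianEuclid implementation; the capacity rule, the member names, and the four-way split scheme are assumptions.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Simplified sketch of a capacity-based quad node (NOT NetworKit's code).
struct QuadNode {
    double x0, y0, x1, y1;                           // managed rectangle
    std::size_t capacity;                            // split threshold
    std::vector<std::pair<double, double>> points;
    std::vector<QuadNode> children;                  // empty until a split

    QuadNode(double ax0, double ay0, double ax1, double ay1, std::size_t cap)
        : x0(ax0), y0(ay0), x1(ax1), y1(ay1), capacity(cap) {}

    void addPoint(double px, double py) {
        if (!children.empty()) {                     // already split: recurse
            childFor(px, py).addPoint(px, py);
            return;
        }
        points.emplace_back(px, py);
        if (points.size() > capacity) split();       // capacity full: split
    }

    // Create four equal quadrants and redistribute the stored points.
    void split() {
        double mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
        children.push_back(QuadNode(x0, y0, mx, my, capacity));
        children.push_back(QuadNode(mx, y0, x1, my, capacity));
        children.push_back(QuadNode(x0, my, mx, y1, capacity));
        children.push_back(QuadNode(mx, my, x1, y1, capacity));
        for (const auto &p : points)
            childFor(p.first, p.second).addPoint(p.first, p.second);
        points.clear();
    }

    QuadNode &childFor(double px, double py) {
        double mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
        return children[(px >= mx ? 1 : 0) + (py >= my ? 2 : 0)];
    }

    std::size_t size() const {                       // points in the subtree
        std::size_t n = points.size();
        for (const auto &c : children) n += c.size();
        return n;
    }
};
```

The matching "coarsening" on removal would be the inverse: when the children's combined point count drops below the capacity, fold them back into the parent.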
In the printing application, the code for validating the label is contained in the lp/cmd/lpsched/validate.c file.
Some types of applications need to compare two given labels. For example, an application might need to determine if one label strictly dominates another label. These applications use API functions that compare one label to another label.
The printing application, however, is based on a range of labels. A printer is configured to accept printing requests from a range of different labels. Therefore, the printing application uses API functions that check a label against a range. The application checks that the label from the remote host falls within the range of labels that the printer allows.
In the validate.c file, the printing application uses the blinrange() function to check the remote host's label against the label range of the printer. This check is made within the tsol_check_printer_label_range() function, as shown here:
static int
tsol_check_printer_label_range(char *slabel, const char *printer)
{
    int in_range = 0;
    int err = 0;
    blrange_t *range;
    m_label_t *sl = NULL;

    if (slabel == NULL)
        return (0);

    if ((err = (str_to_label(slabel, &sl, USER_CLEAR,
        L_NO_CORRECTION, &in_range))) == -1) {
        /* str_to_label error on printer max label */
        return (0);
    }

    if ((range = getdevicerange(printer)) == NULL) {
        m_label_free(sl);
        return (0);
    }

    /* blinrange returns true (1) if in range, false (0) if not */
    in_range = blinrange(sl, range);

    m_label_free(sl);
    m_label_free(range->lower_bound);
    m_label_free(range->upper_bound);
    free(range);

    return (in_range);
}
The tsol_check_printer_label_range() function takes as parameters the label returned by the get_peer_label() function and the name of the printer.
Before comparing the labels, tsol_check_printer_label_range() converts the string into a label by using the str_to_label() function.
The label type is set to USER_CLEAR, which produces the clearance label of the associated object. The clearance label ensures that the appropriate level of label is used in the range check that the blinrange() function performs.
The sl label that is obtained from str_to_label() is checked to determine whether the remote host's label, slabel, is within the range of the requested device, that is, the printer. This label is tested against the printer's label. The printer's range is obtained by calling the getdevicerange() function for the selected printer. The range is returned as a blrange_t data structure.
The printer's label range in the blrange_t data structure is passed into the blinrange() function, along with the clearance label of the requester. See the blinrange(3TSOL) man page.
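The range check itself can be sketched with stand-in types. These are deliberate simplifications (real m_label_t labels are not totally ordered integers, and real code must call blinrange(3TSOL)), but they show what "in range" means: the label must dominate the range's lower bound and be dominated by its upper bound.

```c
#include <assert.h>

/* Stand-ins for m_label_t and blrange_t: here a label is just an
 * integer sensitivity level, so "dominates" collapses to >=. */
typedef int label_t;

typedef struct {
    label_t lower_bound;
    label_t upper_bound;
} range_t;

static int dominates(label_t a, label_t b) {
    return a >= b;
}

/* Mirrors what blinrange() decides: 1 if in range, 0 if not. */
int label_in_range(label_t label, const range_t *range) {
    return dominates(label, range->lower_bound) &&
           dominates(range->upper_bound, label);
}
```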
The following code excerpt shows the _validate() function in the validate.c file. This function is used to find a printer to handle a printing request. This code compares the user ID and the label associated with the request against the set of allowed users and the label range that is associated with each printer.
/*
 * If a single printer was named, check the request against it.
 * Do the accept/reject check late so that we give the most
 * useful information to the user.
 */
if (pps) {
    (pc = &single)->pps = pps;

    /* Does the printer allow access to the user? */
    if (!CHKU(prs, pps)) {
        ret = MDENYDEST;
        goto Return;
    }

    /* Check printer label range */
    if (is_system_labeled() && prs->secure->slabel != NULL) {
        if (tsol_check_printer_label_range(prs->secure->slabel,
            pps->printer->name) == 0) {
            ret = MDENYDEST;
            goto Return;
        }
    }
posix_mem_offset(), posix_mem_offset64()
Get the offset and length of a mapped typed memory block
Synopsis:
#include <sys/mman.h>

int posix_mem_offset( const void *restrict addr,
                      size_t len,
                      off_t *restrict off,
                      size_t *restrict contig_len,
                      int *restrict fildes );

int posix_mem_offset64( const void *restrict addr,
                        size_t len,
                        off64_t *restrict off,
                        size_t *restrict contig_len,
                        int *restrict fildes );

Arguments:

- contig_len
- A pointer to a location where the function can store either len, or the length of the largest contiguous block of typed memory that's currently mapped to the calling process starting at addr, whichever is smaller.
- fildes
- A pointer to a location where the function can store the file descriptor for the typed memory object.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The posix_mem_offset() function sets the variable pointed to by off to the offset (or location), within a typed memory object, of the memory block currently mapped at addr.
The posix_mem_offset() function uses a variable to return the descriptor that establishes the mapping containing addr. This variable is pointed to by fildes; its value is -1 when the descriptor closes after the mapping is established.
The len argument specifies the length of the block of memory you want the offset for. On return, the value pointed to by contig_len is either len, or the length of the largest contiguous block of typed memory that's currently mapped to the calling process starting at addr, whichever is smaller.
If the off and contig_len values obtained from calling posix_mem_offset() are used in a call to mmap() with a file descriptor that refers to the same memory pool as fildes (either through the same port or through a different port), the typed memory area that is mapped is exactly the same area that was mapped at addr of the process that called posix_mem_offset(). This holds provided that the file descriptor used in the mmap() call was opened without the POSIX_TYPED_MEM_ALLOCATE and POSIX_TYPED_MEM_ALLOCATE_CONTIG flags.
Returns:
- 0
- Success.
- EACCES
- The process hasn't mapped a memory object at the given address addr.
Classification:
posix_mem_offset() is POSIX 1003.1 TYM; posix_mem_offset64() is for large-file support
See also:
posix_typed_mem_get_info(), posix_typed_mem_open() | http://www.qnx.com/developers/docs/6.4.0/neutrino/lib_ref/p/posix_mem_offset.html | crawl-003 | refinedweb | 344 | 50.26 |
Rich Text Clipboard
Hi,
I am currently trying to replicate some of my Mac workflow on iOS, and the biggest hurdle I've run into is code highlighting for Keynote presentations.
I tried copying and pasting from Safari (and Chrome, Firefox, and Opera) but it would simply paste as plain text into Keynote. I was able to paste as rich text into a couple of other apps, but none that would let me then get it into Keynote as RTF.
On Mac I use Pygments to output RTF and pipe it into pbcopy, and now have Pygments up and running inside Pythonista successfully. However, I can't get it into the clipboard as RTF.
It looks like in 1.6 I could use the native Obj-C clipboard stuff, but this release isn't out yet — any suggestions for how I might accomplish this in 1.5?
I'm sorry, but there's no way to do this in 1.5.
- Webmaster4o
@dshafik That is the plan, yes. I'm not sure if there will be a cost increase for new users buying the app, but the upgrade should be free for existing users?
- Webmaster4o
Generating an image shouldn't be hard. Try rendering with PIL ImageDraw.
@JonB I've not seen that syntax before. I assume it creates a block (or some sort of scoped section) in which fp is made available, and then when the block is left it is automatically closed?
Anyhow, I do now see data in data, but I still can't get it to paste in. Were you able to do that?
I managed to get Pygments image drawing working by monkey patching the font stuff (ick!):
import clipboard
import os
import _font_cache
from objc_util import *
from pygments import highlight
from pygments.lexers import PhpLexer
from pygments.formatters import ImageFormatter
from pygments.formatters.img import FontManager
from pygments.styles import monokai

monokai.MonokaiStyle.background_color = '#000000'

def _get_nix_font_path(self, name, style):
    if style == 'bold' or style == 'italic':
        return _font_cache.get_font_path('%s %s' % (name, style.capitalize()))
    elif style == '':
        return _font_cache.get_font_path(name)
    else:
        return

FontManager._get_nix_font_path = _get_nix_font_path

''' Replace Unicode newline chars with regular ones '''
code = clipboard.get().replace(unichr(8232), os.linesep)

png = highlight(code, PhpLexer(startinline=True),
                ImageFormatter(style='monokai', font_name='Source Code Pro',
                               font_size=60, line_numbers=False, line_pad=4))

with open('highlight.png', 'w+') as fp:
    fp.write(png)

data = NSData.dataWithContentsOfFile_('highlight.png')

c = ObjCClass('UIPasteboard')
pasteBoard = c.generalPasteboard()
pasteBoard.setData_forPasteboardType_(data, 'public.png')

os.remove('highlight.png')
Resulting in an image like so:
theArchive = """<>"""  # webarchive plist content elided in the original post
thePasteBoard = ObjCClass('UIPasteboard')
thePasteBoard = thePasteBoard.generalPasteboard()
theType = "Apple Web Archive pasteboard type"
thePasteBoard.setValue_forPasteboardType_(theArchive, theType)
Chinese Solar Lipo powered PIR led lamp.
This would fit nicely into a Chinese solar LiPo-powered PIR LED lamp.
We would only need a 3.3V regulator to feed the radio and the Arduino.
This way we could have a smart, MySensors-aware, solar-powered security PIR sensor and LED garden lamp.
But it would be challenging to hack the original circuit so that it becomes MySensors-aware.
I'm willing to check the components and do additional circuit verification. But the IC has no marking. I think it is a BISS0001?
What do you guys think? I can provide additional information if somebody is interested.
@gyro Looks like the BISS0001 and this one needs 3V3, so I'm fairly certain there is a 3V3 DC regulator in the SOT-23 format on the board.
There is a SOT-23 package type chip next to C3. Can you measure the voltages over the pins? The middle one is ground, the left small pin is the input (I think), to the right of ground is the output, and the large pin on the other side is ground as well, I think (just stating from my feeble memory). Just measure DC voltage. If one side is 3.7V-4.2V, that could be from the LiPo battery; one of the other pins probably gives 3V3.
If I'm right, you could tap that pin for power. I do not know the output pin (which can drive the LEDs on) of the BISS0001, but probably one of the transistors (Q1 to Q4) is driving U3, which is what I assume to be the IC driving the power to the LEDs. But that could be the LiPo charging chip.
In any case: interesting
Time to break out the voltmeter ...
@GertSanders U3 has the label 6206A; next to C3 is 5358A.
I have measured the BISS0001 based on its specifications:
pin7 -> GND
pin8 -> 3V
pin11 -> 3V
Where is the 3V source?
battery voltage is 4.07V
@GertSanders I found similar IC specs:
id is 6206A 1521/30
where Marking Rule is
6206A
xxxx: Date Code
/xx : Output Voltage(e.g. 33=3.3V) in my case 30 means 3.0V
I have measured U3:
1 - 2 -> GND - Vin = 4V
1 - 3 -> GND - Vout = 3V ....so this will be my power source for your node
BIS0001 specifications:
The PIR (id 500BP Nicera 571) detection trigger (source signal) is connected to BISS pin 14 (high voltage level is 0.68V).
The BISS output signal should be on pin 2 - I have measured the voltage (2.4V), which I think is enough to trigger a digital input on the ATmega328?
So if I lift pin 2 up and connect it to an ATmega328 input, I could detect motion with the MySensors PIR library?
- and then if I connect another digital output pin where BISS0001 pin 2 was previously soldered, I can control the LEDs with the MySensors relay sketch?
- then I add a rule in my controller (OpenHAB): when the PIR detects motion, turn the light on for 30s, or blink if an alarm is triggered?
Please correct me / comment if I am thinking about this wrong.
The lamp should have a day/night sensor, but I don't see any LDR?
(moved discussion to a new thread)
- soif Plugin Developer last edited by soif
Really interested in your investigations, as I started building some sensors based on this box (i.e. the Weather Station project). But I had trashed the original PCB and replaced it with mine, rebuilding a charger and getting rid of the built-in PIR. But if it is hackable enough, why not keep it as an additional sensor.
So please post your results here.
A few remarks :
if there is no LDR, I guess they've simply used the solar panel to detect if there is sun
There should be something like :
Solar+ --------- Resistor ------------ BIS0001 pin 9, so the challenge would be to find which resistor it is, break it with a cutter or a drill, and pull pin 9 high (i.e. 3.3V) to make the PIR always ON.
to control the light there should be something like this:
BIS0001 pin 2 --------- Transistor (base) : remove this connection and connect pin2 to an arduino input and the transistor base to an arduino output.
You would also have to check whether the retriggering time fits your needs, and whether the sensitivity is set up well...
for the arduino software it will really simple, ie some pseudo code :
- When input pin is high send Mysensor Message to gateway
- On incoming Mysensor Message1, set output PIN to High for 30 seconds.
- On incoming Mysensor Message2, set output PIN to blink for 1 hour.
@soif your are partially right, day/night sensor is solar cell
-when solar cell is powering the battery over ic (XB5358A - see picture below), then the LEDs are disconnected (something disconnects load - LED lights),
but
additional findings:
voltage regulator (6206A) is powered from battery all the time (day and night):
- which is great that the atmega328 will have full time 3.0V power source.
for BISS0001:
pin 9 is always high (3V) - day and night (my solar cell is 5V power supply ON-day, OFF-night)
so PIR is powered all the time and when triggered its source signal is ON for around ~25s
when PIR is triggered (0,6V), on pin 2 is output ~2.4V for the trigger interval time (25s)
so i will try to break pin 2 connection to arduino input, and then connect (transistor base) to an arduino output.
what i still don't know is what drives LEDs:
- at day (solar cell power is high -5V), LEDs are off.
- at night (solar cell power is low- 0V): LEDs work at 2 phases
- when PIR is not active - LEDs ared dimmed (illuminate with little intensity)
- when PIR is active (motion detected)LEDs are verry bright for the time of the trigger interwal (25s)
Questions are:
what disconnects (control) the load (LEDs) when battery is being charged from solar cell?
is it safe to connect atmega328 to regulator (6206A), i guess it is comparable to BISS0001 & PIR power requirements
so additional very small load (like @GertSanders - minimal 2 switch node ) should not be a problem
what controlls the trigger interval (it could be shorter than 25s), because after connected arduino the
output pin will configure the LED HIGH intensity interval?
I think "mysensors" aware chinese LED solar lamp could work with a little effort, prototype is working now ( thanks to the forum members for help)
What needs to be done:
disconnect BISS0001 pin 2 from circuit (i have cut circuit wire on the back of PCB)
connect BISS0001 pin 2 to input pin 3 of arduino pro mini (3,3 V model)
connect resistor R9 (smd 102 - 1kohm) to output pin 4 of arduino which drives the transistor(Y1) base
i think that the easiest way would be to remove resister R9 from circuit. Then connect left part of resistor to input pin of arduino,
then wire to arduino output (pin 4) resistor 1K and then connect to right part of removed transistor.
Now i can use the solar light as motion detection and control LED lights independantly.
- motion detection works all day,
Received PUBLISH (d0, q0, r0, m0, 'mygateway1-out/11/12/1/0/16', ... (1 bytes)) 0 Received PUBLISH (d0, q0, r0, m0, 'mygateway1-out/11/12/1/0/16', ... (1 bytes)) 1
but i can control the lights
root@kali:~# mosquitto_pub -d -h 192.168.1.115 -t "mygateway1-in/11/1/1/0/2" -m "1" Received CONNACK Sending PUBLISH (d0, q0, r0, m1, 'mygateway1-in/11/1/1/0/2', ... (1 bytes)) root@kali:~# mosquitto_pub -d -h 192.168.1.115 -t "mygateway1-in/11/1/1/0/2" -m "0"
only when they are not charging (someone would need to figure out how to connect/disconnect
the solar charging - circuit now automatically disconnects the LEDs when there is voltage on solar cell and the battery is charging)
usage example:
with many solar lights on the garden if motion is detected on one we can light on all of them,
they can all blink as an alarm, (i would need help with code)
when we need light we can turn lights on from mobile app
or any other usage (control something that can be controlled with the power of lipo battery )
I think it would be good idea to meassure battery voltage, what divider should i use to meassure Lipo battery voltage, when arduino is powered from 3.0V battery (solar lamp onboard regulator is 3.0V), that we dont drain battery to much
i have combined motion detection and relay scetch:
the code is below (its not optimized, but it works):
// Enable debug prints #define MY_DEBUG #define MY_NODE_ID 11 // 12 // Id of the sensor child boolean lastMotion = false; // Initialize motion message MyMessage msg(CHILD_ID, V_TRIPPED); setup() { pir", ()); } }
very nice to see that good progress was made
- soif Plugin Developer last edited by soif
@gyro
Keep up the good work debugging this chinese box !
This box is a really good value for money box to build mysensors projects ( at least box + lipo + solar panel).
If your investigations let us to also use the built in Charger / PIR / LEDS , it would just be amazing.
Please keep us in touch !
@soif , @GertSanders
some progress was made, but i need an hardware advice.
This is the part of schematic that drives the LEDs.
First the battery is not connected directly (as here in schematic) but over protection IC logic.
Question 1:
Current circuit works as follows:
- When solar cell voltage is higher than 0.7V , LED is OFF (sort off day/night sensor).
- When solar voltage is lower than 0,7V, LED is shining with low intensity
What is the the correct way to connect arduino pin D1 to take control over above described default behavior? (disconnect the line at SW4 and remove R9 -1Mohm)?
Question 2:
Which voltage measurement technique (internal reference or reference to regulated Vcc voltage) should i use to meassure Lipo voltage?
What are correct/recommended resistor value for optimal battery utilization.
Should this reference be used:
Would make sense to measure also solar cell voltage?
Question 3:
Is arduino pin D2 connect properly? It control two level LED intensity:
pin HIGH - LED glows with HIGH intensity (between led+ and led- is ~4V) pin LOW LED glows with LOW intensity (between led+ and led- is ~2,5V)
Thanks for help and recommendations
I have asked the same here, but no perfect answer jet:
- GertSanders Hardware Contributor last edited by GertSanders
Since you are using a atmega328p, it does make sense to measure the battery level and solar power level.
If you make a node which does NOT sleep all the time, then monitoring both voltage levels is the starting point to make decisions.
I'm guessing you want to add a node using the battery, so a node that sleeps until something needs to be done.
In that case we need a trigger when the light is bright enough to switch off the leds, and dark enough to switch them on again.
The trigger to wake up from sleep should be a voltage change on pin D2 or D3, and the voltage difference needs to be from below the lowest "HIGH" threshold to above the highest "LOW" threshold. Check the specification for the atmega328 you use, as this depends on the working voltage of the processor.
The question I ask myself is what is the lowest voltage you get from the solar cell ? Looking at the circuit, they use the voltage of the solar cell to open the gate of a transistor, which then pulls the emitter voltage of a transistor to ground, which in turn switches off a second transistor with is feeding the battery current to the LEDs.
Your question of the values of R2 and R11 is actually how to measure voltages over a resistor divider? You can choose any value of resistors, but their ratio is what is important.
To calculate the resistor values:
To measure at a maximum of 5V on an analog pin:
R11 = R2 * (1 - voltage-ratio) / voltage-ratio
To go from 6V to 5V the ratio = 5V / 6V = 0.83
Take R2 as 1M Ohm, then R11 = 1M * (1 - 0.83) / 0.83 = 205K => a common value is 220K
To check: voltage over R2 (which is the input voltage on the analog pin A0)
V(A0) = Vinput * R2 / (R1 + R2) = 6V * 1M / (1M+220K) = 4.9 V
@GertSanders I think i managed to successfully connect arduino with solar lamp.
My prototype is working, and has the following functions:
- Measure battery voltage (when charging it is alway 100% - makes sense)
- Measure solar voltage (can be omitted - but resistor should be there for a transistor to work properly)
- Solar power day/night trigger with transistor as a switch (can be used wake up arduino from sleep)
- PIR sensor (can be used wake up arduino from sleep)
- Lights on/off dimmed brightness
- Lights on/off high brightness ( original resistor R9 -1k was replaced with 4.7k - i think it draws to much current and sometimes hangs arduino)
I will post the code later, but every part works with default "mysensor" examples
How to connect and how to add elements see picture:
higher math for me...
Great...
Can you use the motion seperate from the liight?
And can u use the light with a switch option? [ i will turn on the light when there is motion.. ]
Or is the light switching only on lux?
@GertSanders thanks, you have motivated my research
@Dylano the purpose of this project was exactly what you have asked.
The lamp is now mysensors aware.
Every task can be operated separately.
When there is dark, the trigger is send, when the sun shines, the trigger is send. (transistor as switch). In scetch i use it as magnet switch part of code. When trigger is received (can wake up arduino), than you decide with controller what you want to do .
PIR acts as classic PIR sensor and can also be used as trigger. (can wake up arduino), than you decide with controller what you want to do.
The Lamp have two phases and can be controlled with controller (i control it over mqtt for now)
phase one is dimmed light (relay 1)
phase two is high bright light (relay 2)
Below is code that works for now. I wil improve it in next few days
// Enable debug prints #define MY_DEBUG #define MY_NODE_ID 11 // Enable and select radio type attached #define MY_RADIO_NRF24 //#define MY_RADIO_RFM69 #include <SPI.h> #include <MySensor.h> #include <Bounce2.h> //unsigned long SLEEP_TIME = 120000; // Sleep time between reports (in milliseconds) #define DIGITAL_INPUT_SENSOR 2 // The digital input you attached your motion sensor. (Only 2 and 3 generates interrupt!) //#define INTERRUPT DIGITAL_INPUT_SENSOR-2 // Usually the interrupt = pin -2 (on uno/nano anyway) #define CHILD_ID 12 // Id of the sensor child boolean lastMotion = false; // Initialize motion message - start MyMessage msg(CHILD_ID, V_TRIPPED); //trigger solar power day on/off -start #define CHILD_ID_SW 5 #define BUTTON_PIN 5 // Arduino Digital I/O pin for button/reed switch Bounce debouncer = Bounce(); int oldValue = -1; // Change to V_LIGHT if you use S_LIGHT in presentation below MyMessage SolarMsg(CHILD_ID_SW, V_TRIPPED); // trigger solar - end #define RELAY_1 3 // Arduino Digital I/O pin number for first relay (second on pin+1 etc) #define NUMBER_OF_RELAYS 2 // Total number of attached relays #define RELAY_ON 1 // GPIO value to write to turn on attached relay #define RELAY_OFF 0 // GPIO value to write to turn off attached relay void setup() { //trigger solar power day on/off - start // Setup the button pinMode(BUTTON_PIN, INPUT); // Activate internal pull-up digitalWrite(BUTTON_PIN, HIGH); // After setting up the button, setup debouncer debouncer.attach(BUTTON_PIN); debouncer.interval(5); //trigger solar - end light", ); // Register binary input sensor to gw (they will be created as child devices) // You can use S_DOOR, S_MOTION or S_LIGHT here depending on your usage. // If S_LIGHT is used, remember to update variable type you send in. See "msg" above. 
present(CHILD_ID_SW, S_DOOR); } }); //trigger solar power day on/off - start debouncer.update(); // Get the update value int value = debouncer.read(); if (value != oldValue) { // Send in the new value send(SolarMsg.set(value == HIGH ? 1 : 0)); oldValue = value; //trigger solar power day on/off - stop } }()); } }```
mmm i ordered 1...
So i hope i gonna ix this...
It is looking higher mathematics
Will see... when i have time..
Thanks for the sketch!!!
Give it a try in a uno ...
guess what...
I do have a:
Exact the same board in of this light...
I think i going to try the make this work...
Only i do not understand exact the wiring of your example.
Will you please make a list of exact stuff to buy.
And make some more pictures, from the good places..
And a thing that i see.
You use D2 from the arduino.
Is that not a one for the radio?
@siklosi Resistor is used as transisistor base resistor. This was the best way to connect i could think off. But you need two output pins - 2 transistors are controlled.
first PIN - lights ON/OFF
second PIN - ligths bright ON/OFF
@Dylano Great to hear that the light has the same board...please post a picture.
My light is still protoype connected together on protoboard - connected to arduino pro mini - i can post only a picture connected elements on protoboard for now.
I am trying to put all together in openhab now.
I will try to make a list of exact elements.
I have free D2 pin on arduino pro mini.
As the easiest way to disable dimmed mode this lamp?
Got myself a similar PIR LED lamp. The functionality seems like it is the same but the circuit board is different.
@korttoma
Nice to see some interest in smart solar lamps
It looks like an updated version (at least i like it more from your pictures). Could you post an order link.
I think this one should be even easier to intercept with "mysensors", beacause i see only two transistors and circuit connectios are clearly visible.
so lets try to understand the circuit:
-U3 is probably battery protection circuit
- Q1 i think is voltage regulator HT33 - to power arduino wih 3.3V (meassure voltage)
- take a photo of PIR sensor from front side (is there any ic elements)?
- U1 - i would guess PIR sensor IC logic - leg 6 (count from dot on IC) should be output (meassure voltage - high 3,3 V when motion detected):
-if it is output from PIR just remove R3 and IC ouput goes over resistor to you arduino input.
- the output goes over resistor to Q2
-whats is left - you have to figure out what drives resistor Q1(J3y) :
- i gues its driven by solar cell, than collector is connected to resistor R10, resesitor is than connected over board to the other side.......
I guess you meant U4 is the 3.3V regulator. I measured it to 3.3V and HT33 should be a regulator. Q1 and Q2 I think are transistors. U3 handles the solar charging. U1 is for the PIR but unfortunately there is no text on the chip. I will get you the link tomorrow but it is quite easy to find on aliexpress.
@korttoma
sorry my typo: you are correct
- U4 - HT33 is voltage regulator
- yes Q1 and Q2 are transistors
- U1 - PIR out test - measure voltage between PIN 6 and R3 when PIR is OF and ON. I think this drives Q2 (high brightness / low brightnes)
also take picture of other side of circuit board, beause i think Q1 is conected to R10 and
then further on the other side
Here is the link to the one I got but it seems like the prize is allot higher now since I paid 23,84$ including shipping ->
Also it took like 10 weeks for it to arrive so maybe another seller would be a better fit.
There is not much on the other side of the board, just a status LED, a button to turn it on and the PIR.
@korttoma
Are you sure that this light has two modes of operation or only one: when a person is present is activated for 15s and then switched off
I'm sorry I did not test the device I just read the manual so I think that yes it has 2 modes for LED brightness.
@korttoma
Don't be sorry.I am not an circuits expert, I just try help you figure things out.
You wil have to test this lamp a little bit.
it looks that Q1 drives U1 active/not active when there is sun, but how/what turns on dimm lights in dark?
Did you measure U1 pin 6 when pir active/not active?
In the application I will use this one I actually do not care so much for the built in PIR and LED, I just see it as a smart enclosure with a solar battery power-supply built in. I will tap in to the 3.3V regulator to power a pro mini that has an external PIR and (LDR) Light sensor. In addition to this I would like to add voltage measurement for the battery. BTW, did you komplette your sketch? I would like to copy the battery voltage sensor part. I will look att figuring out the circuit later, it does not look to complicated.
@korttoma
I did try some battery measurement variants. The following code works best for me. I suggest that you first try the following sketch.
- measure the voltage with voltmeter on VCC pin and correct #define VREF value so it will be a close as possible to measured value before you integrate into case specific code
// define values for the battery measurement #define R1 1e6 #define R2 330e3 #define VMIN 2.8 #define VMAX 4.2 #define ADC_PRECISION 1023 #define VREF 1.13 int oldBatteryPcnt = 0; int batteryVoltage = 0; int BATTERY_SENSE_PIN = 0; int val = 0; void setup() { // use the 1.1 V internal reference #if defined(__AVR_ATmega2560__) analogReference(INTERNAL1V1); #else analogReference(INTERNAL); #endif Serial.begin(9600); } void loop() { //float batteryPcnt = getBatteryPercentage(); //val = analogRead(BATTERY_SENSE_PIN); //Serial.println(batteryVoltage); float batteryVoltage = getBatteryPercentage(); Serial.println(batteryVoltage); float batteryV= batteryVoltage; float batteryVmap = fabs(fmap(batteryV, 2.5, 4.2, 0.0, 1000.0)); int batteryPcnt = batteryVmap / 10; if (batteryPcnt >= 100) { batteryPcnt = 99; } Serial.print("Battery voltage: "); Serial.print(batteryPcnt); Serial.println(" %"); delay(2000); /*if (oldBatteryPcnt != batteryPcnt) { // Power up radio after sleep //gw.sendBatteryLevel(batteryPcnt); oldBatteryPcnt = batteryPcnt; }*/ // totally random test values } float getBatteryPercentage() { // read analog pin value int inputValue = analogRead(BATTERY_SENSE_PIN); // calculate the max possible value and therefore the range and steps float voltageDividerFactor = (R1 + R2) / R2; float maxValue = voltageDividerFactor * VREF; float voltsPerBit = maxValue / ADC_PRECISION; float batteryVoltage = voltsPerBit * inputValue; float batteryPercentage = ((batteryVoltage-VMIN)/(VMAX-VMIN))*100; //int batteryPercentage = map(batteryVoltage, 0, maxValue, 0, 100); //return batteryPercentage; return batteryVoltage; } float fmap(float x, float in_min, float in_max, float out_min, float out_max) { return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min; }
int BATTERY_SENSE_PIN = 0;
This means that you use A0 for measuring right?
#define R1 1e6
#define R2 330e3
And you have used 1Mohm and 330kohm for the resistors?
Thanks for the code and the additional info. Measuring the battery seems to work as expected but using the Internal reference seems to mess with the light level measurement LDR. Is it possible to use indifferent reference for different analog inputs? Or do I need to recalculate my LDR voltage divider?
I managed to squeeze in a pro mini and radio that is now powered from the 3.3V available from the built in circuit board. It has battery voltage measurement, light level and a external PIR.
@korttoma great that you maneged to connect it lamp..
I don't know if analog reference is than set for all analog interfaces..
Just one more thing you should adjust:
I have figured out that I need to raise alarm for lower threshold for battery to higher than 3V (now is 2.5V!) because regulator voltage drop is ~250mV, and mini pro drops out at around 2.8V..
But the whole concept now works fairly good
- ranseyer Hardware Contributor last edited by ranseyer
@ranseyer
This one looks is a little complicated. I think you should wait for china version
Take a picture of other side of circuit also.
- ranseyer Hardware Contributor last edited by
Thanks for the Answer. Mine was produced in China too...
Her the picture of the backside:
No connections, but a board name: YYGF-601L.
I would be happy for any hints... | https://forum.mysensors.org/topic/3107/chinese-solar-lipo-powered-pir-led-lamp/42?lang=en-US | CC-MAIN-2020-40 | refinedweb | 4,076 | 69.31 |
quickly.prompts.checklist()
When the developer clicks the "delete keys" button, I wanted to present them with a list of keys they could choose from. And then have all of those get deleted. This seemed like the kind of thing that I'd want to use in other programs, so I decided to solve this problem generically by adding it to quickly.prompts.
I accomplished this by deriving from quickly.prompts.Prompt, and also creating a helper function. CheckListPrompt works as you would expect for prompt. You set it up by passing in some configuration info, including a dictionary of strings as keys, which will be labels for the checkboxes, and a bool value to determine if the box is checked by default.
You get back a response and val. The val is a dictionary of keys again, with bools for whether the checkboxes are active or not.
So to use the CheckListBox, I just pass in a dictionary of the keys for the CouchGrid, and then see if any were selecct:
Hair Raising MungingHair Raising Munging
val = {}
for k in self.grid.keys:
val[k] = False
response, val = checklist(title, message, val)
if response == gtk.RESPONSE_OK:
#do stuff
Since "do stuff" is pretty destructive, I use a quickly.prompts.yes_no to confirm that the users wants to blow away all the data and screw up their database. Assuming they do want to delete the keys and values in the desktopcouch database, it turns out to be *not* easy to do the deletion without reading way into CouchGrid. The issue here is the couchdb reserves anything staring with a "_" for itself. But DictionaryGrid uses "__" as a convention to determine that a key should be hidden in the grid by default. So as a result of this CouchGrid munges _id and _rev and record_type before it reads to and from the database.
The second troublesome part was dealing with desktopcouch. It turns out that you can't just delete a key from a record. You have a delete the whole record and then create a new record without that key. so as a result the code deletes and recreates each and every row.
I really think this code belongs inside CouchGrid:
Who would ever be able to figure out to do all this?Who would ever be able to figure out to do all this?
def delete_keys_from_store(self, model, path, iter, keys_to_delete):
for k in keys_to_delete:
d = model.get_value(iter,len(self.grid.keys))
if k in d:
del(d[k])
if '__desktopcouch_id' in d:
keys = d.keys()
for k in keys:
if k.startswith("__desktopcouch"):
dc_key = k.split("__desktopcouch")[1]
d[dc_key] = d[k]
del(d[k])
if k == "__record_type":
d["record_type"] = d["__record_type"]
del(d["__record_type"])
self.database.delete_record(d['_id'])
del(d["_rev"])
del(d["_id"])
self.database.put_record(Record(d))
Refresh
So after this the refresh function was trivial. Just tell the CouchGrid to reset, and then recreate the grid:
desktopcouch Editordesktopcouch Editor
def refresh(self, widget, data=None):
self.grid._refresh_treeview()
self.remove(self.filt)
self.filt = GridFilter(self.grid)
self.pack_start(self.filt, False, False)
self.reorder_child(self.filt,1)
self.filt.show()
So now with adding a removing records and keys, along with freshing, I have a functional desktopcouch editor. This tool has already proved a bit useful in getting a peak into certain database. However, I can't actually create new record types yet. Maybe tomorrow?
Heh, I was trying to do this exact thing (deleting records with a UI) this weekend. Do you think it would be useful to include such convenience function in quickly-widgets?
@LaserJock:
I logged a few bugs to move some functions into CouchGrid:
Cheers, Rick | http://theravingrick.blogspot.com/2010/04/todays-slip-cover-features-deleting.html | CC-MAIN-2014-49 | refinedweb | 620 | 65.32 |
.
Network Policies give you a way to declaratively configure which pods are allowed to connect to each other. These policies can be detailed: you canspecify which namespaces are allowed to communicate, or more specifically you can choose which port numbers to enforce each policy on.
You cannot enforce policies for outgoing (egress) traffic from pods using this feature today. It’s on the roadmap for Kubernetes 1.8.
In the meanwhile, the Istio open source project is an alternative that supports egress policies and much more, with native Kubernetes support.
Why are Network Policies cool
Network Policies are fancy way of saying ACLs (access control lists) used in computing for many decades. This is Kubernetes’ way of doing ACLs between pods. Just like any other Kubernetes resource, Network Policies are configured via declarative manifests. They are part of your application and you can revise them in your source repository and deploy them to Kubernetes along with your applications.
Network Policies are applied in near real-time. If you have open connections between pods, applying a Network Policy that would prevent that connection will cause the connections will be terminated immediately. This near real-time gain comes with a small performance penalty on the networking, read this benchmark to learn more.
Example Use Cases
Below is a brief list of common use cases for Network Policies. You can find more use case examples with sample manifests at the kubernetes-networkpolicy-tutorial on GitHub.
How is Network Policy enforced
The Network Policy implementation is not a Kubernetes core functionality. Although you can submit a NetworkPolicy object to the Kubernetes master, if your network plugin does not implement network policy, it will not be enforced.
Please see this page for examples of network plugins that support network policy. Some examples of network plugins supporting policies are Calico and Weave Net.
Google Container Engine (GKE) provides alpha support for Network Policies by pre-installing Calico network plugin in the cluster for you.
Network Policies apply to connections, not network packets. Note that connections allow bi-directional transfer of network packets. For example, if Pod A can connect to Pod B, Pod B can reply to Pod A back on the same connection. This doesn’t mean Pod B can initiate connections to Pod A.
Anatomy of a NetworkPolicy
NetworkPolicy is just another object in the Kubernetes API. You can create many policies for a cluster. A NetworkPolicy has two main parts:
- Target pods: Which pods should have their ingress (incoming) network connections enforced by the policy? These pods are selected by their label.
- Ingress rules: Which pods can connect to the target pods? These pods are also selected by their labels, or by their namespace.
Here is a more concrete example of a NetworkPolicy manifest:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: api-allow
spec:
podSelector:
matchLabels:
app: bookstore
role: api
ingress:
- from:
- podSelector:
matchLabels:
app: bookstore
- from:
- podSelector:
matchLabels:
app: inventory
This sample policy allows pods with
app=bookstore or
app=inventory labels to connect to the pods with labels
app=bookstore and
role=api. You can read this as “give microservices of bookstore application access to the bookstore API”.
How are Network Policies evaluated
Although the design document and the API reference for Network Policies may seem complicated, I managed to break it down to a couple of simple rules:
- If a NetworkPolicy selects a pod, traffic destined to that pod will be restricted.
- If there is no NetworkPolicy defined for a pod, all pods in all namespaces can connect to that pod. This means that by default with no Network Policy defined for a specific pod there is an implicit “allow all”.
- If traffic to Pod A is restricted and Pod B needs to connect to Pod A, there should be at least a NetworkPolicy selecting Pod A, that has an ingress rule selecting Pod B.
Things get a bit complicated when cross-namespace networking gets involved. In a nutshell, here is how it works:
- Network Policies can enforce rules only for connections to Pods that are in the same namespace as the NetworkPolicy is deployed in.
podSelectorof an ingress rule can only select pods in the same namespace the NetworkPolicy is deployed in.
- If Pod A needs to connect to Pod B in another namespace and network to Pod B is enforced, there needs to be a policy in Pod B that has a
namespaceSelectorthat selects the Pod A.
Are Network Policies Real Security?
Network Policies restrict pod-to-pod networking, which is part of securing your cluster traffic and applications. They are not firewalls that perform deep packet inspection.
You should not solely rely on Network Policies for securing traffic between pods in your cluster. Methods such as TLS (transport layer security) with mutual authentication give you ability to encrypt the traffic and authenticate between microservices.
Take a look at Google Cloud Security Whitepaper (emphasis mine):. […]
Data is vulnerable to unauthorized access as it travels across the Internet or within networks. […] The Google Front End (GFE) servers mentioned previously support strong encryption protocols such as TLS to secure the connections between customer devices and Google’s web services and APIs.
As I said earlier, service mesh projects like Istio and linkerd offer promising advancements in this area. For example, Istio can encrypt traffic between your microservices using TLS and enforce network policies transparently without changing your application code.
Learn more
If you are interested in trying out Network Policies, the easiest way to get started would be creating a GKE cluster. You can also read:
Thanks to Matthew DeLio and Daniel Nardo for reviewing drafts of this article.
Originally published at ahmet.im. If you liked this post, you can follow me on Twitter or subscribe by email to my blog (no more than an article/month).
Learned something? Click “clap”👏 to spread the word. | https://medium.com/google-cloud/securing-kubernetes-cluster-networking-cec708b82510 | CC-MAIN-2022-40 | refinedweb | 978 | 53.81 |
What IDE did you use?
I'm using Anaconda 3 on Windows.
I downloaded it to a directory under Anaconda (IbPy-Master).
On the command line I go to that directory and run the setup. It seems OK. I get (username changed):
`Writing C:\Users\myuser\AppData\Local\Continuum\anaconda3\Lib\site-packages\IbPy2-0.8.0-py3.6.egg-info`
But in the code I get: ibapi module not found. Help!
Thank you very much for the tutorial! It certainly enlightens me a lot, although there are still some (minor) issues. When I executed your code, I got the following output:
So, I did get the starting message and the unix time, but how come there are error messages in between? I checked the error/warning codes and still don't understand what these errors mean.
Additionally, I also got this error at the very end:
Thank you again for writing this great post. Any help would be greatly appreciated!
Relax - these "errors" are perfectly normal.
IB error id -1 errorcode 2106 string HMDS data farm connection is OK:ilhmds
This is literally the system saying "I am okay". The code doesn't distinguish between errors and status messages like this. It can easily be modified to do so by adding status codes such as 2106 to a whitelist of ignored messages.
AttributeError: 'NoneType' object has no attribute 'recv'
AttributeError: 'NoneType' object has no attribute 'isConnected'
This is a known bug with the current IB API software. Since you can do everything you want successfully, I'd suggest adding a try: except: clause to the end of your code to deal with the connection problem.
The other day I was listening to a webinar from IB about the API. There it was mentioned by one of IB's developers that only "IB error id" values that are positive are really error messages. If a negative error id is supplied, it means it is a status message. Thus this error id can be used for filtering.
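Putting the whitelist idea and the negative-id rule together, a minimal sketch of such a filter might look like this. Plain Python: the (req_id, error_code, error_string) signature follows the official ibapi error callback, and the whitelisted codes are illustrative, not exhaustive.

```python
# Classify IB "error" callbacks as real errors vs. status messages.
# Assumption: callbacks arrive as (req_id, error_code, error_string),
# as in the ibapi EWrapper.error signature. Whitelist is illustrative.
STATUS_CODES = {2104, 2106, 2158}  # "data farm connection is OK" notices

def is_status_message(req_id, error_code):
    # IB uses a negative id (-1) for system notices not tied to a request
    return req_id < 0 or error_code in STATUS_CODES

def handle_error(req_id, error_code, error_string):
    if is_status_message(req_id, error_code):
        return "STATUS: %s" % error_string
    return "ERROR id %d code %d: %s" % (req_id, error_code, error_string)

print(handle_error(-1, 2106, "HMDS data farm connection is OK:ilhmds"))
```

In the wrapper class from the post, the same logic would go inside the overridden error() method, returning early for status messages instead of logging them as errors.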
hi Rob,
You mentioned adding a try: except: clause to the end of the code to fix the "AttributeError: 'NoneType' object has no attribute" error. I am not sure what I should include in the try: except:. I added the code below:
if __name__ == '__main__':
##
## Check that the port is the same as on the Gateway
## ipaddress is 127.0.0.1 if one same machine, clientid is arbitrary
app = TestApp("127.0.0.1", 4002, 10)
current_time = app.speaking_clock()
print(current_time)
try:
app.disconnect()
except:
print("bug in IB API to disconnect")
But I still get the same message.
Thanks
Yes, try doesn't always work with threaded code.
Thank you very much for the blog! It certainly enlightened me. I have 2 lingering issues running your sample code. 1:
What do these error/warning messages (between the starting message and the desired unix time) mean?
2:'
I got the above error message at the end of output. What might be causing this. Thank you again for writing this great post. Any help will be much appreciated.
Hi Rob,
Firstly, thank you for your very interesting post. I have one question regarding the installation of the IB API. I do not understand the significance of executing the setup.py program. As an UBUNTU user, I followed the installation orders on the IB site (which do not include executing setup.py) and have now all the API code in the IBJts directory. Am I missing anything?
Thanks, Nathan
When you come to run python and import the API module you might find that your python cannot see it. One of the things setup.py does is ensure that the module is copied to a directory that is in the python search path (something like /usr/local/lib/python3.5/dist-packages/...). Alternatively you can manually add the IBJts directory to your pythonpath.
Setup.py also ensure dependencies are installed but that is probably less problematic in this case.
Hi Rob,
Thanks for the nice article.
I am trying to explore TWS Python API as automated trading tool. I have following concerns that I couldn't figure out from API reference guide or any other web resources. Your thoughts could be very helpful and appreciated. At the moment, I am trying to get 5-min moving average of a set of stocks.
1. I felt TWS API does not provide any studies. So, I have to calculate those from historical price. How can I get historical price of a set of stocks in regular basis? Should I send historical data request in a loop after certain interval? or there is any way to setup the API server so that it will stream it? What about multiple contract? Should I initiate data request for each securities separately?
Thanks a lot.
Wali
Hi Rob,
Thanks for the nice post.
I am trying to get intra day 10-min moving average of a set of securities from TWS API. I have following concerns. Your thoughts will be very helpful and appreciated.
How to get historical data of a set of securities in a continuous stream? Should I issue the historical data request for each securities separately in a loop with a time sleep interval? Or I can setup the server once to get the information continuously until I cancel?
I felt TWS API doesn't provide any studies. Should we calculate all studies in the TestApp even though it is available in TWS?
Thanks
Yes, you have to use a repeating loop if you want to place a regular data request for historical data. And yes, you have to place a request for each security separately.
Hi Rob,
Thanks for your post. I have tried all steps mentioned, but when I run the gist example, it looks like ibapi is not installed.
File "E:/scratchthis.py", line 21, in
from ibapi.wrapper import EWrapper
ModuleNotFoundError: No module named 'ibapi'
I don't know what to suggest - if you've run setup.py the module should be visible. Is there an error message when you run setup.py?
i think i found the solution. I just copied ibapi folder from /IBJts/source/pythonclient into the working directory. Thanks
Hi again Rob,
I am not sure what the issue is but when I run "IBAPIpythonexample1" I get "Exception in thread Thread-7" error, however when I run "scratchthis" code it runs fine and places the order. Thanks
Hi Rob,
I was thinking of using a 3rd party package like IBridgePy until I found your post - very helpful!
I tried to run your sample code "IBAPIpythonexample1" but got the following error: AttributeError: 'TestApp' object has no attribute '_my_errors'
Have you run into anything like this in the past by any chance?
Thanks a lot,
Nicolas
Can you use the 'contact me' box below. I'll then reply and you can email me the full stack trace.
Hi Rob,
I am having the same issue. Do you have a resolution to this?
Thanks in advance,
Ed
Swap lines 117 and 124 (as numbered in the gist) and it should give you a more memorable error.
Nicolas error was because he didn't have the same ports in the API and code.
Thanks Rob. I should have investigated more. I really appreciate it!!!!
Hi Rob,
Thank you for your great introduction to IBAPI! As a beginner in python this post really saved me a lot of pain.
There's one question I wish to ask on the use of threading on the above sample code:
thread = Thread(target = self.run)
thread.start()
setattr(self, "_thread", thread)
Could I know why's threading needed in this program? I tried deleting the above lines of code and the program returns:
Getting the time from the server...
Exceeded maximum wait for wrapper to respond
None
Is threading required in general when using the IBAPI?
Many thanks!!!
Vincent
Yes, threading is needed, because the server communicates on an aysnch basis with the client.
Hi Rob,
Thank you for your quick reply it's really encouraging ...I have been learning Python for two months now and it is my first programming language so please forgive me if I have asked a dumb question.
On the threading codes you implemented in the program would you share more on the logic behind the codes? I have read a few tutorials online on threading but none of them resembles the codes you wrote... Also, I have read the sample codes from IBAPI as well and it uses the methods app.run() and self.done = true to start and end the thread instead; could I know what's the difference between the two approaches?
At last, on the line of code:
time_storage=self.wrapper.init_time()
I wish to ask why self.wrapper is necessary there? Is it because the data is pulled out from the queue object? I read the next example on fetching historical data and saw that in the code line:
contract_details_queue = finishableQueue(self.init_contractdetails(reqId))
self.wrapper was not used. What makes the difference between the two?
Again thank you for your really kind help on answering my amateur questions!
Vincent
Hi Vincent. It's important for me to say that I'm not an expert on threading but after reading the relevant chapters in this book () and the messages on the forum where there are some real experts, I managed to write something which works and is relatively simple.
You're actually right the .wrapper isn't needed (legacy of the previous IB API I was using): time_storage = self.init_time() should work (try it!)
But to make things clearer as to what is going on:
self.wrapper.init_time()
This creates a queue which the server will dump the result into. This is a 'one time only' queue; we expect to get one result and then we are done.
For the second example (historical prices) we're going to get more than one result, and we're going to be told by the server when we have all the data we need, so I created this new kind of object finishableQueue to handle that in a more elegant way.
self.init_historicprices(tickerid) is the actual queue and then the other class sits around it to make it easier to gather multiple items and handle the time out conditions.
Hi. Found this amazing python package which makes the use of the IB API way easier for less proficient python programmers. IB_INSYNC. check it out. the author is very responsive on GitHub.
Thanks Steffen. Heres the link
The author is also active on the TWS API user group at groups.io
Can run and get result. But got the following error message:
Exception in thread Thread-7:
Traceback (most recent call last):
File "C:\Program Files\Anaconda3\envs\pyqt5\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Program Files\Anaconda3\envs\pyqt5\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Anaconda3\envs\pyqt5\lib\site-packages\ibapi-9.73.2-py3.5.egg\ibapi\client.py", line 248, in run
self.disconnect()
File "C:\Program Files\Anaconda3\envs\pyqt5\lib\site-packages\ibapi-9.73.2-py3.5.egg\ibapi\client.py", line 195, in disconnect
self.conn.disconnect()
File "C:\Program Files\Anaconda3\envs\pyqt5\lib\site-packages\ibapi-9.73.2-py3.5.egg\ibapi\connection.py", line 55, in disconnect
self.socket.close()
AttributeError: 'NoneType' object has no attribute 'close'
It appears the connection is closed twice. And at the second time, the socket is None and produce this error. I wonder how to properly close the connection when a thread is involved.
This is a common problem but it doesn't affect the functionality.
If your TWS/Gateway is refusing connections you get an error during the self.connect(...) in TestApp, but the error Queue has not been created yet so you crash. To fix it, move the self.init_error() to the line before self.connect().
I'm fairly happy sending orders back and forth.. exactly once until the connection drops and the client can connect anymore to this clientid. Seems to be an error w/ orderStatus()
File "/home/ubuntu/IBJts/source/pythonclient/ibapi/decoder.py", line 133, in processOrderStatusMsg
avgFillPrice, permId, parentId, lastFillPrice, clientId, whyHeld, mktCapPrice)
TypeError: orderStatus() takes 11 positional arguments but 12 were given | https://qoppac.blogspot.com/2017/03/interactive-brokers-native-python-api.html | CC-MAIN-2018-09 | refinedweb | 2,056 | 67.45 |
The method
re.compile(pattern) returns a regular expression object from the
pattern that provides basic regex methods such as
pattern.search(string),
pattern.match(string), and
pattern.findall(string). The explicit two-step approach of (1) compiling and (2) searching the pattern is more efficient than calling, say,
search(pattern, string) at once, if you match the same pattern multiple times because it avoids redundant compilations of the same pattern.
Why have regular expressions survived seven decades of technological disruption? Because coders who understand regular expressions have a massive advantage when working with textual data. They can write in a single line of code what takes others dozens!
This article is all about the
re.compile(pattern) method of Python’s
re library. Before we dive into
re.compile(), let’s get an overview of the four related methods you must understand:
- The
findall(pattern, string)method returns a list of string matches. Read more in our blog tutorial.
- The
search(pattern, string)method returns a match object of the first match. Read more in our blog tutorial.
- The
match(pattern, string)method returns a match object if the regex matches at the beginning of the string. Read more in our blog tutorial.
- The
fullmatch(pattern, string)method returns a match object if the regex matches the whole string. Read more in our blog tutorial.
Related article: Python Regex Superpower – The Ultimate Guide
Equipped with this quick overview of the most critical regex methods, let’s answer the following question:
How Does re.compile() Work in Python?
The
re.compile(pattern) method returns a regular expression object. You then use the object to call important regex methods such as
search(string),
match(string),
fullmatch(string), and
findall(string).
In short: You compile the pattern first. You search the pattern in a string second.
This two-step approach is more efficient than calling, say,
search(pattern, string) at once. That is, IF you call the
search() method multiple times on the same pattern. Why? Because you can reuse the compiled pattern multiple times.
Here’s an example:
import re # These two lines ... regex = re.compile('Py...n') match = regex.search('Python is great') # ... are equivalent to ... match = re.search('Py...n', 'Python is great')
In both instances, the match variable contains the following match object:
<re.Match object; span=(0, 6),
But in the first case, we can find the pattern not only in the string
'Python is great‘ but also in other strings—without any redundant work of compiling the pattern again and again.
Specification:
re.compile(pattern, flags=0)
The method has up to two arguments.
pattern: the regular expression pattern that you want to match.
flags(optional argument): a more advanced modifier that allows you to customize the behavior of the function. Want to know how to use those flags? Check out this detailed article on the Finxter blog.
We’ll explore those arguments in more detail later.
Return Value:
The
re.compile(patterns, flags) method returns a regular expression object. You may ask (and rightly so):
What’s a Regular Expression Object?
Python internally creates a regular expression object (from the
Pattern class) to prepare the pattern matching process. You can call the following methods on the regex object:
If you’re familiar with the most basic regex methods, you’ll realize that all of them appear in this table. But there’s one distinction: you don’t have to define the pattern as an argument. For example, the regex method
re.search(pattern, string) will internally compile a regex object
p and then call
p.search(string).
You can see this fact in the official implementation of the
re.search(pattern, string) method:
def search(pattern, string, flags=0): """Scan through string looking for a match to the pattern, returning a Match object, or None if no match was found.""" return _compile(pattern, flags).search(string)
(Source: GitHub repository of the re package)
The
re.search(pattern, string) method is a mere wrapper for compiling the pattern first and calling the
p.search(string) function on the compiled regex object
p.
Do you want to master the regex superpower? Check out my new book The Smartest Way to Learn Regular Expressions in Python with the innovative 3-step approach for active learning: (1) study a book chapter, (2) solve a code puzzle, and (3) watch an educational chapter video.
Is It Worth Using Python’s re.compile()?
No, in the vast majority of cases, it’s not worth the extra line.
Consider the following example:
import re # These two lines ... regex = re.compile('Py...n') match = regex.search('Python is great') # ... are equivalent to ... match = re.search('Py...n', 'Python is great')
Don’t get me wrong. Compiling a pattern once and using it many times throughout your code (e.g., in a loop) comes with a big performance benefit. In some anecdotal cases, compiling the pattern first lead to 10x to 50x speedup compared to compiling it again and again.
But the reason it is not worth the extra line is that Python’s re library ships with an internal cache. At the time of this writing, the cache has a limit of up to 512 compiled regex objects. So for the first 512 times, you can be sure when calling
re.search(pattern, string) that the cache contains the compiled pattern already.
Here’s the relevant code snippet from re’s GitHub repository:
# -------------------------------------------------------------------- #
Can you find the spots where the cache is initialized and used?
While in most cases, you don’t need to compile a pattern, in some cases, you should. These follow directly from the previous implementation:
- You’ve got more than
MAXCACHEpatterns in your code.
- You’ve got more than
MAXCACHEdifferent patterns between two same pattern instances. Only in this case, you will see “cache misses” where the cache has already flushed the seemingly stale pattern instances to make room for newer ones.
- You reuse the pattern multiple times. Because if you don’t, it won’t make sense to use sparse memory to save them in your memory.
- (Even then, it may only be useful if the patterns are relatively complicated. Otherwise, you won’t see a lot of performance benefits in practice.)
To summarize, compiling the pattern first and storing the compiled pattern in a variable for later use is often nothing but “premature optimization”—one of the deadly sins of beginner and intermediate programmers.
What Does re.compile() Really Do?
It doesn’t seem like a lot, does it? My intuition was that the real work is in finding the pattern in the text—which happens after compilation. And, of course, matching the pattern is the hard part. But a sensible compilation helps a lot in preparing the pattern to be matched efficiently by the regex engine—work that would otherwise have be done by the regex engine.
Regex’s
compile() method does a lot of things such as:
- Combine two subsequent characters in the regex if they together indicate a special symbol such as certain Greek symbols.
- Prepare the regex to ignore uppercase and lowercase.
- Check for certain (smaller) patterns in the regex.
- Analyze matching groups in the regex enclosed in parentheses.
Here’s the implemenation of the
compile() method—it looks more complicated than expected, no?
def _compile(code, pattern, flags): # internal: compile a (sub)pattern emit = code.append _len = len LITERAL_CODES = _LITERAL_CODES REPEATING_CODES = _REPEATING_CODES SUCCESS_CODES = _SUCCESS_CODES ASSERT_CODES = _ASSERT_CODES iscased = None tolower = None fixes = None if flags & SRE_FLAG_IGNORECASE and not flags & SRE_FLAG_LOCALE: if flags & SRE_FLAG_UNICODE: iscased = _sre.unicode_iscased tolower = _sre.unicode_tolower fixes = _ignorecase_fixes else: iscased = _sre.ascii_iscased tolower = _sre.ascii_tolower for op, av in pattern: if op in LITERAL_CODES: if not flags & SRE_FLAG_IGNORECASE: emit(op) emit(av) elif flags & SRE_FLAG_LOCALE: emit(OP_LOCALE_IGNORE[op]) emit(av) elif not iscased(av): emit(op) emit(av) else: lo = tolower(av) if not fixes: # ascii emit(OP_IGNORE[op]) emit(lo) elif lo not in fixes: emit(OP_UNICODE_IGNORE[op]) emit(lo) else: emit(IN_UNI_IGNORE) skip = _len(code); emit(0) if op is NOT_LITERAL: emit(NEGATE) for k in (lo,) + fixes[lo]: emit(LITERAL) emit(k) emit(FAILURE) code[skip] = _len(code) - skip elif op is IN: charset, hascased = _optimize_charset(av, iscased, tolower, fixes) if flags & SRE_FLAG_IGNORECASE and flags & SRE_FLAG_LOCALE: emit(IN_LOC_IGNORE) elif not hascased: emit(IN) elif not fixes: # ascii emit(IN_IGNORE) else: emit(IN_UNI_IGNORE) skip = _len(code); emit(0) _compile_charset(charset, flags, code) code[skip] = _len(code) - skip elif op is ANY: if flags & SRE_FLAG_DOTALL: emit(ANY_ALL) else: emit(ANY) elif op in REPEATING_CODES: if flags & SRE_FLAG_TEMPLATE: raise error("internal: unsupported template operator %r" % (op,)) if _simple(av[2]): if op is MAX_REPEAT: emit(REPEAT_ONE) else: emit(MIN_REPEAT_ONE) skip = _len(code); emit(0) emit(av[0]) emit(av[1]) _compile(code, av[2], flags) emit(SUCCESS) code[skip] = _len(code) - skip else: emit(REPEAT) skip = _len(code); emit(0) emit(av[0]) emit(av[1]) _compile(code, av[2], flags) code[skip] = _len(code) - skip if op is MAX_REPEAT: emit(MAX_UNTIL) else: emit(MIN_UNTIL) elif op is 
SUBPATTERN: group, add_flags, del_flags, p = av if group: emit(MARK) emit((group-1)*2) # _compile_info(code, p, _combine_flags(flags, add_flags, del_flags)) _compile(code, p, _combine_flags(flags, add_flags, del_flags)) if group: emit(MARK) emit((group-1)*2+1) elif op in SUCCESS_CODES: emit(op) elif op in ASSERT_CODES: emit(op) skip = _len(code); emit(0) if av[0] >= 0: emit(0) # look ahead else: lo, hi = av[1].getwidth() if lo != hi: raise error("look-behind requires fixed-width pattern") emit(lo) # look behind _compile(code, av[1], flags) emit(SUCCESS) code[skip] = _len(code) - skip elif op is CALL: emit(op) skip = _len(code); emit(0) _compile(code, av, flags) emit(SUCCESS) code[skip] = _len(code) - skip elif op is AT: emit(op) if flags & SRE_FLAG_MULTILINE: av = AT_MULTILINE.get(av, av) if flags & SRE_FLAG_LOCALE: av = AT_LOCALE.get(av, av) elif flags & SRE_FLAG_UNICODE: av = AT_UNICODE.get(av, av) emit(av) elif op is BRANCH: emit(op) tail = [] tailappend = tail.append for av in av[1]: skip = _len(code); emit(0) # _compile_info(code, av, flags) _compile(code, av, flags) emit(JUMP) tailappend(_len(code)); emit(0) code[skip] = _len(code) - skip emit(FAILURE) # end of branch for tail in tail: code[tail] = _len(code) - tail elif op is CATEGORY: emit(op) if flags & SRE_FLAG_LOCALE: av = CH_LOCALE[av] elif flags & SRE_FLAG_UNICODE: av = CH_UNICODE[av] emit(av) elif op is GROUPREF: if not flags & SRE_FLAG_IGNORECASE: emit(op) elif flags & SRE_FLAG_LOCALE: emit(GROUPREF_LOC_IGNORE) elif not fixes: # ascii emit(GROUPREF_IGNORE) else: emit(GROUPREF_UNI_IGNORE) emit(av-1) elif op is GROUPREF_EXISTS: emit(op) emit(av[0]-1) skipyes = _len(code); emit(0) _compile(code, av[1], flags) if av[2]: emit(JUMP) skipno = _len(code); emit(0) code[skipyes] = _len(code) - skipyes + 1 _compile(code, av[2], flags) code[skipno] = _len(code) - skipno else: code[skipyes] = _len(code) - skipyes + 1 else: raise error("internal: unsupported operand type %r" % (op,))
No need to understand everything in this code. Just note that all this work would have to be done by the regex engine at “matching runtime” if you wouldn’t compile the pattern first. If we can do it only once, it’s certainly a low-hanging fruit for performance optimizations—especially for long regular expression patterns.
How to Use the Optional Flag Argument?
As you’ve seen in the specification, the
compile() method comes with an optional third
flags argument:
re.compile(pattern, text = 'Python is great (python really is)' regex = re.compile('Py...n', flags=re.IGNORECASE) matches = regex.findall(text) print(matches) # ['Python', 'python']
Although your regex
'Python' is uppercase, we ignore the capitalization by using the flag
re.IGNORECASE.
Where to Go From Here?
You’ve learned about the
re.compile(pattern) method that prepares the regular expression pattern—and returns a regex object which you can use multiple times in your code.). | https://blog.finxter.com/python-regex-compile/ | CC-MAIN-2020-50 | refinedweb | 1,977 | 56.15 |
setpgid(2) setpgid(2)
NAME
setpgid(), setpgrp2() - set process group ID for job control
SYNOPSIS
#include <<<<unistd.h>>>>
int setpgid(pid_t pid, pid_t pgid);
int setpgrp2(pid_t pid, pid_t pgid);
DESCRIPTION
The setpgid() and setpgrp2() system calls cause the process specified
by pid to join an existing process group or create a new process group
within the session of the calling process. The process group ID of
the process whose process ID is pid is set to pgid. If pid is zero,
the process ID of the calling process is used. If pgid is zero, the
process ID of the indicated process is used. The process group ID of
a session leader does not change.
setpgrp2() is provided for backward compatibility only.
RETURN VALUE
setpgid() and setpgrp2() return the following values:
0 Successful completion.
-1 Failure. errno is set to indicate the error.
ERRORS
If setpgid() or setpgrp2() fails, errno is set to one of the following
values.
[EACCES] The value of pid matches the process ID of a child
process of the calling process and the child
process has successfully executed one of the
exec(2) functions.
[EINVAL] The value of pgid is less than zero or is outside
the range of valid process group ID values.
[EPERM] The process indicated by pid is a session leader.
[EPERM] The value of pid is valid but matches the process
ID of a child process of the calling process, and
the child process is not in the same session as
the calling process.
[EPERM] The value of pgid does not match the process ID of
the process indicated by pid and there is no
process with a process group ID that matches the
value of pgid in the same session as the calling
process.
Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000
setpgid(2) setpgid(2)
[ESRCH] The value of pid does not match the process ID of
the calling process or of a child process of the
calling process.
AUTHOR
setpgid() and setpgrp2() were developed by HP and the University of
California, Berkeley.
SEE ALSO
bsdproc(3C), exec(2), exit(2), fork(2), getpid(2), kill(2), setsid(2),
signal(2), termio(7).
STANDARDS CONFORMANCE
setpgid(): AES, SVID3, XPG3, XPG4, FIPS 151-2, POSIX.1
Hewlett-Packard Company - 2 - HP-UX Release 11i: November 2000 | http://modman.unixdev.net/?sektion=2&page=setpgid&manpath=HP-UX-11.11 | CC-MAIN-2017-17 | refinedweb | 387 | 69.92 |
LOB not being read correctly
Hello!
I am trying to read a LOB object returned by a spatial function in an Oracle 12c database. In general this works fine, but I've noticed that if the size of the LOB exceeds ~9963 bytes the cursor will omit everything up to the first left parenthesis. So essentially what I'm expecting is:
LINESTRING ( 2720095.00001681 281993.71874480, 2720084.25003831 281969.81262463, [...])
and what I'm getting back is:
( 2720095.00001681 281993.71874480, 2720084.25003831 281969.81262463, [...])
No matter how big the LOB, it always starts right there at the
(. I don't see anything else wrong with the output. It doesn't appear to be truncated at the end.
I tried the same query in a database client (SQL Developer) and couldn't reproduce the issue.
For reference this is my SQL:
SELECT SDE.ST_AsText(SHAPE) FROM STREET_CENTERLINES WHERE SEG_ID = 960897
And Python:
import cx_Oracle db = cx_Oracle.connect('...') c = db.cursor() shape = c.execute('SELECT SDE.ST_AsText(shape) FROM street_centerline where seg_id = 960897').fetchone()[0] print(shape.read())
Any insights would be much appreciated!
Thank you.
I have tried to reproduce the problem with my own sample clob data. However I am not able to see the problem. Can you give schema and data to reproduce the problem? For example:- create table street_centerline(...) insert into table (...)
Development has moved to GitHub. Please open an issue there if this is still an issue for you. Apologies for the inconvenience! | https://bitbucket.org/anthony_tuininga/cx_oracle/issues/20/lob-not-being-read-correctly | CC-MAIN-2018-17 | refinedweb | 247 | 69.79 |
While tidying up some code recently I was pondering the problems with the use of the _method_name() convention to indicate private methods in Perl.
Of course their are many alternative conventions that prevent the accidental overriding of methods in subclasses, for example:
package Foo;
my $_private = sub { ... };
my _private = sub { ... };
sub foo {
my $self = shift;
# we can specify the full package name
$self->Foo::_private();
# we can call as a subroutine
_private($self);
# we can use lexically scoped subs
$self->$_private();
};
[download]
They also all have disadvantages of one sort or another, for example:
With the package name and subroutine calling methods we can prevent the method from being inherited by subclasses by putting it in a seperate package (see (tye)Re: Private Class Methods for an example). So we can do things like:
package Foo;
sub Foo::private::method { ... };
sub foo {
my $self = shift;
$self->Foo::private::method { ... };
};
[download]
Which is nice, but we still have to repeat the Foo::private package every time we call the method - which for VeryLongPackageNames could be tedious.
A convenient shortcut to the private method space would be nice. Maybe something like:
sub Foo::MY::method { ... };
sub foo {
my $self = shift;
$self->MY::method();
};
[download]
This fits in quite nicely with SUPER:: and NEXT::.
This is actually pretty trivial to implement - just stick an AUTOLOAD in the MY package:
package MY;
sub AUTOLOAD {
my $self = shift;
our $AUTOLOAD;
my $method = caller() . "::$AUTOLOAD";
$self->$method(@_);
};
[download]
and Bob's the parental sibling of your choice.
Unfortunately this adds an extra method call worth of overhead to every private method. Probably a bad move - Perl's method invocation is slow enough as it is.
It would also be nice to be able to do:
sub MY::method { ... };
[download]
rather than
sub MyLongPackageName::MY::method { ... };
[download]
Having to repeat MyLongPackageName for every private method definition is a pain.
Looks like a job for (and I don't say this very often because they're evil :-) a source filter.
package MY;
use strict;
use warnings;
my $Imported_from;
sub import { $Imported_from = caller };
use Filter::Simple;
FILTER_ONLY code => sub { s/(\w+::)*MY::/${Imported_from}::MY::/gs };
1;
[download]
Which will allow us to write code like this:
package Foo;
use MY;
sub new { bless {}, shift };
sub hello {
my $self = shift;
$self->MY::greet_world("hi");
};
sub MY::greet_world {
my ($self, $greeting) = @_;
print "$greeting world\n";
};
package Bar;
use base qw(Foo);
use MY;
sub new {
my $class = shift;
$class->MY::greet_world();
return $class->SUPER::new(@_);
};
sub MY::greet_world {
print "A new Bar has entered the world\n";
};
[download]
Without the two private greet_world() methods of Foo and Bar interfering, and without any run-time overhead.
Neat.
Worth throwing at CPAN?
Just model.)..
You :-)
Actually, how about OUR:: instead of My:: - or is that somewhere in Perl 6 too?
Reasoning:
Sound vaguely sane?
Maybe I am being niave but, is there something wrong with MyClass::PRIVATE:: ? It would be the least ambiguous of all IMO. I mean it would be hard to grab the PRIVATE:: root namespace, but if you are using a source filter anyway, you could generate the MyClass::PRIVATE:: "inner"-package with ease.
Actually,.
If
If you want to avoid namespace clashes, you can just avoid the method lookup entirely and pass the object as the first parameter to the private method
This option was mentioned several times in the OP you know :-)
Two problems:,
has $.foo; # public attribute
has $:bar; # private attribute
method foo (){...} # public method
method :bar () {...} # private method
...
$self.foo() # always calls public method virtually
.foo() # unary variant
$self.:bar() # always calls private method
.:bar() # unary variant
...
$self.bar() # calls public bar, not private.
[download]
has $.foo; # public attribute
has $:bar; # private attribute
method foo (){...} # public method
method :bar () {...} # private method
...
$self.foo() # always calls public method virtually
.foo() # unary variant
$self.:bar() # always calls private method
.:bar() # unary variant
...
$self.bar() # calls public bar, not private.
| http://www.perlmonks.org/?node_id=332744 | CC-MAIN-2015-18 | refinedweb | 654 | 64.71 |
Agenda
See also: IRC log
Date: 31 January 2008
<scribe> Scribe: Norm
<scribe> ScribeNick: Norm
<jar> i've muted my phone... i think that helped
<jar> didn't raman give regrets?
Dave is expected partway through the meeting
Stuart: Pretty much as published, with a little reordering and a new item from Henry
Agenda accepted
Accepted
Proposed to scribe: Dave
Stuart to chair
<timbl> Regrets: Feb 21
<AshokMalhotra> Possible regrets next week
No regrets given for 7 Feb; Tim for 21 Feb.
Stuart: Welcome in a more formal way to Ashok and Jonathan. Also congratulations and welcome back to Henry and Raman.
... Perhaps we could do a bit of a round table.
Dan: I co-chair the HTML WG, occupying about 150% of my brain. Tag soup integration is always on my mind. Also IETF liaison, so mime-type issues always pique my interest. I'm interested in Namespace Document 8 and sem-web related issues.
Henry: I have three documents on the critical path: Namespace Document 8, which is close, XML Functions 34, URNsAndRegistries 50. Otherwise known as why all schemes other than http: are evil.
<DanC_lap> (I forgot to say: I'm interested in learning about information theory and economics, since large-scale considerations often dominate semicolon-vs-comma level design decisions, even in HTML)
Henry: I'd like to spend more time on the vocabulary work currently going on in the sem web subgroup.
... we could do better at making clear what URIs are, what resources are, etc.
Jonathan: I'm at Science Commons and from that PoV we have a strong interest in the semantic web and identifier schemes and document metadata.
Noah: I'm not sure how much introduction is needed; I know Ashok and Jonathan a bit. I'm no longer on the Protocol WG. I am still involved in XML Schema.
... I can't say I have a technical hot button; I just think the web is really important, and at its best the TAG has an opportunity to explain things that are subtle.
... We can also promote clear thinking.
... The web is something like a telephone system: it has to keep working in 30 or 50 years.
... I'm wrapping up a draft on the self-describing web, which doesn't have an issue.
... I tried to take a crack at the relationship between schemes and protocols, but I've put that down for a bit.
Stuart: I've been co-chairing for a while. My strong interests are in the semantic web. I can't seem to leave issues related to identifiers alone. I find some of the ontology aspects really absorbing and hard.
Tim: Generally the semantic web.
I think it's great that we have a subgroup doing semantic web
architecture.
... We need to be able to write these things in RDF and describe relationships between them.
... My current 'tabulator' project makes some of these issues urgent for me.
... All sorts of other things hit me at glancing angles: versioning in HTML and XML.
... There have been discussions, for example, about XML being upgraded. That's an example of one of the many times we've messed up versioning. We've got a lot of material thanks to Dave but we haven't boiled it down to truths.
Ashok: I started on Schema in
1999. I worked with Noah and Henry on it for many years. I also
did XML Query where I worked with Norm. Most recently, I've
been doing WS-Policy where I'm working with Dave.
... Now I'm focussed mainly on web services. I've been doing lots of OASIS work on web services: WS-Policy, etc. The other thing I'm trying to start is an incubator group to map relational data to RDF and OWL.
... That's taken a little while to get started, but once it starts, I think the TAG might have some wisdom to offer.
Norm: I'm co-chair of the XML Core WG and chair of the XML Processing Model WG so XML issues are always on my mind. I'm interested in the tag-soup nexus of issues. I'm interested in issues related to URIs and resources and the semantic web as well.
Staurt: I'd like to make a formal decision about the two meetings following Vancouver.
Stuart: There's been a WBS poll
for a while now. The September proposal is pretty strong.
... For Bristol, we are at risk for not having TV, Dave, and Dan for some or all of that meeting.
Dan: The risk for me is a semweb conference on the west coast that looks really cool, but I guess I could miss it.
Dave: Monday is a public holiday in CA, so we're likely to have plans, though we don't have any yet.
Stuart: Does anyone have reservations about us meeting w/o those participants.
Henry: Given how hard we've tried to find another date without success, I think we should go ahead.
Dan: My risk is negligible, let's ignore it.
Norm: I'm with Henry, it may not be ideal, but we can't find anything better.
Stuart: I propose that we adopt those two sets of dates.
Dave abstains, no objections.
Accepted.
<timbl> * Spring: 19th-21st May 2008 (Mon-Wed), Bristol UK, hosted by HP Labs, Bristol (Stuart)
<timbl> * Summer: 23rd-25th September 2008 (Tue-Thu), Kansas City, USA, hosted by W3C (DanC)
RESOLUTION: The TAG will meet 19-21 May in Bristol and 23-25 September in Kansas City
Stuart: Noah posted a note about the use of META tags to trigger standards-compliant rendering in browsers
<Noah>
Noah summarizes his message and how he came to discover this topic.
Noah: Roughly what's going on is
that users got dependent on how older versions of IE rendered
pages.
... But there is a desire to move forward. Some versions keyed off the presence of the DOCTYPE declaration.
... For a combination of reasons, they feel that's no longer working. If they did the same thing in IE8, it would break a lot of content tailored for IE7 and IE6.
... The proposal that's been floated is to use a new http-equive meta tag.
... I think the spin on that is that a site-wide HTTP header can set a global optoin.
... If you don't use the meta tag, you get quirky interpretation. If you do use the meta tag, then you identify the level of IE that you believe is best for your content.
... I have at least two concerns: the first is whether this is in any way, shape or form a good idea. The other is, what happens to follow your nose.
... I don't think it woudl break webarch at that level if (scribe: iff?) the HTML spec says something about that meta tag.
... Without that in the HTML spec, I'm not sure it's legitimate at all.
Dave: I think this is a great
thing to discuss. This is effectively a kind of browser
sniffing as TV pointed out.
... I guess there's a bunch of different aspects that are ... interesting.
... One is that if there's a version attribute, it'll be the *browser* version.
... Then there's where it's going to be, in the meta tag instead of a version attribute on the HTML tag or as a parameter on the media type.
... Then there's the fact that the default is going to be IE7 mode. The expectation is that a lot of people are going to forget to do this, so they'll be frozen indefinitely at IE7.
... Then there's the question of whether or not anything can actually be done about this.
<Noah> Norm: I don't think this is a great solution.
Norm: I appreciate that there are some hard problems here, but I think the proposed solution is awful.
<Zakim> DanC_lap, you wanted to think about economics and information theory of the http header
<timbl> How about an HTTP spec where you can quote the tracker URI of a bug you require?
Dan: David Barren gave a pretty coherent argument about the economics of putting the version identifier inside the document.
<timbl> So we have a tag for "Best viewed by" at last .. sigh.
Dan: If the HTML WG decided that
this was the right thing to do then, Firefox version 12 would
contain versions 11, 10, 9, etc.
... This is only practical for the guys with the biggest guns.
... I found this pretty compelling argument against a version attribute in the language
<timbl> Maybe the HTML spec should give a set of "Best viewed with" which are automatically inserted when this ttribute is found.
DanC: On the other hand, having
the version inside or outside the document is important.
... The spec documents say I send you a request, you send a document.
... In practice, you send me some bytes and you expect those to be interpreted according to the dominant browser at the time.
... So if you want your document to be interpreted per the specification, you're in the minority.
... It makes sense from an economic sense that the minority should pay a few more bits.
... If we get to the story where the deployed software obeys the specs, then you can throw away the HTTP header.
<Zakim> ht, you wanted to query dan
Henry: I don't understand how
what you just said renders less signficant Dave Baron's
observation.
... I thought you were going to say that if you move it into the HTTP header, then you can just launch the right browser.
... But then I thought you said it worked equally well inside or outside and that doesn't work for me.
DanC: What I mean is that if you
have a version flag that can be used in either place, you can
have a marketplace where some browsers ignore the flag and just
go as close to the specs as they can
... and other browsers obey it and the web gets better over time.
Henry: I don't see the connection with inside or outside
DanC: If it's outside, then the document doesn't have to change as the browsers evolve.
Some more discussion
DanC: I'm not interested in
supporting users who write code for a specific browser.
... MS can't ship a browser that obeys the standards because it won't get uptake.
<Zakim> Stuart, you wanted to ask folks how we feel abouts a situation where we have to deal with versions of interpretation/implmentations rather than the spec.
Stuart: We're now in a situation where we're concerned about the interpretation of a particular version of a spec. That seems weird.
<ht> HST wonders how serious the pushback was to the IE7 move which sparked this
<Zakim> Noah, you wanted to ask about range of user agents
DanC: Everything is weird about the HTML space. It's about economics and biology more than computer science.
Noah: The rule of least power
encourages users to write content that is idependent of
particular user agents. That's a good thing when you can get
ther.
... The simplest HTML is sort of like that. There are headers and paragraphs, and exactly how that's interpreted is up to the UA.
... Certain kinds of commercial work demanded greater fidelity.
... When you see this meta thing, if we could say that the core abstractions were the same, but that the meta would promise that corners on tables wouldn't be rounded, that'd be one thing.
... But I don't see any bound on it. I'd love to see a stake in the ground that says "here are the things you can't change in the meta tag".
... As long as I stick to certain things, I'll know that everyone is going to interpret it the same. If I go beyond that, to CSS corners or broken markup, then maybe the meta value will matter.
Stuart: Increasingly with subscription environments, the question is less about what pixels go on the screen and more about what DOM gets built.
<Zakim> Stuart, you wanted to stay that it goes way beyond screen rendering
Noah: The punchline for me is, when I see a meta tag, are all bets off or is there some level of functaionlity that I can rely on.
DanC: The hardest part about this
stuff is that you don't find out what the tokens mean until
well after they're issued. The browsers see the "Mozilla" token
and so they send CSS. So IE sends the Mozilla token. And then
some labels become labels for sets of bugs.
... What the label stands for is really hard to figure out in advance.
... Another kind of code tries functions and based on return values makes decisions about functions it can actually use.
... Consider the GNU autoconf stuff. It starts with now information and probes for various things.
<jar> danc, i think you meant 'autoconf'
Stuart: Is there more to be said now?
Dave: I wonder how this relates to our work on the versioning finding. I haven't really thought that through.
TimBL: This would definitely be a
good story.
... So would the XML 1.0 5e story.
Stuart: I don't see a particular action to leave dangling here.
Stuart: Dave had an action to publish it and solicit comments.
Dave: The edits that I did were
slightly more than I was asked to do. Because I picked up the
ball recently, I wanted to make sure that the group was happy
with my changes.
... I had hoped to get a diff out. Norm offers to diff them.
Norm: Diff 20071124 with 20080124.
Dave: yes.
<scribe> ACTION: Norm to create a diff of passwordsInTheClear [recorded in]
<trackbot-ng> Sorry, couldn't find user - Norm
<scribe> ACTION: Walsh to create a diff of passwordsInTheClear [recorded in]
<trackbot-ng> Created ACTION-97 - Create a diff of passwordsInTheClear [on Norman Walsh - due 2008-02-07].
<ht>
Dave: I'll listen until Wednesday and send something out if no one objects.
Stuart: Ok, that's what we'll do then.
<ht>
Henry: It turns out that the XRI
TC has published a Committee Specification for XRI resolution
2.0.
... The comment period closes tomorrow.
<DanC_lap> (ends tomorrow? when did it start? ah... 2 Dec. hmm... who is our oasis liaison, I wonder...)
<ht>
<Stuart> "
Henry: This is what I wrote on the basis that it's been a long time since we talked about it.
<jar> do w3c and oasis coordinate?
<DanC_lap> (the main place where XRI shows up on my radar lately is near OpenID)
<DanC_lap> (oasis liaison is Karl, says )
Henry: For reasons I have to say
I don't understand, they've gotten themselves written into
OpenID 2.0.
... Implementing OpenID 2.0 mandates implementing XRI.
Noah: Can you explain that?
<dorchard> I had understood that it was optionally in open id.
Henry: You have to be able to decode XRIs and implement the authority lookup protocol in order to find out what the OpenID is.
Danc: Folks are saying http:// is
too ugly, let's have =danc instead. And then people ask about
email addresses. The subtext is "oh, no, no, no, we want to be
able to collect money when people invent these"
... I've heard that one of the reasons the OpenID folks didn't go to the IETF is because the IETF would expose this.
Henry: It's very hard to find the
current, relevant bits. Lots of stuff on the web is old.
... I was told I could register =henry, =henrythompson, and @ibm!
... I
<jar> dns costs money too... ??
Henry: I'd like to talk about
this more, but the fundamental architectural proposal behind
this is to introduce a mandatory level of indirection into all
addressing.
... The core operation you can do is to retreive metadata about a resource.
DanC: So the design mandates an
extra round trip on the network. That's the number one thing to
avoid in a protocol.
... I'm happy to say that on behalf of the TAG by tomorrow.
Henry: That will put a stake in
the ground, but it's fundamental to their design.
... at the end of the docment I pointed to earlier, you'll see a list of the services you can get on an XRI
... With the right bits, the redirection would have been automatic.
Tim: If we haven't said it strongly enough, we should say again and again that conneg should only be used for two different representations of teh same thing.
Henry: Yes, that seems to be broken here too
Stuart: I've heard two things, one on the content negotiation, and one on the mandatory round trip.
Henry: I don't fully undertand all the dimensions of this yet. There's a distinction between URIs with and without service identifiers, for example.
Stuart: It is possible to express concerns in a general way and ask for more time?
DanC: We can also ask them as questions in the meantime.
Tim: Can't we do both?
... Lodge the complaints we really have and ask for clarification elsewhere.
TimBL: Isn't the privately owned naming scheme a problem to OASIS?
Henry: I don't see how we can make that argument given taht you have to pay someone to get a DNS name.
Stuart: Another possible technical question, XRI has been injected into OpenID, does that mean that XRI URIs are special in OpenID. So you're not treating URIs in a general way.
Henry: On the wiki's and things, they use XRIs so the agents do have to be able to recognize and interpret them.
Noah: Is the lack of URI
syntactic compatibility another issue? Let's say that XRIs
happen, can I put them in the same slots where URIs can go or
is that another issue?
... Do you really always know that when you want an XIR you don't want any other kind of URI or vice-versa?
<ht> Something like "Do we understand that XRIs _without the xri:// part_ must be recognised as alternatives to http: URIs for OpenID2.0 implementations?"
Stuart looks for volunteers to submit these comments
Henry says he's draft it now
<Noah> I think it's more than Open ID. "To what extent is it expected that there will be use cases in which a choice of URI or XRIs without the explicit scheme name to be allowed in, for example, the same attribute value or input field? If so, then how are the syntaxes to be coordinated to avoid collision?
<ht> "Is it a consequence of the spec., as it appears to us to be, that a) All access to resources identifies by XRIs requires (at least) two round trips and b) that content negotiation is used to return metadata or resource representations?"
<jar> i'm not a tag member
<jar> when is it due
Some discussion of the optional nature of the xri: part of the URIs.
<jar> yuck. it would take a while for me to track down the round-trip logic, etc
<DanC_lap> (what's the email address comments are due to?)
<AshokMalhotra> There is a ptr in Henry's document
<DanC_lap> it's due 1 Feb, tomorrow, per
<jar> sorry i'm not more forthcoming. in the middle of grants stuff
<ht> Sigh, it's like the last time: comments are to be made via a web form
<ht> So we send email to www-tag with our comments, and point to it from the form, I think
<ht> form is at
<DanC_lap> (ah; good; ht can take a look at a draft)
<scribe> ACTION: Noah to craft comments and send them on our behalf. [recorded in]
<trackbot-ng> Created ACTION-98 - Craft comments and send them on our behalf. [on Noah Mendelsohn - due 2008-02-07].
<Noah> That due date looks suspiciously late. I think it's 1 Feb 2007
<Zakim> Noah, you wanted to suggest my agenda items
Noah: As promised, I'm mighty close to a new draft on self describing web.
<DanC_lap> (oops; 4 overdue actions... )
Noah: I'd like that on the
agenda.
... I've also been thinking about http-range and 303 and that might be ready in time.
Stuart: I think namespaceDocument-8 is really on the brink of closure, we should try to get that closed.
Norm: I'd like to see xmlFunctions-34 on the agenda.
Stuart: Dave's not here so I
can't ask about logistics.
... Are folks generally happy with the logistics?
For the meeting, yes.
Adjourned
<jar> bye
This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/ityt/ity/ Succeeded: s/I'm co/Norm: I'm co/ Succeeded: s/iff?/scribe: iff?/ Succeeded: s/Barren/Baron/ Succeeded: s/nc.e/nce./ Succeeded: s/automake/autoconf/ Succeeded: s/service providers/service identifiers/ Succeeded: s/wehre/where/ Succeeded: s/hwen/when/ Succeeded: s/or XRI is/or XRIs without the explicit scheme name/ Succeeded: s/hear/here/ Found Scribe: Norm Inferring ScribeNick: Norm Found ScribeNick: Norm Present: Stuart Norm Jonathan Tim Ashok Dan Noah Regrets: Dave Raman Agenda: Found Date: 31 Jan 2008 Guessing minutes URL: People with action items: noah norm walsh[End of scribe.perl diagnostic output] | http://www.w3.org/2008/01/31-tagmem-minutes.html | CC-MAIN-2015-48 | refinedweb | 3,595 | 72.66 |
2D rotations in Eigen (C++).03 May 2021
Eigen, the matrix library in C++, is so cool. Having written some of these functions during the 00s in my own code, agonizing about whether I got the subscripts right, and now to have this – it feels like magic. Here’s an example.
So I have a homography, which is a 3x3 matrix that happens to be in OpenCV’s Mat format. So that this gets written, I’ll use what I am working on now. I want to extract the rotation matrix from this homography, and grab the rotation angle. For this I can use Eigen’s Rotation2D class.
You’ll want to include these includes, possibly
using namespace Eigen if you like doing so. I am putting the namespace in front of everything in the code example so you know where everything comes from, though it reads a little clunky.
#include <Eigen/Dense> #include <Eigen/Eigenvalues> #include <Eigen/Geometry>
So I have
cv::Mat H = findHomography(... , ...);
and know that we you can decompose a homography into a chain of transformations (see Richard Hartley and Andrew Zisserman’s book Multiple View Geometry in Computer Vision, chapter two and specifically 2.4.6). I have some prior information about this case, and a general idea that the rotations will fall into 4 general cases. I just want that angle!
Ok, to get from a homography to a rotation matrix we use our friend SVD. First, copy the OpenCV Mat over to an Eigen MatrixXd object and then run Eigen’s SVD function on it. SVD needs a matrix with a dynamic size, so MatrixXd, versus Matrix2d. How do I know this? Runtime error.
Eigen::MatrixXd mat2d(2,2); // copy for (int r = 0; r < 2; r++){ for (int c = 0; c < 2; c++){ mat2d(r, c) = H.at<double>(r, c); } } // svd Eigen::JacobiSVD<MatrixXd> svd(mat2d, ComputeThinU | ComputeThinV); // ortho = U*V^T Eigen::Matrix2d ortho2d = svd.matrixU()*svd.matrixV().transpose();
ortho2d is an orthonormal 2x2 matrix, in other words, the rotation matrix we’ve been looking for. Here’s the
Eigen::Rotation2D class part.
Eigen::Rotation2D<double> rot2d(ortho2d);
Eigen::Rotation2D is a templated class, so when you declare a variable it has to be with a type. I was initially confused because in Eigen
d is
double and say for matrices, when you declare a variable, you select types Matrix3f (3x3, float), Matrix2d (2x2, double), etc. In this case,
2D does not change, which represents ‘two dimensional’ and you change the type to
float,
double, whatever.
This isn’t super well-documented in the front matter of the class, but this is the standard that the angle of the rotation will be in radians and counter clock-wise. I used the matrix constructor by sticking in the Matrix2d
ortho2d.
Then, I can quickly use the angle:
out << "angle: " << rot2d.smallestPositiveAngle() << endl;
smallestPositiveAngle() is very handy because it returns the angle within [0, 2pi]. For anyone who has ever converted angles to try to reason about them … this is great. There are other functions too.
I haven’t tried to break it yet and see what happens if I try to construct an
Eigen:Rotation2d object with a non orthonormal matrix. The constructor documentation says the argument matrix needs to be a rotation matrix.
© Amy Tabb 2018-2021. All rights reserved. The contents of this site reflect my personal perspectives and not those of any other entity. | https://amytabb.com/til/2021/05/03/eigen-rotation-2d/ | CC-MAIN-2021-21 | refinedweb | 576 | 64.91 |
Generic code to execute any stored procedure/batch of stored procedures with different number of parameters and data types
Latest C# Articles - Page 99.
Access Newly Available Network Information with .NET 2.0
A new namespace in the upcoming 2.0 release of the Microsoft .NET Framework adds support for some very useful network-related items. Explores some of these new items and how you can use them to your advantage.
Batched Execution Using the .NET Thread Pool
The .NET thread pool's functionality for executing multiple tasks sequentially in a wave or group is insufficient. Luckily, a Visual C++.NET helper method that uses other types within the System.Threading namespace provides this batch-execution model.
Asynchronous Socket Programming in C#: Part II
Second part of the C# asynchronous socket example, showing more features in socket programming.. | http://www.codeguru.com/csharp/csharp/588/ | CC-MAIN-2017-04 | refinedweb | 138 | 52.87 |
Efficient Lightweight JMS with Spring and ActiveMQ
2009/10/16
Asynchronicity: it's the number one design principle for highly scalable systems, and for Java that means JMS, which in turn means ActiveMQ. But how do you use JMS efficiently? One can quickly become overwhelmed by talk of containers, frameworks, and a plethora of options, most of which are outdated. So let's pick it apart.
Frameworks
The ActiveMQ documentation makes mention of two frameworks: Camel and Spring. The decision here comes down to simplicity vs functionality. Camel supports an immense number of Enterprise Integration Patterns that can greatly simplify integrating a variety of services and orchestrating complicated message flows between components. It's certainly best of breed if your system requires such functionality. However, if you are looking for simplicity and support for the basic best practices then Spring has the upper hand. For me, simplicity wins out any day of the week.
JCA (Use It Or Lose It)
Reading through ActiveMQ's Spring support documentation, one is instantly introduced to the idea of a JCA container and ActiveMQ's various proxies and adaptors for working inside of one. However, this is all a red herring. JCA belongs to the heavyweight Java EE world and, as with most of the EJB specification, Spring doesn't support it. Then there is a mention of Jencks, a "lightweight JCA container for Spring", which was spun off of ActiveMQ's JCA container. At first this seems like the ideal solution, but let me stop you there. Jencks was last updated on January 3rd, 2007. At that time ActiveMQ was at version 4.1.x and Spring was at version 2.0.x, and things have come a long way, a very long way. Even trying to get Jencks from the Maven repository fails due to dependencies on ActiveMQ 4.1.x jars that no longer exist. The simple fact is there are better and simpler ways to ensure resource caching.
Sending Messages
The core of Spring's message sending architecture is the JmsTemplate. In typical Spring template fashion, the JmsTemplate abstracts away all the cruft of opening and closing sessions and producers, so all the application developer needs to worry about is the actual business logic. However, ActiveMQ is quick to point out the JmsTemplate gotchas, chiefly that the JmsTemplate is designed to open and close the session and producer on each call. To prevent this from absolutely destroying messaging performance, the documentation recommends using ActiveMQ's PooledConnectionFactory, which caches the sessions and message producers. However, this too is outdated. Starting with version 2.5.3, Spring started shipping its own CachingConnectionFactory, which I believe to be the preferred caching method. (UPDATE: In my more recent post, I talk about when you might want to use the PooledConnectionFactory.) However, there is one catch to point out. By default the CachingConnectionFactory only caches one session, which the javadoc claims is sufficient for low concurrency situations. By contrast, the PooledConnectionFactory defaults to 500. As with most settings of this type, some amount of experimentation is probably in order. I've started with 100, which seems like a good compromise.
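To make the comparison concrete, here is a hedged sketch of wiring both options up programmatically. The broker URL and the sizes are illustrative values, and setMaximumActive is my recollection of the activemq-pool setter of this era, so treat those as assumptions.

```java
// Sketch: Spring's CachingConnectionFactory vs ActiveMQ's PooledConnectionFactory.
// Broker URL and sizes are illustrative, not from the configuration below.
ActiveMQConnectionFactory amq = new ActiveMQConnectionFactory("tcp://localhost:61616");

// Spring's wrapper caches a single session by default, so raise it.
CachingConnectionFactory caching = new CachingConnectionFactory(amq);
caching.setSessionCacheSize(100);

// ActiveMQ's wrapper pools sessions and producers, defaulting to 500.
PooledConnectionFactory pooled = new PooledConnectionFactory(amq);
pooled.setMaximumActive(100); // match the sizes when benchmarking the two

// Either factory can back the JmsTemplate.
JmsTemplate jmsTemplate = new JmsTemplate(caching);
```

Since swapping the factory is a one-line change, benchmarking both in your own system is cheap.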
Receiving Messages
As you may have noticed, the JmsTemplate gotchas strongly discourage using the receive() call on the JmsTemplate, again since there is no pooling of sessions and consumers. Moreover, all calls on the JmsTemplate are synchronous, which means the calling thread will block until the method returns. This is fine when using the JmsTemplate to send messages, since the method returns almost instantly. However, when using the receive() call, the thread will block until a message is received, which has a huge impact on performance. Unfortunately, neither the JmsTemplate gotchas nor the Spring support documentation mentions the simple Spring solution to these problems. In fact they both recommend using Jencks, which we already debunked. The actual solution, using the DefaultMessageListenerContainer, is buried in the how do I use JMS efficiently documentation. The DefaultMessageListenerContainer allows for the asynchronous receipt of messages as well as caching sessions and message consumers. Even more interesting, the DefaultMessageListenerContainer can dynamically grow and shrink the number of listeners based on message volume. In short, this is why we can completely ignore JCA.
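To show what that grow-and-shrink behavior looks like, here is a sketch of the equivalent programmatic setup; the 1-to-10 concurrency range is an assumption for illustration, not a recommendation.

```java
// Sketch: programmatic DefaultMessageListenerContainer setup, mirroring
// the XML configuration below.
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);   // ideally a caching factory
container.setDestinationName("Queue.Name");
container.setMessageListener(new QueueListener());
container.setConcurrentConsumers(1);                 // floor
container.setMaxConcurrentConsumers(10);             // ceiling; scales with volume
container.afterPropertiesSet();
container.start();
```

In practice you would let Spring manage the container via the jms namespace, as in the configuration that follows, rather than starting it by hand.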
Putting It All Together
Spring Context XML
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xmlns:jms="http://www.springframework.org/schema/jms"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
           http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd
           http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- enables annotation based configuration -->
    <context:annotation-config />

    <!-- scans for annotated classes in the com.company package -->
    <context:component-scan base-package="com.company" />

    <!-- allows for ${} replacement in the spring xml configuration
         from the system.properties file on the classpath -->
    <context:property-placeholder location="classpath:system.properties" />

    <!-- creates an activemq connection factory using the amq namespace -->
    <amq:connectionFactory id="amqConnectionFactory" brokerURL="${jms.broker.url}" />

    <!-- CachingConnectionFactory Definition, sessionCacheSize property is the number of sessions to cache -->
    <bean id="connectionFactory"
          class="org.springframework.jms.connection.CachingConnectionFactory">
        <constructor-arg ref="amqConnectionFactory" />
        <property name="exceptionListener" ref="jmsExceptionListener" />
        <property name="sessionCacheSize" value="100" />
    </bean>

    <!-- JmsTemplate Definition -->
    <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <constructor-arg ref="connectionFactory" />
    </bean>

    <!-- Message Listener Container Definition -->
    <jms:listener-container connection-factory="connectionFactory">
        <jms:listener destination="Queue.Name" ref="queueListener" />
    </jms:listener-container>
</beans>
There are two things to notice here. First, I’ve added the amq and jms namespaces to the opening beans tag. Second, I’m using the Spring 2.5 annotation based configuration. By using the annotation based configuration I can simply add @Component annotation to my Java classes instead of having to specify them in the spring context xml explicitly. Additionally, I can add @Autowired on my constructors to have objects such as JmsTemplate automatically wired into my objects.
QueueSender
package com.company;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class QueueSender {

    private final JmsTemplate jmsTemplate;

    @Autowired
    public QueueSender( final JmsTemplate jmsTemplate ) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send( final String message ) {
        jmsTemplate.convertAndSend( "Queue.Name", message );
    }
}
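convertAndSend() covers the common case of sending a payload as-is. When you need to set message properties or headers, you can drop down to the lower-level send() with a MessageCreator; in this sketch the property name is made up for illustration.

```java
// Sketch: setting a message property requires the lower-level send().
jmsTemplate.send( "Queue.Name", new MessageCreator() {
    public Message createMessage( final Session session ) throws JMSException {
        final TextMessage message = session.createTextMessage( "payload" );
        message.setStringProperty( "origin", "queue-sender" ); // illustrative property
        return message;
    }
} );
```

The session passed to createMessage() comes from the cached pool, so this stays just as cheap as convertAndSend().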
Queue Listener
package com.company;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.springframework.stereotype.Component;

@Component
public class QueueListener implements MessageListener {

    public void onMessage( final Message message ) {
        if ( message instanceof TextMessage ) {
            final TextMessage textMessage = (TextMessage) message;
            try {
                System.out.println( textMessage.getText() );
            } catch (final JMSException e) {
                e.printStackTrace();
            }
        }
    }
}
JmsExceptionListener
package com.company;

import javax.jms.ExceptionListener;
import javax.jms.JMSException;

import org.springframework.stereotype.Component;

@Component
public class JmsExceptionListener implements ExceptionListener {

    public void onException( final JMSException e ) {
        e.printStackTrace();
    }
}
Great article. We so need to tidy up the ActiveMQ website!
Hiram gave me wiki edit privileges, it's on my todo list. Of course that will invalidate most of my whining in the post… ;-)
Great article, Benjamin! I’m just trying to integrate ActiveMQ v5.3 into SpringSource dm Server v2.0.4 and it seems to work :-) Well done, keep writing!
Nice post.
Thanks for clearing up the Jencks / session pooling issues, I think this was very much needed.
When you update the ActiveMQ website, it would be great if you could follow up this post with the link… and any other resource optimizations you’ve learned.
Very nice write up of this topic. I wish I had seen it before I had finished my project. Thanks for the information about Jencks, I had started looking down that path, but never got that far… I guess I don’t need to revisit.
Do you have any numbers to compare the ActiveMQ pooled connection factory vs the one shipped with Spring? I agree that Spring usually does a decent job with performance, but with the JMSTemplate written the way it is, I just wasn’t sure whether to use Spring or ActiveMQ’s connection pooling.
Again, thanks for the write up on this topic.
I don’t have numbers comparing the two but it’s a very simple change in the xml config so switching them back and forth in your system should be trivial. My gut feeling is that the Spring version is more aware of how to deal with the rest of the Spring JMS classes and with the Spring lifecycle, which is why I went with it. Additionally, ActiveMQ pool requires the activemq-pool jar and has a dependency on apache commons pool whereas the Spring version has no dependencies and comes for free with the spring-jms jar. If you do test this, make sure you set the number of sessions to be the same on both.
Thanks for this concise article.
A lot of information and explanation in such a short way.
Would be great if we can follow up on your blog about transaction integration, something like sessionTransacted on AbstractPollingMessageListenerContainer, in the future…
Kurt, thanks for the compliment. Unfortunately I don't use transactions so I'm not that familiar with it, though you probably need to use a JmsTransactionManager. If you need to coordinate message and database transactions I think you need to use JCA, which Spring supports as of 2.5. Hope that helps.
Hey Benjamin,
This is very nice stuff. But I am a little skeptical about how to execute JMS efficiently in a non-Spring framework. I'm working in Struts currently and I have initialized the factory and producers in a static way. Since, as you just mentioned, it creates a connection on every request, that's why I kept the connection and producers in static variables.
/** Start the broker */
static {
    BrokerService broker = new BrokerService();
    try {
        broker.addConnector("vm://localhost");
        broker.start();
    } catch (Exception e) {}
}

/* Make the connection */
static {
    try {
        connectionFactory = new ActiveMQConnectionFactory(
                ActiveMQConnection.DEFAULT_USER,
                ActiveMQConnection.DEFAULT_PASSWORD, "vm://localhost");
        connection = connectionFactory.createConnection();
        connection.setExceptionListener(new JMSExceptionListener());
        connection.start();
        session = connection.createSession(transacted, Session.AUTO_ACKNOWLEDGE);
        destination = session.createTopic("Message Center");
        producer = session.createProducer(destination);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
    } catch (Exception e) {}
}
And after this I'll directly use my producer object from other classes without using the new operator.
Please advise whether the way I am doing it is efficient, or do you think something else can be done?
Really, thanks in advance.
Only connections are thread safe, producers and sessions are not. If you are using the producer object in a multi threaded environment (which you probably are with struts) you will probably run into concurrency issues.
I would recommend using the PooledConnectionFactory since it will cache sessions and message producers. Then you can get a new session and producer every time you want to send a message.
Luckily message sending is pretty straightforward using the PooledConnectionFactory. If you are also doing message consumption then you have a whole other set of issues.
What he said, but also: starting that stuff in static initializers inside try's with no real catches is going to make it next to impossible to a) test, b) debug if there is actually a problem with initialization. Putting it in static blocks means that you won't even be able to write a method to check whether the broker is running without starting it. Any time you load the class in any way, you'll start up the message bus.
Just a thought.
@Benjamin
PooledConnectionFactory is a good idea. Thanks Benjamin.
—————————————————–
@John
Reason for static initializers:
Actually I implemented the singleton approach for the producer object. That's why I was initializing it once in the static block, and thereafter I'll keep using the same object to send different messages.
I was just curious about whether I'll run into any thread conflicts if I use the same producer object for sending messages.
Well I think, even though the producer object is shared between threads, because I am not modifying the internal state of the producer object, it should not cause any concurrency problems.
Please bear with my questions because I am a novice and you both are pros.
Apurav,
Maybe I'm not fully understanding you, but you said the producer is shared between threads. If that's the case then you will probably run into concurrency problems.
Also you should look into lazy loading the singleton; note this idiom is tricky and relies on some weirdness in the JVM, so you should use that code to the letter. And do something useful in your catch block, even if all you do is write to standard error.
hmm! thanks
I'm working with Flex, Spring, ActiveMQ and BlazeDS. I have a producer that posts messages to a topic every second. When I first subscribe to that topic with a Flex client everything is OK. After I close the browser and re-open the application, I receive all the messages that were sent while I was offline. The topic is NOT durable. It seems that ActiveMQ is not closing the connection with the Flex client even though I closed the browser. Do you know what the problem might be? If I subscribe/unsubscribe from a Java client everything is OK.
1) How have you checked that the topic is non durable? It never hurts to question assumptions.
2) Is it possible to explicitly unsubscribe and or close the connection / socket in flex before you close the browser? I know that flash doesn’t have a shutdown hook but I’m not sure about flex.
3) Is it possible to explicitly start up a new subscription / connection /socket when you reopen the app?
We have seen some crazy behavior in flash re: sockets. The vm will hold on to them and keep trying to reconnect gracefully for the client app. Seems nice when you are a novice programmer dealing with intermittent network issues but can actually be kinda annoying and unexpected if you are more advanced.
Besides that I have no clue, I’m not a flex/flash/blazeds guy.
Try doing:
producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
Might work
Pingback: Synchronous Request Response with ActiveMQ and Spring « CodeDependents
Pingback: ConnectionFactories and Caching with Spring and ActiveMQ « CodeDependents
Certainly, the documentation surrounding the Jencks libraries is confusing, and performance comparisons between the various connection pooling options are few and far between, but I thought I should point out that there are more recent releases than stated above (Jencks 2.2 is the latest, from Sept 2010).
Mark
Hello !
I am new to spring and amq.
I have created springcontext.xml and system.properties and classes as mentioned in article.
and wrote Test class is as follows:
public class Test {
    private static ClassPathXmlApplicationContext applicationContext;

    public static void main(String[] args) {
        String[] configuration = new String[] { "/springcontext.xml" };
        Test.applicationContext = new ClassPathXmlApplicationContext(configuration);
        Test.applicationContext.registerShutdownHook();
        QueueSender queueSender = (QueueSender) Test.applicationContext.getBean("queueSender");
        queueSender.send("FIRST MSG");
        Test.applicationContext.close();
    }
}
Spring obviously throws a NoSuchBeanDefinitionException for queueSender.
Maybe it's basic, but please help me find a better way to create/access an instance of QueueSender and send a sample message to the queue.
Your help will get me started.
Thank you,
Nitin.
I noticed that I had the package name wrong in QueueSender. Try it again. If you have changed any of the packages, be sure to change the class names in the Spring XML to match your new package names.
Hi Benjamin,
Thank you for your instant reply :)
I had changed the package names and the xml, but found it was the classpath for the xml and a few "class not found" issues :)
It is working perfectly fine now; the receiver gets the message and the admin page of ActiveMQ shows the right count too.
Please let me know if I should get the instance of QueueSender in any better way at all :)
Thanks again,
Nitin
This is by far the best article on Integrating ActiveMQ and Spring ….
Nice, tidy and neat.
Thanks Benjamin
I have an issue with using CachingConnectionFactory and JmsTemplate on IBM WebSphere 6.1:
1) How to reconnect to MQ when a queue manager goes down and comes back.
2) How to use reconnectOnException on IBM WebSphere.
Any help is appreciated.
I don’t know the first thing about Websphere but I bet someone over at ActiveMQ can help you out.
What is the proper way to destroy an embedded broker with the VM transport? Tomcat logs the following when the application stops: the application appears to have started a thread named [ActiveMQ Task] but has failed to stop it. This is very likely to create a memory leak.
25.11.2010 13:49:23 org.apache.coyote.http11.Http11AprProtocol destroy
You should ask on the AMQ mailing list.
Thank you for this post, it has definitely helped me with my Spring JMS configuration.
One question though:
In the Spring Context XML, does the listener-container need a reference to the connection factory?
eg.
<jms:listener-container concurrency="10" connection-factory="connectionFactory">
thanks.
If I remember correctly if there is a connection factory object configured in spring with the name connectionFactory then it will be used by default. If you have named your connection factory something else then yes you need to give it a reference.
Hi, Benjamin
I am an absolute starter with ActiveMQ. The above code snippets need to be supported with some other code. Can you share the link to the full code repo for this example, which would contain all the config and properties files? Thanks in advance.
The ActiveMQ website has some great resources for beginners and I updated the Spring documentation there to reflect this blog post. Try starting with this. You might also find some other useful blog posts here or you can splurge and buy the just released book on ActiveMQ.
Hi, Benjamin
I am an absolute starter with ActiveMQ. I just used the above code in my project, but I find an error:
Unable to locate Spring NamespaceHandler for XML schema namespace []
For the 'Unable to locate Spring NamespaceHandler for XML schema namespace []' error, you're missing a dependency on Apache XBean. Depending on the version of ActiveMQ you're using, you will need to grab the appropriate version of xbean-spring here:
Here’s a quick mapping from memory for the last few versions:
ActiveMQ 5.5 => XBean 3.7
ActiveMQ 5.4.x => XBean 3.7
ActiveMQ 5.3.x => XBean 3.6
Hope that helps.
Bruce
Hello !
I am new to spring and amq
I used the example above. For a fast data transfer test, I used 100 threads to send 100,000 messages and opened 100 consumers to receive them. No problem the first time. Then I stopped all the threads; a few minutes later, I started the 100 threads again. I found that the ActiveMQ page showed Messages Enqueued increasing slowly, and after the threads completed, Messages Enqueued and Messages Dequeued showed only 1,000 messages, losing 99,000. Then I restarted ActiveMQ, and the 99,000 messages were displayed in the Number Of Pending Messages. So I think the second time all the messages had been sent to ActiveMQ,
but only 1/10 of the messages were displayed. I have set sessionCacheSize to 100. Can you help me analyze the reasons?
thanks
Try asking the AMQ mailing list.
Thanks for the post extremely useful
The configuration does not work out of the box for Spring 2.5.5; org.springframework.jms.connection.CachingConnectionFactory does not have the constructor used in the example. However, Spring 2.5.6 works.
Thanks for the tip!
Thank god some bloggers can write. Thank you for this piece of writing.
Thanks! I’m glad it was helpful.
I keep getting the following error:
Error creating bean with name 'queueSender': Unsatisfied dependency expressed through constructor argument with index 0 of type [org.springframework.jms.core.JmsTemplate]: Error creating bean with name 'jmsTemplate' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Cannot resolve reference to bean 'connectionFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'connectionFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: 1 constructor arguments specified but no matching constructor found in bean 'connectionFactory' (hint: specify index and/or type arguments for simple parameters to avoid type ambiguities)
Did anyone run into the same error?
@Sridhar above mentions that the xml only works with Spring 2.5.6 not 2.5.5. The issue is with the constructor for org.springframework.jms.connection.CachingConnectionFactory. I would check the javadoc for your version and make the necessary changes.
Hi Benjamin
I tried this and it works great in my system. I am getting a response in sync mode. But I see that the message consumer count keeps increasing forever. I tried switching to the ActiveMQ pooled connection factory too; that doesn't seem to solve my issue. Can you please advise?
Hey Shiva, it's been a couple of years now since I last used AMQ so I don't think I can be of much help. However, the AMQ user forum is very active and helpful.
Good article. I will be dealing with a few of these issues as well.
Hi,
I would like to know, from your experience, what you think of ActiveMQ compared to HornetQ, both integrated with Spring,
in terms of latency and throughput.
thanks,
I’m glad to see Spring integrates with both. When I looked at this a few years ago HornetQ had some compelling performance numbers but I found ActiveMQ to be a more welcoming community which was very important when things didn’t work as expected. Additionally, when I started the project, HornetQ wasn’t an option so I chose ActiveMQ by default. I haven’t looked at either of these in years so I cannot comment on their current performance but I’m glad there is competition. Apollo looks compelling as well and will eventually become ActiveMQ 6.0.
Hi, I configured it like this to maintain queue ordering, but there are still 2 concurrent listeners consuming the queue messages. Why, and how do I get rid of this?
Have you set concurrency=”1″ on the jms:listener-container?
In jms:listener-container, set the concurrency attribute to "1".
When the Spring application context is loaded, one listener is created; when a message comes to the queue, another listener is created. Hence the 2 listeners are consuming the messages, which leads to message disordering.
I haven’t used this in 3 years so I would ask the AMQ mailing list
I like this web site very much; so much great information. "Books are not made to be believed, but to be subjected to inquiry." by Umberto Eco.
Pingback: Active MQ – Producer / Consumer Spring | technicalpractical
Awesome article. It helped me a lot. Thanks. Keep posting!!! :) | http://codedependents.com/2009/10/16/efficient-lightweight-jms-with-spring-and-activemq/ | CC-MAIN-2014-52 | refinedweb | 3,658 | 57.06 |
Amazon Simple Queue Service (Amazon SQS) is a distributed, message-queue-oriented service.
Messages are queued into SQS; they are variable in size but can be no larger than 256 KB. SQS does not guarantee the delivery order of messages, and a message may be delivered more than once. Using the visibility timeout, we can ensure that once a message has been retrieved, it will not be resent for a given period of time.
In this tutorial, we'll see how to manage SQS queues and messages using boto3.
import boto3

# Connect to the SQS service resource
sqs = boto3.resource(
    'sqs',
    region_name=AWS_REGION_NAME,
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
In the above code, we connect to the SQS service resource in a given region with an access key id and secret key using boto3.
queue = sqs.create_queue(QueueName='testqueue', Attributes={'DelaySeconds': '5'})
We must give a queue name, and can also pass other attributes such as DelaySeconds (the number of seconds to wait before a delivered message may be processed) and MaximumMessageSize; read-only attributes such as ApproximateNumberOfMessages can be read back from the queue later.
It returns a unique queue URL through which we can access the queue and its messages.
After connecting to the service, we connect to an existing SQS queue by giving its name to the get_queue_by_name method.
queue = sqs.get_queue_by_name(QueueName=AWS_QUEUE_NAME)
In SQS, we can send single or bulk messages to a queue using the send_message and send_messages methods.
response = queue.send_message(
    MessageBody='message1',
    MessageAttributes={
        'Type': {
            # each attribute needs a DataType and a matching *Value field
            'DataType': 'String',
            'StringValue': 'greeting'
        }
    })
print(response)
It returns the message id and an MD5 of the message body for the generated message. We can also add user-defined attributes to an individual message.
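The MessageAttributes mapping shown above can be generated from plain Python values with a small helper. This is a sketch of ours, not part of boto3; the name build_attributes and the type mapping are assumptions:

```python
def build_attributes(values):
    """Format plain Python values into the SQS MessageAttributes shape.

    Strings map to the 'String' data type and ints/floats to 'Number';
    both use 'StringValue', since SQS transmits numbers as strings.
    """
    attrs = {}
    for name, value in values.items():
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            data_type = 'Number'
        else:
            data_type = 'String'
        attrs[name] = {'DataType': data_type, 'StringValue': str(value)}
    return attrs

print(build_attributes({'Type': 'greeting', 'Size': 20}))
```

The result can be passed directly as the MessageAttributes argument of send_message.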
response = queue.send_messages(
    Entries=[
        {
            'Id': '1',  # each entry needs a unique Id within the batch
            'MessageBody': 'message1',
            'MessageAttributes': {
                'Type': {'DataType': 'String', 'StringValue': 'greeting'}
            }
        },
        {
            'Id': '2',
            'MessageBody': 'message2',
            'MessageAttributes': {
                'Size': {'DataType': 'Number', 'StringValue': '20'}
            }
        }
    ])
print(response)
The response contains information about the successful and failed messages in the batch, under the 'Successful' and 'Failed' keys.
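Note that send_messages accepts at most 10 entries per call, so larger sets have to be split into batches. A minimal sketch (the chunked helper is our own, not a boto3 function):

```python
def chunked(entries, size=10):
    """Yield successive batches of at most `size` entries
    (SQS send_messages accepts up to 10 entries per call)."""
    for start in range(0, len(entries), size):
        yield entries[start:start + size]

entries = [{'Id': str(i), 'MessageBody': 'message%d' % i} for i in range(25)]
batches = list(chunked(entries))
print([len(batch) for batch in batches])  # → [10, 10, 5]
```

In use, each batch would be sent with queue.send_messages(Entries=batch), checking the 'Failed' key of every response.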
# Retrieve a batch of messages; optionally ask for named attributes
message = queue.receive_messages()[0]
message = queue.receive_messages(MessageAttributeNames=['Type'])[0]
SQS messages are processed in batches; we can retrieve all messages in a batch or request particular message attributes by name, as above. Under the hood the service responds in XML, but boto3 parses that for us: each Message object exposes the body, message_attributes and receipt_handle needed to process it. (If you call the raw HTTP API instead of boto3, the xmltodict package can convert the XML response to a dictionary.)
# Client-style delete: needs the queue URL and the receipt handle
# that came back with the received message (not the message id)
client = sqs.meta.client
response = client.delete_message(
    QueueUrl=url,
    ReceiptHandle=RECEIPT_HANDLE)
# Resource-style batch delete
response = queue.delete_messages(
    Entries=[
        {
            'Id': MESSAGE_ID,
            'ReceiptHandle': RECEIPT_HANDLE
        },
    ])
Here we give each entry an id and, crucially, the receipt handle returned when the message was received; it is the receipt handle, not the message id or body, that identifies the message to be deleted.
response = client.delete_queue( QueueUrl=url )
When you delete a queue, you must wait at least 60 seconds before creating a new queue with the same name.
How Companies are Reinventing their Idea to Launch Methodologies
How Companies are Reinventing their Idea to Launch Methodologies
By Robert G. Cooper
This article appeared in Research-Technology Management, March-April 2009, Vol. 52, No. 2.
Stage-Gate International. Stage-Gate is a registered trademark of Stage-Gate Inc. Innovation Performance Framework is a trademark of Stage-Gate Inc.
HOW COMPANIES ARE REINVENTING THEIR IDEA TO LAUNCH METHODOLOGIES

Next-generation Stage-Gate systems are proving more flexible, adaptive and scalable.

Robert G. Cooper

OVERVIEW: The Stage-Gate system introduced in the mid-1980s has helped many firms drive new products to market. But leaders have adjusted and modified the original model considerably and built in many new best practices. They have made the system more flexible, adaptive and scalable; they have built in better governance; integrated it with portfolio management; incorporated accountability and continuous improvement; automated the system; bolted on a proactive front-end or discovery stage; and finally, adapted the system to include open innovation. All of these improvements have rendered the system faster, more focused, more agile and leaner, and far better suited to today's rapid pace of product innovation.

KEY CONCEPTS: Stage-Gate, next-generation Stage-Gate, idea-to-launch process, best practices.

The Stage-Gate process has been widely adopted as a guide to drive new products to market (1,2). The original Stage-Gate model, introduced in the mid-1980s, was based on research that focused on what successful project teams and businesses did when they developed winning new products.

Robert Cooper is emeritus professor at the DeGroote School of Business, McMaster University, Hamilton, Ontario, Canada. He is also ISBM Distinguished Research Scholar at Penn State's Smeal College of Business Administration and president of the Product Development Institute. A researcher in the field of innovation management, he is a Fellow of the Product Development Management Association, and creator of the Stage-Gate new product process used by many firms. He received his Ph.D. in business administration from the University of Western Ontario.

Using the analogy of North American
football, Stage-Gate is the playbook that the team uses to drive the ball down the field to a touchdown; the stages are the plays, and the gates are the huddles. The typical Stage-Gate system is shown in Figure 1 for major product development projects. With so many companies using the system, invariably some firms began to develop derivatives and improved approaches; indeed, many leading firms have built in dozens of new best practices, so that today's stage-and-gate processes are a far cry from the original model of 20 years ago. Here are some of the ways that companies have modified and improved their idea-to-launch methods as they have evolved to the next-generation Stage-Gate system (3).

Focus on Effective Governance

Making the gates work

Perhaps the greatest challenge that users of a stage-and-gate process face is making the gates work. "As go the gates, so goes the process," declared one executive, noting that the gates in her company's process were ineffectual. In a robust gating system, poor projects are spotted early and killed; projects in trouble are also detected and sent back for rework or redirect (put back on course). But as quality control check points, the gates aren't effective in too many companies; gates are rated one of the weakest areas in product development, with only 33 percent of firms having tough, rigorous gates throughout the idea-to-launch process (4).

Gates with teeth

A recurring problem is that gates are either non-existent or lack teeth. The result is that, once underway, projects are rarely killed at gates. Rather, as one senior manager exclaimed, "Projects are like express trains, speeding down the track, slowing down at the occasional station [gate], but never stopping until they reach their ultimate destination, the marketplace."
Figure 1. Many firms use a Stage-Gate system to drive development projects to commercialization. Shown here is a five-stage, five-gate process typically used for major new product projects. Such a model provides a guide to project teams, suggesting best-practice activities within stages, and defining essential information or deliverables for each gate. Gatekeepers meet at gates to make the vital Go/Kill and resource commitment decisions.

Example: In one major high-tech communications equipment manufacturer, once a project passes Gate 1 (the idea screen), it is placed into the business's product roadmap. This means that the estimated sales and profits from the new project are now integrated into the business unit's financial forecast and plans. Once into the financial plan of the business, of course, the project is locked-in: there is no way that the project can be removed from the roadmap or killed. In effect, all gates after Gate 1 are merely rubber stamps. Management in this firm missed the point that the idea-to-launch process is a funnel, not a tunnel, and that gates after Gate 1 are also Go/Kill points; this should not be a one-gate, five-stage process!

In too many firms, like this example, after the initial Go decision, the gates amount to little more than a project update meeting or a milestone check-point. As one executive declared: "We never kill projects, we just wound them!" Thus, instead of the well-defined funnel that is so often used to shape the new product process, one ends up with a tunnel where everything that enters comes out the other end, good projects and bad. Yet management is deluded into believing they have a functioning Stage-Gate process. In still other companies, the gate review meeting is held and a Go decision is made, but resources are not committed.
Somehow management fails to understand that approval decisions are rather meaningless unless a check is cut and the project leader and team leave the gate meeting with the resources they need to advance their project. Instead, projects are approved, but resources are not: a hollow Go decision, and one that usually leads to too many projects in the pipeline and projects taking forever to get to market. If "gates without teeth" and "hollow gates" describe your company's gates, then it's time for a rethink. Gates are not merely project review meetings or milestone checks! Rather, they are Go/Kill and resource allocation meetings: Gates are where senior management meets to decide whether the company should continue to invest in the project based on the latest information, or to cut one's losses and bail out of a bad project. And gates are a resource commitment meeting where, in the event of a Go decision, the project leader and team receive a commitment of resources to move their project forward.

Example (5): Cooper Standard Automotive (no relation to the author) converted its gates into "decision factories." Previously management had failed to make many Kill decisions, with most gates merely automatic Go's. The result was a gridlocked pipeline with over 50 major projects, an almost-infinite time-to-market, and no or few launches. By toughening the gate meetings, making them rigorous senior management reviews with solid data available, and forcing more kills, management
dramatically reduced the number of projects passing each gate. The result was a reduction today to eight major, high-value projects, time-to-market down to 1.6 years, and five major launches annually.

Leaner and simpler gates

Most companies' new product processes suffer from far too much paperwork delivered to the gatekeepers at each gate. Deliverables overkill is often the result of a project team that, because they are not certain what information is required, prepare an overly comprehensive report, and in so doing, attempt to bullet-proof themselves. The fault can also be the design of the company's idea-to-launch system itself, which often includes elaborate templates that must be filled out for every gate. While some of the information that gating systems demand may be interesting, often much of it is not essential to the gate decision. Detailed explanations of how the market research was done or sketches of what the new molecule looks like add no value to the decision. Restrict the deliverables and their templates to the essential information needed to make the gate decisions:

Example (6): Lean gates is a positive feature of Johnson & Johnson Ethicon Division's Stage-Gate process. Previously, the gate deliverables package was a 30-to-90-page presentation, and a lot of work for any project team to prepare. Today, it's down to the bare essentials: one page with three back-up slides. The expectation is that gatekeepers arrive at the gate meeting knowing the project, having read and understood the deliverables package prepared by the project team (the gate meeting is not an educational session to bring a poorly prepared gatekeeping group up to speed). Senior management is simply informed at the gate review about the risks and the commitments required. Finally, there is a standardized presentation format. The result is that weeks of preparation work have been saved.
Example (7): One of the compelling features of Procter & Gamble's latest release of SIMPL (its Stage-Gate process) is much leaner gates, a simpler SIMPL. Previously, project teams had decided which deliverables they would prepare for gatekeepers. Desirous of showcasing their projects and themselves, the resulting deliverables package was often very impressive but far too voluminous. As one astute observer remarked, it was the corporate equivalent of "publish or perish." The deliverables package included up to a dozen detailed attachments, plus the main report. In the new model, the approach is to view the gates from the decision-makers' perspective. In short, what do the gatekeepers need to know in order to make the Go/Kill decision? The gatekeepers' requests boiled down to three key items:

- Have you done what you should have: are the data presented based on solid work?
- What are the risks in moving forward?
- What are you asking for?

Now the main gate report is no more than two pages, and there are four required attachments, most kept to a limit of one page. The emphasis in lean gates is on making expectations clear to project teams and leaders that they are not required to prepare an information dump for the gatekeepers. The principles are that:

- Information has a value only to the extent it improves a decision; and
- The deliverables package should provide the decision-makers only that information they need to make an effective and timely decision.

Page restrictions, templates with text and field limits, and solid guides are the answer favored by progressive firms.

Who are the gatekeepers?

Many companies also have trouble defining who the gatekeepers are. Every senior manager feels he or she should be a gatekeeper, and so the result is too many gatekeepers, more of a herd than a tightly-defined decision group, and a lack of crisp Go/Kill decisions.
Defining governance roles and responsibilities is an important facet of Stage-Gate. At gates, the rule is simple: The gatekeepers are the senior people in the business who own the resources required by the project leader and team to move forward. For major new product projects, the gatekeepers should be a cross-functional senior group, the heads of Technical, Marketing, Sales, Operations and Finance (as opposed to just one function, such as Marketing or R&D, making the call). Because resources are required from
many departments, the gatekeeper group must involve executives from these resource-providing areas so that alignment is achieved and the necessary resources are in place. Besides, a multi-faceted view of the project leads to better decisions than a single-functional view. And because senior people's time is limited, consider beginning with mid-management at Gate 1, and for major projects, ending up with the leadership team of the business at Gates 3, 4 and 5 in Figure 1. For smaller, lower-risk projects, a lower-level gatekeeping group and fewer gates usually suffices.

Fostering the right behavior

A recurring complaint concerns the behavior of senior management when in the role of gatekeepers. Some of the bad gatekeeping behaviors consistently seen include:

- Executive pet projects receiving special treatment and by-passing the gates (perhaps because no one had the courage to stand up to the wishes of a senior person, a case of "the emperor wearing no clothes").
- Gate meetings cancelled at the last minute because the gatekeepers are unavailable (yet they complain the loudest when projects miss milestones).
- Gate meetings held, but decisions not made and resources not committed.
- Key gatekeepers missing the meeting and not delegating their authority to anyone.
- Gate meeting decisions by executive edict, the assumption that one person knows all.
- Using personal Go/Kill criteria (rather than robust and transparent decision-making criteria).

Gatekeepers are members of a decision-making team. And decision teams need rules of engagement. Senior people often implement Stage-Gate in the naïve belief that it will shake up the troops and lead to much different behavior in the ranks. But quite the opposite is true: the greatest change in behavior takes place at the top!
The leadership team of the business must take a close look at their own behaviors, often far from ideal, and then craft a set of gatekeeper rules of engagement and commit to live by these. Table 1 lists a typical set.

Portfolio Management Built In

Portfolio management should dovetail with your Stage-Gate system (8). Both decision processes are designed to make Go/Kill and resource allocation decisions, and hence ideally should be integrated into a unified system. There are subtle differences between portfolio management and Stage-Gate, however:

- Gates are an evaluation of individual projects, in depth and one-at-a-time. Gatekeepers meet to make Go/Kill and resource allocation decisions on an on-going basis (in real time) and from beginning to end of a project (Gate 1 to Gate 5 in Figure 1).
- By contrast, portfolio reviews are more holistic, looking at the entire set of projects, but obviously less in-depth per project than gates do. Portfolio reviews two to four times per year are the norm (9). They deal with such issues as achieving the right mix and balance
Gatekeepers should base their decisions on the information presented and use the scoring criteria. Decisions must be based on facts, not emotion and gut feel! A decision must be made the day of the gate meeting (Go/Kill/Hold/Recycle). The project team must be informed of the decision face to face, and reasons why. When resource commitments are made by gatekeepers (people, time or money), every effort must be made to ensure that these commitments are kept. Gatekeepers must accept and agree to abide by these Rules of the Game. 50 Research. Technology Management
6 of projects in the portfolio, project prioritization, and whether the portfolio is aligned with the business s strategy. Besides relying on traditional financial criteria, here are methods that companies use to improve portfolio management within Stage-Gate ( 10 ); 1. Strategic buckets to achieve the right balance and mix of projects The business s product innovation and technology strategy drives the decision process and helps to decide resource allocation and strategic buckets. Using the strategic buckets method, senior management makes a priori strategic choices about how they wish to spend their R&D resources. The method is based Table 2. A Typical Scorecard for Gate 3, Go to Development: An Effective Tool for Rating Projects ( 10 ). Factor 1: Strategic Fit and Importance Alignment of project with our business s strategy. Importance of project to the strategy. Impact on the business. Factor 2: Product and Competitive Advantage Product delivers unique customer or user benefits. Product offers customer/user excellent value for money (compelling value proposition). Differentiated product in eyes of customer/user. Positive customer/user feedback on product concept (concept test results). Factor 3: Market Attractiveness Market size. Market growth and future potential. Margins earned by players in this market. Competitiveness - how tough and intense competition is (negative). Factor 4: Core Competencies Leverage Project leverages our core competencies and strengths in: technology production/operations marketing distribution/sales force. Factor 5: Technical Feasibility Size of technical gap (straightforward to do). Technical complexity (few barriers, solution envisioned). Familiarity of technology to our business. Technical results to date (proof of concept). Factor 6: Financial Reward versus Risk Size of financial opportunity. Financial return (NPV, ECV, IRR). Productivity Index (PI). Certainty of financial estimates. Level of risk and ability to address risks. 
Projects are scored by the gatekeepers (senior management) at the gate meeting, using these six factors on a scorecard (0 10 scales). The scores are tallied and displayed electronically for discussion. The Project Attractiveness Score is the weighted or unweighted addition of the six factor scores (averaged across gatekeepers), and taken out of 100. A score of 60/100 is usually required for a Go decision. on the premise that strategy becomes real when you start spending money. So make those spending decisions! Most often, resource splits are made across project types (new products, improvements, cost reductions, technology developments, etc.), by market or business area, by technology (base, pacing, embryonic) or by geography. Once these splits are decided each year, projects and resources are tracked. Pie charts reveal the actual split in resource (year to date) versus the target split based on the strategic choices made. These pie charts are reviewed at portfolio reviews to ensure that resource allocation does indeed mirror the strategic priorities of the business. The method has proven to be an effective way to ensure that the right balance and mix of projects is achieved in the development pipeline that the pipeline is not overloaded with small, short-term and low-risk projects. 2. Scorecards to make better Go/Kill and prioritization decisions Scorecards are based on the premise that qualitative criteria or factors are often better predictors of success than financial projections. The one thing we are sure of in product development is that the numbers are always wrong, especially for more innovative and step-out projects. In use, management develops a list of about 6 8 key criteria, known predictors of success (Table 2). Projects are then scored on these criteria right at the gate meeting by senior management. 
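The tally just described (each gatekeeper scores six factors on 0-10 scales, the scores are averaged across gatekeepers, optionally weighted, and expressed out of 100, with roughly 60/100 needed for a Go) is easy to sketch. The following Python is an illustrative sketch only, not the article's actual tool; the shortened factor keys, weights and ratings are invented:

```python
# Illustrative sketch of the Gate 3 scorecard tally (not the article's
# actual tool). Factor keys are shortened from Table 2; ratings invented.

FACTORS = [
    "strategic_fit", "product_advantage", "market_attractiveness",
    "competency_leverage", "technical_feasibility", "reward_vs_risk",
]

def attractiveness_score(ratings_by_gatekeeper, weights=None):
    """Average each factor's 0-10 score across gatekeepers, apply optional
    weights, and express the total out of 100."""
    if weights is None:
        weights = {f: 1.0 for f in FACTORS}          # unweighted case
    total_weight = sum(weights[f] for f in FACTORS)
    weighted = 0.0
    for f in FACTORS:
        avg = sum(r[f] for r in ratings_by_gatekeeper) / len(ratings_by_gatekeeper)
        weighted += weights[f] * (avg / 10.0)        # normalise each factor to 0-1
    return 100.0 * weighted / total_weight           # Project Attractiveness Score

ratings = [{f: 7 for f in FACTORS},   # gatekeeper 1
           {f: 5 for f in FACTORS}]   # gatekeeper 2
score = attractiveness_score(ratings)
print(f"{score:.0f}/100 ->", "Go" if score >= 60 else "Hold/Kill")  # prints: 60/100 -> Go
```

Because the score is averaged across gatekeepers rather than taken from any one vote, a single dissenting rating moves the total only modestly, which matches the article's emphasis on group decisions at the gate.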
The total score becomes a key input into the Go/Kill gate decision and, along with other factors, is used to rank or prioritize projects at portfolio review meetings. A number of firms (for example, divisions at J&J, P&G, Emerson Electric and ITT Industries) use scorecards for early-stage screening (for Gates 1, 2 and 3 in Figure 1). Note that different scorecards and criteria are used for different types of projects.

3. Success criteria at gates. Another project selection method for use at gates, and one employed with considerable success at firms such as P&G, is the use of success criteria: specific success criteria for each gate, relevant to that stage, are defined for each project. Examples include expected profitability, launch date, expected sales, and even interim metrics, such as test results expected in a subsequent stage. These criteria, and the targets to be achieved on them, are agreed to by the project team and management at each gate. These success criteria are then used to evaluate the project at successive gates (11). For example, if the project's estimates fail on any agreed-to criteria at successive gates, the project could be killed.
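As a concrete illustration of how such agreed success criteria might be checked at a later gate, here is a minimal, hypothetical sketch; the criterion names, targets and estimates below are invented, not taken from the article:

```python
# Hypothetical sketch of re-checking a project's updated estimates
# against success criteria agreed at an earlier gate. All values invented.

def gate_check(agreed, estimates):
    """Return the list of criteria whose current estimates fail the
    agreed targets; an empty list means the project still passes."""
    failures = []
    for criterion, (target, higher_is_better) in agreed.items():
        value = estimates[criterion]
        ok = value >= target if higher_is_better else value <= target
        if not ok:
            failures.append(criterion)
    return failures

agreed = {
    "year1_sales_musd": (10.0, True),    # must reach at least $10M
    "months_to_launch": (18, False),     # must launch within 18 months
}
estimates = {"year1_sales_musd": 12.5, "months_to_launch": 22}
print(gate_check(agreed, estimates))     # -> ['months_to_launch']
```

A failure on any agreed criterion would then trigger exactly the discussion the article describes: the project could be killed, or the criteria renegotiated with management at the gate.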
4. The Productivity Index helps prioritize projects and allocate resources. This is a powerful extension of the NPV (net present value) method and is most useful at portfolio reviews to prioritize projects when resources are constrained. The Productivity Index is a financial approach based on the theory of constraints (12): in order to maximize the value of your portfolio subject to a constraining resource, take the factor that you are trying to maximize (e.g., the NPV) and divide it by your constraining resource, for example the person-days (or costs) required to complete the project:

Productivity Index (PI) = Forecasted NPV / Person-Days to Complete Project
(or: PI = Forecasted NPV / Cost to Complete Project)

Then rank your projects according to this index until you run out of resources. Those projects at the top of the list are Go projects, are resourced, and are accelerated to market. Those projects beyond the resource limit are placed on hold or killed. The method is designed to maximize the value of your development portfolio while staying within your resource limits.

Make the System Lean, Adaptive, Flexible and Scalable

A leaner process. Over time, most companies' product development processes have become too bulky, cumbersome and bureaucratic. Thus, smart companies have borrowed the concept of value stream analysis from lean manufacturing and applied it to their new product process in order to remove waste and inefficiency. A value stream is simply the connection of all the process steps with the goal of maximizing customer value (13). In NPD, a value stream represents the linkage of all value-added and non-value-added activities associated with the creation of a new product. The value stream map is used to portray the value stream, or product development process, and helps to identify both value-added and non-value-added activities; hence, it is a useful tool for improving your process (14).
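The Productivity Index ranking described above (rank projects by NPV per unit of the constraining resource, then fund from the top until the resource runs out) can be sketched in a few lines. The project data below is invented for illustration:

```python
# Illustrative sketch of Productivity Index ranking under a resource
# constraint. Project names, NPVs and person-day figures are invented.

def rank_by_pi(projects, capacity_person_days):
    """Sort projects by PI (forecasted NPV / person-days to complete)
    and fund them in order until the constraining resource runs out."""
    ranked = sorted(projects,
                    key=lambda p: p["npv"] / p["person_days"],
                    reverse=True)
    go, hold, remaining = [], [], capacity_person_days
    for p in ranked:
        if p["person_days"] <= remaining:
            go.append(p["name"])              # resourced and accelerated
            remaining -= p["person_days"]
        else:
            hold.append(p["name"])            # beyond the resource limit
    return go, hold

projects = [
    {"name": "A", "npv": 12.0, "person_days": 300},   # PI = 0.040
    {"name": "B", "npv": 8.0,  "person_days": 100},   # PI = 0.080
    {"name": "C", "npv": 5.0,  "person_days": 250},   # PI = 0.020
]
go, hold = rank_by_pi(projects, capacity_person_days=400)
print(go, hold)   # B and A fit within 400 person-days; C is placed on hold
```

Note the effect the article is after: project A has the largest NPV, but B ranks first because it creates more value per person-day of the scarce resource.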
In employing value stream analysis, a task force creates a map of the value stream (your current idea-to-launch process) for typical development projects in your business. All the stages, decision points and key activities are mapped out, with time ranges for each activity and decision indicated. Once the value stream is mapped, the task force lowers the microscope on the process and dissects it. All procedures, activities and tasks, required deliverables, documents and templates, committees and decision processes are examined, looking for problems, time-wasters and non-value-added activities. Once these are spotted, the task force works to remove them.

Example: In one B2B company, field trials were found to be a huge time-waster, taking as much as 18 months and often having to be repeated because they failed. A value stream analysis revealed this unacceptable situation, and a subsequent root cause analysis showed that there were huge delays, largely because field trials could be done only when the customer undertook a scheduled plant shut-down (in this case of a paper machine costing in excess of $100 million); further, there was little incentive for the customer to agree to a field trial, especially one that did not work. The lack of early involvement of technical people (the first phases of the project were handled largely by sales and business development people) meant that technical issues were often not understood until too late in the project, after commitments had been made to the customer. Solutions were sought and included: first field trials on a pilot paper machine (several universities rented out time on these in their pulp and paper institutes); involving technical people from the beginning of the project; and offering the customer incentives such as limited exclusivity and preferential pricing.
Value stream analysis can result in leaner gates, a topic mentioned earlier, but it goes well beyond gates and looks for efficiency improvements in all facets of the process, as noted in the example. The result of a solid value stream analysis invariably is a much more streamlined, less bulky idea-to-launch system.

An adaptable, agile process. Stage-Gate has also become a much more adaptable innovation process, one that adjusts to changing conditions and fluid, unstable information. The concept of spiral or agile development is built in, allowing project teams to move rapidly to a final product design through a series of build-test-feedback-and-revise iterations (15). Spiral development bridges the gap between the
need for sharp, early and fact-based product definition before development begins and the need to be flexible, adjusting the product's design to new information and fluid market conditions as development proceeds. Spiral development allows developers to continue to incorporate valuable customer feedback into the design even after the product definition is locked in before going into Stage 3. Spiral development also deals with the need to get mock-ups in front of customers earlier in the process (in Stage 2, rather than waiting until Stage 3).

A flexible process. Stage-Gate is a flexible guide that suggests best practices, recommended activities and likely deliverables. No activity or deliverable is mandatory. The project team has considerable discretion over which activities they execute and which they choose not to do. The project team presents its proposed go-forward plan (what needs to be done to make the project a success) at each gate. At these gates, the gatekeepers commit the necessary resources and, in so doing, approve the go-forward plan. But note that it is the project team's plan, not simply a mechanistic implementation of a standardized process.

Another facet of flexibility is simultaneous execution. Here, key activities and even entire stages overlap, not waiting for perfect information before moving forward. For example, it is acceptable to move activities from one stage to an earlier one and, in effect, overlap stages.

Example: At Toyota, the rule is to synchronize processes for simultaneous execution (16). Truly effective concurrent engineering requires that each subsequent function maximize the utility of the stable information available from the previous function as it becomes available. That is, development teams must do the most they can with only that portion of the design data that is not likely to change. Each function's processes are designed to move forward simultaneously, building on stable data as they become available.
Simultaneous execution usually adds risk to a project. For example, the decision to purchase production equipment before field trials are completed, thereby avoiding a long order lead-time, may be a good application of simultaneous execution. But there is risk, too, that the project may be cancelled after dedicated production equipment is purchased. Thus, the decision to overlap activities and stages is a calculated risk, but it must be calculated: the cost of delay must be weighed against the cost and probability of being wrong.

Scaled to suit different risks. Stage-Gate has become a scalable process, scaled to suit very different types and risk levels of projects: from very risky and complex platform developments through to lower-risk extensions and modifications, and even to handle simple sales-force requests (17). When first implemented, there was only one version of Stage-Gate in a company, typically a five-stage, five-gate model. But some projects were too small to put through the full five-stage model, and so circumvented it. The problem was that these smaller projects (line extensions, modifications, sales-force requests), while individually small, collectively consumed the bulk of resources. Thus, a contradictory situation existed whereby projects that represented the majority of development resources were outside the system. Each of these projects, big and small, has risk, consumes resources, and thus must be managed, but not all need to go through the full five-stage process. The process has thus morphed into multiple versions to fit business needs and to accelerate projects. Figure 2 shows some examples: Stage-Gate XPress for projects of moderate risk, such as improvements, modifications and extensions; and Stage-Gate Lite for very small projects, such as simple customer requests.

Multiple versions for platform/technology development projects. There is no longer just Stage-Gate for new product projects.
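The "calculated risk" of overlapping stages (weighing the cost of delay against the cost and probability of being wrong) can be framed as a simple expected-cost comparison. This is a hypothetical framing, not a formula from the article, and all figures below are invented:

```python
# Hypothetical expected-cost framing of the overlap decision
# (e.g., ordering production equipment before field trials finish).
# All dollar figures and probabilities are invented for illustration.

def overlap_is_worthwhile(cost_of_delay, cost_if_wrong, p_wrong):
    """Overlapping pays off when the expected loss from being wrong is
    smaller than the (certain) cost of waiting for perfect information."""
    expected_loss = p_wrong * cost_if_wrong
    return expected_loss < cost_of_delay

# Waiting six months costs $2.0M in lost sales; scrapping the dedicated
# equipment would cost $1.5M, and the trial fails about 20% of the time.
print(overlap_is_worthwhile(cost_of_delay=2.0e6,
                            cost_if_wrong=1.5e6,
                            p_wrong=0.20))   # prints: True
```

Under these invented numbers the expected loss ($0.3M) is far below the cost of delay ($2.0M), so overlapping the stages is the calculated choice; with a cheap delay or a likely failure, the same comparison says wait.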
Other types of projects (platform developments, process developments, or exploratory research projects) compete for the same resources, need to be managed, and thus also merit their own version of a stage-and-gate process. For example, ExxonMobil Chemical has designed a three-stage, three-gate version of its Stage-Gate process to handle upstream research projects (18), while numerous other organizations have adopted a four-stage, four-gate system to handle fundamental research, technology development or platform projects (more on this topic later).

Add a Robust Post-Launch Review (19)

- Having performance metrics in place that measure how well a specific new product project performed. For example, were the product's profits on target? Was it launched on time?
- Establishing team accountability for results, with all members of the project team fully responsible for performance results when measured against these metrics.
- Building in learning and improvement: namely, when the project team misses the target, focus on fixing the cause rather than putting a band-aid on the symptom or, worse yet, punishing the team.

Example (20): At Emerson Electric, traditional post-launch reviews were absent in most divisions' new product efforts. But in the new release of Emerson's idea-to-launch process (NPD 2.0), a post-launch review is very evident. Here, project teams are held accountable for key financial and time metrics that were established and agreed to much earlier in the project. When gaps or deficiencies between forecasts and reality are identified, root causes for these variances are sought and continuous improvement takes place. Emerson benefits in three ways. First, estimates of sales, profits and time-to-market are much more realistic now that project teams are held accountable for their attainment. Second, with clear objectives, the project team can focus and work diligently to achieve them; expectations are clear. Finally, if the team misses the target, causes are sought and improvements to the process are made so as to prevent a recurrence of the cause: closed-loop feedback and learning.

It works much the same way at Procter & Gamble (21): winning in the marketplace is the goal. In many firms, too much emphasis is on getting through the process, that is, getting one's project approved or preparing deliverables for the next gate. In the past, P&G was no different. By contrast, this principle emphasizes winning in the marketplace as the goal, not merely going through the process.
Specific success criteria for each project are defined and agreed to by the project team and management at the gates; these success criteria are then used to evaluate the project at the post-launch review, and the project team is held accountable for achieving results when measured against these success criteria.

Figure 2. The next-generation Stage-Gate is scalable. Use Stage-Gate Full, XPress and Lite for different project types. Major new product projects go through the full five-stage process (top); moderate-risk projects (extensions, modifications and improvements) use the XPress version (middle); and sales-force and marketing requests (very minor changes) use the Lite process (bottom).
Example (22): EXFO Engineering boasts a solid Stage-Gate system coupled with a strong portfolio management process. EXFO has added an additional gate to its process, Gate 5, whose purpose is to ensure the proper closing of the project (Launch is Gate 4.1 in this company's numbering scheme). At this final gate meeting, management ascertains that all the outstanding issues (manufacturing, quality, sales ramp-up, and project) have been addressed and closed. Feedback is presented based on a survey of initial customers; the project postmortem is reviewed, which highlights the project's good and bad points; and the recommendations for improvement from the team are examined. Typically, Gate 5 occurs about three months after initial product delivery to customers. Additionally, sales performance and profitability (ROI) of the project are monitored for the first two years of the product's life.

Build In a Discovery Stage to Feed the Innovation Funnel

Feeding the innovation funnel with a steady stream of new product ideas and opportunities has become the quest in many companies as they search for the next blockbuster new product. Traditionally, the idea has been shown as a light-bulb at the beginning of the new product process, with ideas assumed to happen magically or serendipitously. No longer. Now, progressive firms such as P&G, Swarovski AG, ITT Industries, and Emerson Electric have replaced the light-bulb with a new and proactive Stage 0 called Discovery (see Figure 1). Discovery encompasses some of the following activities:

Fundamental research and technology development. Organizations like ExxonMobil Chemical, Timex, Donaldson, and Sandia Labs recognize that technology development projects, where the deliverable is new knowledge, a new technical capability, or even a technology platform, are quite different in terms of risk, uncertainty, scope, and cost from the typical new product project found in the Stage-Gate model of Figure 1.
Moreover, these technology development projects are often the platform that spawns a number of new product (or new process) development projects, and hence act as a trigger or feed to the new product process. Thus, such organizations have modified the front end of their Stage-Gate process and, in effect, bolted on a technology development process that then feeds the new product process, as shown in Figure 3 (23). The Stage-Gate TD process is technologically driven and features quite different stages, with more opportunity for experimentation and iterating back, and the system relies on less financial and more strategic Go/Kill criteria at the gates.

Other Discovery stage elements. In addition to technology development projects, progressive firms have redefined Discovery to include many other ideation activities, including:

- Voice-of-customer methods, such as ethnographic research (24), site visits with depth interviews, customer focus groups to identify customer points of pain, and lead-user analysis (25).
- Strategically driven ideation, including crafting a product innovation strategy for the business in order to delineate the search fields for ideation, exploiting disruptive technologies (26), peripheral visioning (27), competitive analysis, and patent mining.
- Stimulating internal ideation, such as installing elaborate systems to capture, incubate and enhance internal ideas from employees, much as Swarovski has done (28).
- Open innovation as a source of external ideas, outlined next.

Make Your Process an Open System

Stage-Gate now accommodates open innovation, handling the flow of ideas, IP, technology, and even fully developed products into the company from external sources, and also the flow outward (29).
Kimberly-Clark, Air Products & Chemicals, P&G, and others have modified their Stage-Gate processes, building in the necessary flexibility, capability and systems, in order to enable this network of partners, alliances and vendors from idea generation right through to launch. For example, P&G's SIMPL 3.0 version of its system is designed to handle externally derived ideas, IP, technologies, and even fully developed products. Innovation via partnering with external firms and people has been around for decades: joint ventures, venture groups, licensing arrangements and even venture nurturing. Open innovation is simply a broader concept that includes not only these traditional partnering models, but all types of collaborative or partnering activities, and with a wider range of partners than in the past.
Figure 3. The Technology Development Process handles fundamental science, technology development, and technology platform projects. It typically spawns multiple commercial projects which feed the new product process at Gates 1, 2 or 3. Note that the TD Process (top) is very flexible: it is iterative and features loops within stages and potentially to previous stages. Gates rely less on financial criteria and more on strategic criteria (23).

In the traditional or closed innovation model, inputs come from internal and some external sources: customer inputs, marketing ideas, marketplace information, or strategic planning inputs. Then, the R&D organization proceeds with the task of inventing, evolving and perfecting technologies for further development, immediately or at a later date (30). By contrast, in open innovation, companies look inside-out and outside-in across all three aspects of the innovation process: ideation, development and commercialization. In doing so, much more value is created and realized throughout the process (see Figure 1):

- Discovery stage: Here, companies look externally not only for customer problems to be solved or unmet needs to be satisfied, but also to inventors, start-ups, small entrepreneurial firms, partners, and other sources of available technologies that can be used as a basis for internal or joint development.
- Development stage: Established companies seek help in solving technology problems from scientists outside the corporation, or they acquire external innovations that have already been productized. They also out-license internally developed intellectual property that is not being utilized.
- Launch or commercialization stage: Companies sell or out-license already-developed products where more value can be realized elsewhere; or they in-license (acquire) already-commercialized products that provide immediate sources of new growth for the company.
Automate Your Stage-Gate System

Progressive companies recognize that automation greatly increases the effectiveness of their new product processes. With automation, everyone from project leaders to executives finds the process much easier to use, thereby enhancing buy-in. Another benefit is information management: the key participants have access to effective displays of the relevant information they need to advance the project, cooperate globally with other team members on vital tasks, help make the Go/Kill decision, or stay on top of a portfolio of projects. Examples of certified automation software for Stage-Gate are found in Ref. 31.

The Path Forward

This article has outlined new approaches that firms have built into their next-generation Stage-Gate systems. If your idea-to-launch system is more than five years old, if it's burdened with too much make-work and bureaucracy, or if it's getting a bit creaky and cumbersome, the time is ripe for a serious overhaul. Design
your innovation process for today's innovation requirements: a faster, leaner, more agile, and more focused system. Reinvent your process to build in the latest thinking, approaches and methods outlined above and move to the next-generation Stage-Gate system.

References and Notes

1. Stage-Gate is a registered trademark of the Product Development Institute Inc., and the term was coined by the author.
2. PDMA and APQC studies show that about 70 percent of product developers in North America use a Stage-Gate or similar system. See: The PDMA Foundation's 2004 Comparative Performance Assessment Study (CPAS), Product Development & Management Association, Chicago, IL. Also: Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt, New Product Development Best Practices Study: What Distinguishes the Top Performers. Houston: APQC (American Productivity & Quality Center).
3. Parts of this article are based on previous publications by the author. See: Cooper, R.G. and S.J. Edgett, Lean, Rapid and Profitable New Product Development, Product Development Institute, www.stage-gate.com, 2005; Cooper, R.G., The Stage-Gate Idea-to-Launch Process: Update, What's New and NexGen Systems, J. Product Innovation Management 25, 3, May 2008; and Cooper, R.G., NexGen Stage-Gate: What Leading Companies Are Doing to Re-Invent Their NPD Processes, PDMA Visions, XXXII, No. 3, Sept. 2008.
4. See the APQC study, ref. 2; also: Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt, Benchmarking Best NPD Practices-2: Strategy, Resources and Portfolio Management Practices, Research-Technology Management 47, 3, May-June 2004.
5. Osborne, S. Make More and Better Product Decisions For Greater Impact. Proceedings, Product Development and Management Association Annual International Conference, Atlanta, GA, Oct.
6. Belair, G. Beyond Gates: Building the Right NPD Organization. Proceedings, First International Stage-Gate Conference, St. Petersburg Beach, FL, Feb.
7. Private discussions with M. Mills at P&G; used with permission.
8. Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt. Optimizing the Stage-Gate Process: What Best Practice Companies Do, Part II. Research-Technology Management 45, 6, Nov.-Dec.
9. Edgett, S. (subject matter expert). Portfolio Management: Optimizing for Success. Houston: APQC (American Productivity & Quality Center).
10. These portfolio tools are explained in: Cooper, R.G. and S.J. Edgett, Ten Ways to Make Better Portfolio and Project Selection Decisions, PDMA Visions, XXX, 3, June 2006; also: Cooper, R.G., S.J. Edgett and E.J. Kleinschmidt, Portfolio Management for New Products, 2nd edition. New York, NY: Perseus Publishing.
11. Cooper, R.G. and M. Mills. Succeeding at New Products the P&G Way: A Key Element is Using the Innovation Diamond. PDMA Visions, XXIX, 4, Oct. 2005.
12. The Productivity Index method is proposed by the Strategic Decisions Group (SDG). For more information, refer to: Matheson, D., Matheson, J.E. and Menke, M.M., Making Excellent R&D Decisions, Research-Technology Management, Nov.-Dec. 1994; and Evans, P., Streamlining Formal Portfolio Management, Scrip Magazine, February.
13. Fiore, C. Accelerated Product Development. New York, NY: Productivity Press, 2005.
14. For more information on value stream mapping, plus examples, see: Cooper & Edgett, ref. 3.
15. Spiral development is described in: Cooper, R.G. and S.J. Edgett. Maximizing Productivity in Product Innovation. Research-Technology Management, March-April 2008.
16. Morgan, J. Applying Lean Principles to Product Development. Report from SAE International Society of Mechanical Engineers.
17. Cooper, R.G. Formula for Success. Marketing Management Magazine (American Marketing Association), March-April 2006; see also Cooper & Edgett.
18. Cohen, L.Y., P.W. Kamienski and R.L. Espino. Gate System Focuses Industrial Basic Research. Research-Technology Management, July-August 1998.
19. See the Cooper & Edgett RTM article in ref. 15.
20. Ledford, R.D. NPD 2.0, Innovation, St. Louis: Emerson Electric, 2006; and NPD 2.0: Raising Emerson's NPD Process to the Next Level, Innovation, St. Louis: Emerson Electric, 2006.
21. Cooper and Mills, ref. 11.
22. Bull, S. Innovating for Success: How EXFO's NPDS Delivers Winning New Products. Proceedings, First International Stage-Gate Conference, St. Petersburg Beach, FL, Feb.
23. Cooper, R.G. Managing Technology Development Projects: Different Than Traditional Development Projects. Research-Technology Management, Nov.-Dec. 2006.
24. Cooper, R.G. and S.J. Edgett. Ideation for Product Innovation: What Are the Best Sources? PDMA Visions, XXXII, 1, March 2008.
25. More on lead-user analysis in: Von Hippel, E. Democratizing Innovation, MIT Press, Cambridge, MA, 2005; and Thomke, S. and E. Von Hippel. Customers As Innovators: A New Way to Create Value, Harvard Business Review, April 2002.
26. Christensen, C.M. The Innovator's Dilemma. New York: Harper Collins.
27. Day, G. and P. Shoemaker. Scanning the Periphery. Harvard Business Review, Nov. 2005.
28. Erler, H. A Brilliant New Product Idea Generation Program: Swarovski's I-Lab Story. Second International Stage-Gate Conference, Clearwater Beach, FL, Feb.
29. Chesbrough, H. Open Innovation: The New Imperative for Creating and Profiting from Technology. Cambridge, MA: Harvard Business School Press.
30. This section is based on material in Cooper and Edgett, ref. 3; see also: Docherty, M. Primer on Open Innovation: Principles and Practice, PDMA Visions, XXX, No. 2, April.
31. A number of software products have been certified for use with Stage-Gate.
#include <getquotarootjob.h>
Detailed Description
Gets the quota root and resource limits for a mailbox.
Definition in file getquotarootjob.h.
Member Function Documentation
Get a map containing all resource limits for a quota root.
- Returns
- a map from resource names to limits
Definition at line 155 of file getquotarootjob.cpp.
Get a map containing all resource usage figures for a quota root.
- Returns
- a map from resource names to usage figures
Definition at line 138 of file getquotarootjob.cpp.
Get the current limit for a resource.
- Returns
- the resource limit in appropriate units, or -1 if the limit is unknown or there is no limit on the resource
Definition at line 126 of file getquotarootjob.cpp.
The mailbox that the quota roots will be fetched for.
Definition at line 103 of file getquotarootjob.cpp.
The quota roots for the mailbox.
Definition at line 109 of file getquotarootjob.cpp.
Set the mailbox to get the quota roots for.
Definition at line 97 of file getquotarootjob.cpp.
Get the current usage for a resource.
Note that if there is no limit for a resource, the server will not provide information about resource usage.
- Returns
- the resource usage in appropriate units, or -1 if the usage is unknown or there is no limit on the resource
Definition at line 115 of file getquotarootjob.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sun Feb 16 2020 05:20:32 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/kdepim/kimap2/html/classKIMAP2_1_1GetQuotaRootJob.html | CC-MAIN-2020-10 | refinedweb | 269 | 57.98 |
This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
My idea is to use the Maven2 repository and services similar to http://, or an own repository with a mapping from Java package paths to API doc URLs.
When I need some functionality, I search in the javadoc APIs, find the packages which contain what I need, and choose them in my project properties in NetBeans. NetBeans automatically downloads them from the Maven repository or prompts me for a URL. Or when I know the package name, I only write, for example, 'org.apache.commons.lang' in NetBeans, and NetBeans imports the necessary library into the project (I only specify the version).
Javadoc will be opened online. Downloading all documentation and sources for every 3rd-party project which I use is most disagreeable work.
I wrote about my idea also here:
I see that some stuff already exists, for example:
It enables creating a library from the Maven repository, but the same problem still exists: sharing an NB project. If support existed for adding a library not to the Library Manager but directly to the project, it would be great. The project configuration would then contain only a reference to the library.
Dupe of issue #44035 for last comment.
*** This issue has been marked as a duplicate of 44035 *** | https://bz.apache.org/netbeans/show_bug.cgi?id=75117 | CC-MAIN-2021-39 | refinedweb | 218 | 67.25 |
From: Jonathan Wakely (cow_at_[hidden])
Date: 2005-05-05 06:24:08
On Tue, May 03, 2005 at 10:43:36PM +0200, Thorsten Ottosen wrote:
> Hi,
>
> When I look at something like
>
>
>
> I assume the error is not in my library. Should/Can we do anything to fix it?
> Unsubscribe & other changes:
A number of tests have this error when running with _GLIBCXX_DEBUG:
That is because there is a fwd decl of a std type, which causes problems
in debug mode since std::vector is really declared in another namespace
and pulled into std with GCC's "strong using" extension.
You can fix the problem by not using the fwd decl if _GLIBCXX_DEBUG (or
_GLIBCPP_DEBUG for GCC 3.3) is defined. Just include the relevant
header instead of the fwd decl (if someone's running with debug mode on
they aren't concerned about performance, since debug mode violates the
performance guarantees of the std lib anyway)
I reported a similar problem, and fix, in the lambda lib here:
I'll repost that, since it's not been picked up by anyone.
jon
-- Those who taste, know - Sufi saying
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/05/85901.php | CC-MAIN-2020-16 | refinedweb | 210 | 69.11 |
#include <hallo.h>

Jérôme Marant wrote on Fri May 03, 2002 um 01:11:57PM:
> > maintianing stuff), I look for new maintainers for the following
> > packages.
>
> Are these all your packages ?

Almost all official ones. I want to keep cloop and maybe pppoeconf, but when someone has more time to work on pppoeconf, (s)he should take it.

> > I know.

Let's rephrase it to "I would like to make NMUs more quickly than others when I see that my successor is MIA". I just don't want that any package comes into the state of cdrtools (with the previous maintainer).

> to find backup maintainers (unless you really don't want to
> maintain them any more)

In my current RL situation, I have to concentrate on my studies, I use only a few things of the stuff that I packaged or maintained before, and if other people can do a better job, they should.

Gruss/Regards,
Eduard.

--
Everything should be made as simple as possible, but not simpler.
		-- Albert Einstein

--
To UNSUBSCRIBE, email to debian-devel-request@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Recently, I was building
django-single-instance-model. The package ensures that at all times there is exactly one instance of a model.
One of the tasks for building this package was ensuring that an instance of the model existed.
I wanted to run this code as early as possible, once the database connection had been made.
How did I do it?
In the
__init__.py of the main app, I hooked into the
connection_created signal. Here's how:
from django.dispatch import receiver
from django.db.backends.signals import connection_created

@receiver(connection_created)
def my_receiver(connection, **kwargs):
    with connection.cursor() as cursor:
        pass  # your startup code here
Hope this helps you in the future! Follow me for more Django / Python tips!
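Why does the signature def my_receiver(connection, **kwargs) work? Django calls every connected receiver with keyword arguments only (including sender and signal), so any argument you don't name explicitly lands in **kwargs. A toy re-implementation of the dispatch (not Django's actual code, just a sketch of the idea) makes this visible:

```python
class Signal:
    """Minimal stand-in for django.dispatch.Signal (illustrative only)."""

    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        # store the receiver; returning it lets connect() work as a decorator
        self._receivers.append(receiver)
        return receiver

    def send(self, sender, **named):
        # every receiver is called with keyword arguments only
        return [(r, r(signal=self, sender=sender, **named))
                for r in self._receivers]


connection_created = Signal()

@connection_created.connect
def my_receiver(connection, **kwargs):
    # 'connection' is picked out by name; sender/signal fall into **kwargs
    return f"got connection {connection!r}"

results = connection_created.send(sender="backend", connection="conn-1")
print(results[0][1])  # got connection 'conn-1'
```

Django's real @receiver decorator and signal machinery do more (weak references, dispatch_uid deduplication), but the keyword-only calling convention is the same.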
Discussion (1)
great tip, thanks! | https://dev.to/connorbode/perform-an-action-every-time-django-starts-ok8 | CC-MAIN-2022-05 | refinedweb | 126 | 62.54 |
Getting started with Hadoop: My First Try
Given the growing popularity of Hadoop, I decided to give it a try myself. As usual, I searched for a tutorial first and found one by Yahoo, which is based on a Hadoop 0.18.0 virtual machine. I knew the current stable version is 1.x, but that was OK because I just wanted to get the big picture, and I didn't want to pass up the convenience of a ready-to-use Hadoop virtual machine.
The tutorial is not that long, so I just tried to walk through it. Because I already had Java and Eclipse set up, I just downloaded the Hadoop virtual machine and ran it on VMware Player. Then I got stuck because the Eclipse plug-in required in the tutorial could not be found – I didn't have the CD mentioned in the tutorial. It took me a while, but I found a newer version of the plug-in.
After installing the plug-in, I could add a new Hadoop location in the Map/Reduce Locations view. The Hadoop location also showed up in the Eclipse Project Explorer view under the DFS Locations root node, but when it was expanded I got the error node "Error: null." Later on I found out that the command line can do most of the work.
Then came the WordCount sample code, which was the fun part for me. Before that, I copied the hadoop-0.18.0 directory under the hadoop-user home directory to the machine where my Eclipse runs. I then created a new project using the MapReduce project wizard (which comes with the Hadoop plug-in) and specified the Hadoop library location there. The Hadoop plug-in simply adds all the required libraries (jar files) to the Java build path so you don't need to worry about them. If you don't have the Hadoop plug-in installed, you can add them manually, the most important of which is hadoop-0.18.0-core.jar.
After the project was created, I typed in the source code from the tutorial. Somehow it didn't compile right away; I had to search around and found similar code in the Cloudera Hadoop tutorial.
With a few tweaks, the application compiled. The following are the three java files:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(WordCountMapper.class);
        conf.setCombinerClass(WordCountReducer.class);
        conf.setReducerClass(WordCountReducer.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path("input"));
        FileOutputFormat.setOutputPath(conf, new Path("output"));
        JobClient.runJob(conf);
    }
}

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        StringTokenizer itr = new StringTokenizer(value.toString().toLowerCase());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, one);
        }
    }
}

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            IntWritable value = (IntWritable) values.next();
            sum += value.get();
        }
        output.collect(key, new IntWritable(sum));
    }
}
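The shape of this job (the mapper emits (word, 1) pairs, the framework groups them by key, the reducer sums) can be mimicked in a few lines of plain Python, just to illustrate the data flow; Hadoop adds input splitting, the distributed shuffle, and fault tolerance on top:

```python
from collections import defaultdict

def map_phase(lines):
    # like WordCountMapper: emit (word, 1) for every token
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    # the framework's group-by-key step between map and reduce
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # like WordCountReducer: sum the values for each key
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(["Hello Hadoop", "hello world"])))
print(counts)  # {'hello': 2, 'hadoop': 1, 'world': 1}
```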
I then jarred it up as wordcount.jar and sent it to the Hadoop virtual machine. Finally, I created a new directory in HDFS and copied in a text file so that the sample could read it and count words.
The following are a few commands I used in the virtual machine:
hadoop-user@hadoop-desk:~ $ ./init-hdfs
hadoop-user@hadoop-desk:~ $ ./start-hadoop
hadoop-user@hadoop-desk:~ $ hadoop fs -mkdir input
hadoop-user@hadoop-desk:~ $ hadoop fs -put ../foo.txt /user/hadoop-user/input
hadoop-user@hadoop-desk:~ $ hadoop fs -ls input/
hadoop-user@hadoop-desk:~/hadoop-0.18.0$ hadoop jar wordcount.jar WordCount
hadoop-user@hadoop-desk:~/hadoop-0.18.0$ hadoop fs -ls output/
hadoop-user@hadoop-desk:~/hadoop-0.18.0$ hadoop fs -get output/part-00000
hadoop-user@hadoop-desk:~/hadoop-0.18.0$ hadoop fs -rmr /user/hadoop-user/output
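For a quick sanity check of what the job should produce, the same word count can be done locally with standard Unix tools before going near the cluster (file name is illustrative):

```shell
printf 'Hello Hadoop\nhello world\n' > /tmp/wc-sample.txt
# lowercase, one word per line, then count and sort by frequency
tr 'A-Z' 'a-z' < /tmp/wc-sample.txt | tr -s ' ' '\n' | sort | uniq -c | sort -rn
```

Comparing this against part-00000 is an easy way to confirm the MapReduce job did what you expected on a small input.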
After trying the WordCount sample and reading through two tutorials, I got a good understanding of MapReduce and Hadoop at a very high level. To get some real work done, I think I need to study more. That is why I ordered the book Hadoop: The Definitive Guide. I will write more after reading through the book in about one month. Stay tuned.
after two months, I could solve my problem. I`m happiest man in the world now !!!!!!
I was using commons-logging-1.1.1 It doesn’t work for hadoop-0.18.0
If you download hadoop-0.20.2 for example, use their lib, it works.
thanks god 😀
I am trying to run tutorial by Yahoo, which is based on Hadoop 0.18.0 virtual machine. I am getting error on eclipse – Call to /192.168.94.9000 fail on local exception: java.io.EOFException – What might be missing in configuration on eclipse side?
Hi…Thanks for the brief explanation of your experience. I face the same problem in eclipse configuration. I am getting the Error code “Error : null”. Could you please tell me how you managed to get the configuration as given in the tutorial. Waiting for your valuable feedback.
Hi,
I am also getting the same error on eclipse – Call to /192.168.94.9000 fail on local exception: java.io.EOFException.
Please guide us to resolve this issue.
Error:Null
problem resolved | http://www.doublecloud.org/2012/06/getting-started-with-hadoop-my-first-try/ | CC-MAIN-2017-39 | refinedweb | 869 | 60.51 |
XSP, Taglibs and Pipelines
Introducing AxKit
by Barrie Slaymaker
April 16, 2002
In the first article in this series, we saw how to install, configure and test AxKit, and we took a look at a simple processing pipeline. In this article, we will see how to write a simple 10-line XSP taglib and use it in a pipeline along with XSLT to build dynamic pages in such a way that gets the systems architects and coders out of the content maintenance business. Along the way, we'll discuss the pipeline processing model that makes AxKit so powerful.
First though, let us catch up on some changes in the AxKit world.
CHANGES
Matt and company have released AxKit v1.5.1, and 1.5.2 looks as though it will be out soon. 1.5.1 provides a few improvements and bug fixes, especially in the XSP engine discussed in this article. The biggest additions are the inclusion of a set of demonstration pages that can be used to test and experiment with AxKit's capabilities, and another module to help with writing taglibs (Jorge Walter's SimpleTaglib, which we'll look at in the next article).
There has also been a release policy change: The main AxKit distributions (AxKit-1.5.1.tar.gz for instance) will no longer contain the minimal set of prerequisites; these will now be bundled in a separate tarball. This policy change enables people to download just AxKit (when upgrading it, for instance) and recognizes the fact that AxKit is an Apache project while the prerequisites aren't. Until the prerequisite tarball gets released, the source tarball that accompanies this article contains them all (though it installs them in a local directory only, for testing purposes). The main AxKit tarball still includes all the core AxKit modules.
XSP and taglibs
We touched on eXtensible Server Pages and taglibs in the last article; this time we'll take a deeper look and try to see how XSP, taglibs and XSLT can be combined in a powerful and useful way.
A quick taglib refresher: A taglib is a collection of XML tags in an XML namespace that act like conditionals or subroutine calls. Essentially, taglibs are a way of encoding logic and dynamic content in to pages without including raw "native" code (Perl, Java, COBOL; any of the popular Web programming languages) in the page; each taglib provides a set of related services and multiple taglibs may be used in the same XSP page.
For our example, we will build a "data driven" processing pipeline (clicking on a blue icon or arrow will take you to the relevant documentation or section of this article):
NOTE: A little icon like this will be shown with the description of each piece of the pipeline with that piece hilighted. Clicking on those icons will bring you back to this diagram.
This pipeline has five stages:
- the XSP document (
weather1.xsp) defines what chunks of raw data (current time and weather readings) are needed for this page in a simple XML format,
- the XSP processor applies taglibs to assemble the raw data for the page,
- the first XSLT processor and stylesheet (
weather.xsl) format the raw data in to usable content,
- the second XSLT processor and stylesheet (
as_html.xsl) lays out and generates the final page ("Result Doc"), and
- the Gzip compressor automatically compresses the output if the client can comprehend compressed content (even when an older browser is bashful and does not announce that it can cope with gzip-encoded content).
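The five stages above behave like a chain of document-to-document functions, with AxKit playing the manager that feeds each stage the previous stage's output. A minimal sketch of the idea (illustrative only: the stage names and strings are invented, and AxKit's real dispatch passes parsed XML trees between processors rather than strings):

```python
def xsp(doc):
    # stand-in for the XSP processor: expand "tags" into raw data
    return doc.replace("<util:time/>", "16:11:55")

def weather_xsl(doc):
    # stand-in for the first XSLT pass: turn raw data into prose
    return doc.replace("16:11:55", "Hi! It's 16:11:55")

def as_html_xsl(doc):
    # stand-in for the second XSLT pass: lay out the final page
    return "<html>" + doc + "</html>"

def run_pipeline(doc, stages):
    # the manager: each stage consumes the previous stage's output
    for stage in stages:
        doc = stage(doc)
    return doc

page = run_pipeline("<util:time/>", [xsp, weather_xsl, as_html_xsl])
print(page)  # <html>Hi! It's 16:11:55</html>
```

Because each stage has a single, narrow job, any stage can be swapped, removed, or unit-tested on its own, which is exactly the property the article exploits below.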
This multipass approach mimics those found in real applications; each stage has a specific purpose and can be designed and tested independently of the others.
In a real application, there might well be more filters: Our second XSLT filter might be tweaked to build document without any "look and feel," and an additional filter could be used to implement the "look and feel" of the presentation after the layout is complete. This would allow look and feel to be altered independently of the layout.
We call this a "data driven" pipeline because the document feeding the pipeline defines what data is needed to serve the page; it does not actually contain any content. Later stages add the content and format it. We'll look at a "document driven" pipeline, which feeds the pipeline with the document to be served, in the next article.
Why pipelines?
The XML pipeline-processing model used by AxKit is a powerful approach that brings technical and social advantages to document-processing systems such as Web applications.
On the social side, the concept of a pipeline or assembly line is simple enough to be taught to and grasped by nonprogrammers. Some of the approaches used by HTML-oriented tools (like many on CPAN) are not exactly designer-friendly: They rely on programmer-friendly concepts such as abstract data structures, miniature programming languages, "catch-all" pages and object-oriented techniques such as method dispatch. To be fair, some designers can and do learn the concepts, and others have Perl implementors who deploy these tools in simple patterns that are readily grasped.
The reason that HTML-oriented tools have a hard time with something as simple as a pipeline model is that HTML is a presentation language and does not lend itself to describing data structures or to incremental processing. The advantage of incremental processing is that the number of stages can be designed to fit the application and organization; with other tools, there's often a one- or two-stage approach where the first stage is almost entirely in the realm of the Perl coders ("prepare the data") and the second is halfway in the realm of the coders and halfway in the designer's realm.
XML can be used both to describe data structures and mark up prose documents; this allows a pipeline to mingle data and prose in flexible ways. Each processing stage is XML-in, XML-out (except for the first and last, which often consume and generate other formats). However, the stages aren't limited to dealing purely with XML: The taglibs we're using show one way that stages can use Perl code (and, by extension, many other languages; see the Inline series of modules), external libraries, and almost any non-XML data as needed. Not only does My::WeatherTaglib integrate with Perl code, it's also requesting data over the Internet from a remote site.
The use of XML as the carrier for data between the stages also allows each stage to be debugged and unit tested. The documents that are forwarded between the stages are often passed as internal data structures instead of XML strings for efficiency's sake, but they can be rendered (like the examples shown in this article) and examined, both for teaching purposes and debugging purposes. The uniform look and feel of XML, whatever its disadvantages, is at least readily understood by any competent Web designer.
Pipelines are also handy mechanisms in that the individual stages are
chosen at request time; different stages can be selected to
deliver different views of the same content. AxKit provides several
powerful mechanisms for configuring pipelines, the
<xml-stylesheet ...> approach used in this article is
merely the simplest; future articles will explore building flexible
pipelines.
Pipelines also make a useful distinction between the manager (AxKit) and the processing technologies. Not only does this allow you to mix and match processing techniques as needed (AxKit ships with nine "Languages", or types of XML processor, and several "Providers", or data sources), it also allows new technologies to be incorporated in to existing sites when a new technology is needed.
Moreover, technologies like XML and XSLT are standardized and are becoming widely accepted and supported. This means that most, if not all, the documents in our example pipeline can be handed off to non-Perl coders without (much) fear of them mangling the code. When they do mangle it, tools such as xsltproc (shipped with libxslt, one of the AxKit XSLT processors) can be used to give the designers a first line of defense before calling in the programmers. Even taglibs, nonstandard though they are, leverage the XML standard to make logic and data available to noncoders in a (relatively) safe manner. For instance, here's an excellent online tutorial provided by ZVON.org.
Mind you, XML and XSLT have their rough spots; the trick is that you don't need to know all the quirky ins and outs of the XML specification or use XSLT for things that are painful to do in it. I mean, really, when was the last time you dealt with a notation declaration? Most uses of XML use a small subset of the XML specification, and other tools such as XSP, XPathScript and various schema languages can be used where XSLT would only make the problem more difficult.
What stages are appropriate depends on the application's requirements and those of the organization(s) involved in building, operating and maintaining it. In the next article, we'll examine a "document driven" pipeline and a taglib better suited for this approach that uses different stages.
All that being said, there will always be a place for the non-XML and non-pipelined solutions: XML and pipelines are not panaceas. I still use other solutions when the applications or organizations I work with would not benefit from XML.
httpd.conf: the AxKit
configuration
Before we start in to the example code, let's glance at the AxKit
configuration. Feel free to skip ahead
to the code if you like; otherwise, here's the configuration we'll
use in
httpd.conf:
# Adjust the directory path for your setup; AxDebugLevel 10 carries over
# from the previous article's configuration.
<Directory "/path/to/htdocs/02">
    AxDebugLevel 10
    AxGzipOutput Off
    AxAddXSPTaglib AxKit::XSP::Util
    AxAddXSPTaglib AxKit::XSP::Param
    AxAddXSPTaglib My::WeatherTaglib
    AxAddStyleMap application/x-xsp Apache::AxKit::Language::XSP
    AxAddStyleMap text/xsl Apache::AxKit::Language::LibXSLT
</Directory>
This is the same configuration from the last
article—most of the directives and the processing model used
by Apache and AxKit for them are described in detail there. The two
directives in bold have been added. The key directives for our
example are
AxAddXSPTaglib and
AxAddStyleMap.
The
AxAddXSPTaglib directives load three tag libraries:
Kip Hampton's Util
and Param
taglibs and our very own WeatherTaglib.
Util will allow our example to get at the system time; Param will allow it
to parse URLs; and WeatherTaglib will allow us to fetch the current weather
conditions for that zip code.
The two
AxAddStyleMap directives map a pair of mime
types to an XSP and an XSLT processor. Our example source document will
refer to these mime types to configure instances of XSP and XSLT
processors in to the processing pipeline.
We're using Apache::AxKit::Language::LibXSLT to perform XSLT transforms, which uses the GNOME project's libxslt library under the hood. Different XSLT engines offer different sets of features. If you prefer, then you can also use Apache::AxKit::Language::Sablot for XSLT work. You can even use them in the same pipeline by assigning them to different mime types.
My::WeatherTaglib
Here's a taglib that uses Geo::Weather module on CPAN to take a zip code and fetch some weather observations from weather.com and convert them to XML:
package My::WeatherTaglib;

# The namespace URI must match the xmlns:weather= declaration in the XSP page.
$NS = "http://slaysys.com/axkit_articles/weather/";

@EXPORT_TAGLIB = ( 'report($zip)' );

use strict;

use Apache::AxKit::Language::XSP::TaglibHelper;
use Geo::Weather;

sub report { Geo::Weather->new->get_weather( @_ ) }

1;
This taglib uses Steve Willer's TaglibHelper (included with AxKit) to automate the drudgery of dealing with XML. Because of this, our example taglib distills a lot of power into a few lines of Perl. Don't be fooled, though, there's a lot going on behind the curtains with this module.
When a tag like
<weather:report zip="15206"/> is
encountered in an XSP page, it will be translated into a call to
report( "15206" ); the result of the call will be converted
to XML and will replace the original tag in the XSP output document.
The
$NS variable sets the namespace URI for the
taglib; this configures XSP to direct all elements within the namespace to
My::WeatherTaglib, as we'll see in a bit.
When used in an XSP page, all XML elements supplied by a taglib will have a namespace prefix. For instance, the prefix
weather: is mapped to My::WeatherTaglib's namespace in the XSP page below. This prefix is not determined by the taglib (we could have chosen another); this section assumes that the prefix weather: is used for the sake of clarity.
The
@EXPORT_TAGLIB specifies what functions will be exported as
elements in this namespace and what parameters they accept (see the
documentation for details). The
report($zip) export specification
exports a tag that is invoked like
<weather:report zip="15206"/>
or
<weather:report> <weather:zip>15206</weather:zip> </weather:report>
The words "report" and "zip" in the
@EXPORT_TAGLIB
definition are used to determine the taglib element and attribute names;
the order of the parameters in the definition determines the order they
are passed in to the function. When invoking a taglib, the XML may
specify the parameters in any order in the XML. The names they are
specified with are not important to or visible from the Perl code by
default (see the *argument
function specification for how to accept an XML tree if need be).
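One way to picture what TaglibHelper does with an export specification is to split a spec like report($zip) into an element name plus an ordered parameter list. Here is a simplified sketch (in Python, for illustration only; the real Perl module also handles optional, typed, and tree-valued parameters):

```python
import re

def parse_export(spec):
    # "report($zip)" -> ("report", ["zip"]): the taglib element name plus
    # the attribute/child-element names in the order they are passed to Perl
    m = re.match(r"(\w+)\(([^)]*)\)", spec)
    name, params = m.group(1), m.group(2)
    return name, [p.strip().lstrip("$") for p in params.split(",") if p.strip()]

print(parse_export("report($zip)"))  # ('report', ['zip'])
```

The parsed order is what lets the XML specify parameters in any order: the helper matches each attribute or child element by name, then passes the values positionally to the Perl subroutine.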
All that's left for us to do is to write the "body" of the taglib by
use()ing
Geo::Weather and writing the
report() subroutine.
There are two major conveniences provided by TaglibHelper. The first
is welding Perl subroutines to XML tags (via the
@EXPORT_TAGLIB definitions). The second is converting the
Perl data structure returned by
report(), a hash reference
like
{
    city  => "Pittsburgh",
    state => "PA",
    cond  => "Sunny",
    temp  => 76,
    pic   => "",
    url   => "",
    ...
}
in to well balanced XML like:
<city>Pittsburgh</city>
<state>PA</state>
<cond>Sunny</cond>
<temp>76</temp>
<pic></pic>
<url></url>
...
TaglibHelper allows plain strings, data structures and strings of well-balanced XML to be returned. By writing a single one-line subroutine that returns a Perl data structure, we've written a taglib that requires no Perl expertise to use (any XHTML jock could use it safely using their favorite XML, HTML or text editor) and that can be used to serve up "data" responses for XML-RPC-like applications or "text" documents for human consumption.
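The data-structure-to-XML conversion can be pictured like this (a Python sketch of the idea, not TaglibHelper's actual code, which also copes with mixed content and Perl-specific structures):

```python
from xml.sax.saxutils import escape

def to_xml(value):
    # dicts become nested elements; lists concatenate; scalars become
    # escaped text nodes
    if isinstance(value, dict):
        return "".join(f"<{k}>{to_xml(v)}</{k}>" for k, v in value.items())
    if isinstance(value, list):
        return "".join(to_xml(v) for v in value)
    return escape(str(value))

report = {"city": "Pittsburgh", "state": "PA", "temp": 76}
print(to_xml(report))
# <city>Pittsburgh</city><state>PA</state><temp>76</temp>
```

The important property is that the result is well-balanced XML, so it can be spliced into the XSP output document in place of the taglib element without breaking the surrounding markup.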
The Data::Dumper module that ships with Perl is a good way to peer inside the data structures floating around in a request. When run in AxKit, a quick
warn Dumper( $foo );
will dump the data structure referred to by $foo to the Apache error log.
The output XML from a taglib replaces the original tag in the result document. In our case, the replacement XML is not too useful as-is: it's just data that looks a bit XMLish. Representing data structures as XML may seem awkward to Perl gurus, but it's quite helpful if you want to get the Perl code safely out of the way and allow others to use XML tools to work with the data.
The "data documents" from our XSP processor will be upgraded in later processing stages to content that is presentable to the client; our XSP page neither knows nor cares how it is to be presented. Exporting the raw data as XML here is intended to show how to get the Perl gurus out of the critical path for content development by allowing site designers and content authors to do it.
Emitting structured data from XSP pages is just one approach. Taglibs can return whatever the "best fit" is for a given application, whether that be raw data, pieces of content or an entire article.
Cocoon, the system AxKit was primarily inspired by, uses a different approach to writing taglibs. AxKit also supports that approach, but it tends to be more awkward, so it will be examined in the next article. We'll also look at Jorge Walter's new SimpleTaglib module then, which is newer and more flexible, but less portable than the TaglibHelper module we're looking at here.
weather1.xsp: Using My::WeatherTaglib
Here's a page (
weather1.xsp) that uses the My::WeatherTaglib and
the "standard" XSP util taglib
we used in the previous article:
<?xml-stylesheet href="NULL" type="application/x-xsp"?>
<?xml-stylesheet href="weather.xsl" type="text/xsl"?>
<?xml-stylesheet href="as_html.xsl" type="text/xsl"?>
<xsp:page
    xmlns:xsp="http://apache.org/xsp/core/v1"
    xmlns:util="http://apache.org/xsp/util/v1"
    xmlns:param="http://axkit.org/NS/xsp/param/v1"
    xmlns:weather="http://slaysys.com/axkit_articles/weather/"
>
<data>
  <title><a name="title"/>My weather report</title>
  <time><util:time format="%H:%M:%S"/></time>
  <weather>
    <weather:report>
      <weather:zip><param:zip/></weather:zip>
    </weather:report>
  </weather>
</data>
</xsp:page>
When
weather1.xsp is requested, AxKit parses the
<?xml-stylesheet ...?> processing instructions and uses
the
AxAddStyleMap
directives to build the processing chain shown above.
The XSP processor is the first processor in the pipeline. As it parses
the page, it sends all elements with
util:,
param: or
weather: prefixes to the Util,
Param, and WeatherTaglib taglibs. This
mapping is defined by the
xmlns:... attributes and by the
namespace URIs that are hardcoded into each taglib's implementation
(see the
$NS variable in My::WeatherTaglib).
In this page, the
<util:time> element results in a
call to Util's
get_date() and the value of the
format= attribute is passed in as a parameter. The string
returned by
get_date() is converted to XML and emitted
instead of the
<util:time> element in the output
page. This shows how to pass simple constant parameters to a taglib.
We're getting slightly trickier with the
<weather:report> element: This construct fetches the
zip parameter from the request URI's query string (or form field) and
passes it to WeatherTaglib's
report() as the
$zip parameter. Thanks to Kip Hampton for the help in
using AxKit::XSP::Param in this manner.
Because we have the
AxDebugLevel set to
10,
you can see these calls in the compiled version of
weather1.xsp; the generated Perl code is written to Apache's
error log—usually
$SERVER_ROOT/logs/error_log.
The
<a name="title"/> in the
<title> element is a contrivance put in this page to
show off a feature later in the processing chain. Be glad it's not the
dreaded
<blink> tag!
The XML document that is outputted from the XSP processor and fed to the first XSLT processor looks like (taglib output in bold):
<?xml version="1.0" encoding="UTF-8"?>
<data>
  <title><a name="title"/>My weather report</title>
  <time>16:11:55</time>
  <weather>
    <state>PA</state>
    <heat>N/A</heat>
    <page>/search/search?where=15206</page>
    <wind>From the Southwest at 10</wind>
    <city>Pittsburgh</city>
    <temp>76</temp>
    <cond>Sunny</cond>
    <uv>2</uv>
    <visb>Unlimited</visb>
    <url></url>
    <dewp>53</dewp>
    <zip>15206</zip>
    <baro>29.75</baro>
    <pic></pic>
    <humi>31</humi>
  </weather>
</data>
This data is largely presentation-neutral—kindly overlook the U.S. centric temperature scale—it can be styled as needed.
To generate this intermediate document, just comment out all but the first
<?xml-stylesheet ... ?> processing instruction and request the page like so:
$ lynx -source localhost:8080/02/weather1.xsp?zip=15206 | xmllint --format -
xmllint is installed with the GNOME libxml2 library used by various parts of AxKit.
When we cover how to build pipelines in more dynamic ways than using these stodgy old xml-stylesheet PIs, those techniques can be used to allow intermediate documents to be shown by varying the request URI.
weather.xsl: Converting Data to Content
Here's how we can convert the data document emitted by the XSP processor into more human-readable text. As described above, we're taking a two-step approach to simulate a "real-world" scenario of turning our data into chunks of content in one (reusable) step and then laying the HTML out in a second step.
weather.xsl is an XSLT stylesheet that uses several
templates to convert the XSP output into something more readable:
<xsl:stylesheet xmlns:

  <xsl:template match="time">
    <time>Hi! It's <xsl:value-of select="."/></time>
  </xsl:template>

  <xsl:template match="weather">
    <weather>The weather in <xsl:value-of select="city"/>,
      <xsl:value-of select="state"/> is <xsl:value-of select="cond"/> and
      <xsl:value-of select="temp"/>F (courtesy of
      <a href="{/data/weather/url}">The Weather Channel</a>).
    </weather>
  </xsl:template>

  <!-- Copy the rest of the doc verbatim -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>
This is applied by the first XSLT processor in the pipeline.
The interesting thing here is that we are using two templates (shown in bold) to process different bits of the source XML. These templates "blurbify" the time and weather data into presentable chunks and, as a side effect, throw away unused data from the weather report.
The third template just passes the rest through (XSLT has some annoying qualities, one of which is that it takes a complex bit of code to simply pass things through "as is"). However, this is boilerplate, right from the XSLT specification in fact, and need not interfere with designers creating the two templates we actually want in this stylesheet.
Another annoying quality is that XSLT constructs look visually similar to the templates themselves. This violates the language design principle that different things should look different, which is followed in Perl and many other languages. This can be ameliorated by using an XSLT-aware editor or syntax highlighting to make the differences between XSLT statements and "payload" XML clear.
The output from the first XSLT processor looks like (template output in bold):
<?xml version="1.0"?> <data> <title><a name="title"/>My weather report</title> <time>Hi! It's 16:50:36</time> <weather>The weather in Pittsburgh, PA is Sunny and 76F (courtesy of <a href="">The Weather Channel</a>) </weather> </data>
Now we have a set of chunks that can be placed on a Web page. This technique can be used to build sidebars, newspaper headlines, abstracts, contact lists, navigation cues, links, menus, etc., in a reusable fashion.
as_html.xsl: Laying Out the Page
The final step in this example is to insert the chunks we've built
into a page of HTML using the
as_html.xsl stylesheet:
<xsl:stylesheet xmlns:

  <xsl:output method="html"/>

  <xsl:template match="/">
    <html>
      <head>
        <title><xsl:value-of select="/data/title"/></title>
      </head>
      <body>
        <h1><xsl:copy-of select="/data/title/node()"/></h1>
        <p><xsl:copy-of select="/data/time/node()"/></p>
        <p><xsl:copy-of select="/data/weather/node()"/></p>
      </body>
    </html>
  </xsl:template>

</xsl:stylesheet>
This is applied by the second XSLT processor in the pipeline to generate the final HTML:
<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type"/>
    <title>My weather report</title>
  </head>
  <body>
    <h1><a name="title"/>My weather report</h1>
    <p>Hi! It's 17:05:08</p>
    <p>The weather in Pittsburgh, PA is Sunny and 76F (courtesy of <a href="">The Weather Channel</a>).
    </p>
  </body>
</html>
Using the
/data/title from the data document in two
places in the result document is a minor example of the benefit of
separating the original data generation from the final presentation. In
the
<title> element, we're using
xsl:value-of, which returns just the textual content; in
the
<h1> element, we're using
xsl:copy-of, which copies the tags and the text. This
allows the title to contain markup that we strip in one place and use in
another.
This is similar to the situation often found in real applications where things like menus, buttons, location tell-tales ("Home >> Articles >> Foo" and the like) and links to related pages often occur in multiple places. Widgets like these make ideal "chunks" that the layout page can place as needed.
This is only one example of a "final rendering" stage; different filters could be used instead to deliver different formats. For instance, we could use XSLT to deliver XML, XHTML, and/or plain text versions, or we could use an AxKit-specific processor, XPathScript, to convert to things like RTF, nroff, and miscellaneous documentation formats that XML would otherwise have a hard time delivering.
AxKit optimizes this two-stage XSLT processing by passing the internal representation used by
libxsltdirectly between the two stages. This means that output from one stage goes directly to the next stage without having to be reparsed.
Relating weather1.xsp to the real world
If you squint a little at the code in My::WeatherTaglib, then you can imagine using a DBI query instead of having Geo::Weather query a remote Web site (Geo::Weather is used instead of DBI in this article to keep the example code and tarball relatively simple).
Writing queries and other business logic into taglibs has several major advantages:
- The XML taglib API puts a designer-friendly face on the queries, allowing the XSP page to be tweaked or maintained by non-Perl-literate folks with their preferred tools, hopefully getting you off the "please tweak the query parms" critical path.
- Since the taglib API is XML, standard XML editors will catch basic syntax errors without needing to call in the taglib maintainer.
- Schema validators and XSLT tools can also be used to allow the designers to check the higher-level syntax before pestering the taglib maintainer.
- The query parameters and output can be touched up with Perl, making the "high level" XML interface simpler and more idiot-proof.
- The output is XML, so other XML processors can be used to enhance the content and style. This allows, for instance, XSLT-literate designers to work on the presentation without needing to learn or even see (and possibly corrupt) any Perl code or a new language (as is required with most HTML templating solutions).
- It's quite difficult to accidentally generate malformed XML using XSP: a well-formed XSP page usually generates well-formed output.
- The queries are decoupled from the source XML, so they can be maintained without touching the XSP pages.
- The taglibs can be unit tested, unlike embedded code.
- Taglibs can be wrappers around existing modules, so the same Perl code can be shared by both the web front end and any other scripts or tools that need them.
- The plug-in nature of taglibs allows using the many public and private XSP taglibs for rapid prototyping. CPAN's chock full of 'em.
- In addition to the "function" tags like the two demonstrated above, you can program "conditional" tags that control whether or not a block of the XSP page is included; this gives you the ability to respond to user preferences or rights, for instance.
The DBI module lets you work with almost any database, ranging from comma-separated value files (with SQL JOIN support, no less) through MySQL, PostgreSQL, Oracle, and others, and it returns Perl data structures just crying out to be returned from a taglib function and turned into XML.
The ESQL taglib allows you to embed SQL directly in XSP pages. This is not recommended practice for production because it's not efficient enough for heavily trafficked sites (the database connection is rebuilt on each request), and because mixing programming code in with the XML leads to some pretty unreadable, hard-to-maintain pages, but it is good for quick one-off pages and prototypes.
Help and thanks
In case of trouble, have a look at some of the helpful resources we listed last time.
Thanks to Kip Hampton, Jeremy Mates, and Martin Oldfield for their thorough reviews, though I'm sure I managed to sneak some bugs by them. AxKit and many of the Perl modules it uses are primarily written by Matt Sergeant with extensive contributions from these good folks and others, so many thanks to all contributors as well.
Source: http://www.perl.com/pub/a/2002/04/16/axkit.html
In an earlier article, we looked at an overview of caching in Django and took a dive into how to cache a Django view along with using different cache backends. This article looks closer at the low-level cache API in Django.
--
Django Caching Articles:
- Caching in Django
- Low-Level Cache API in Django (this article!)
Objectives
By the end of this article, you should be able to:
- Set up Redis as a Django cache backend
- Use Django's low-level cache API to cache a model
- Invalidate the cache using Django database signals
- Simplify cache invalidation with Django Lifecycle
- Interact with the low-level cache API
Django Low-Level Cache
For more on the different caching levels in Django, refer to the Caching in Django article.
If Django's per-site or per-view caches aren't granular enough for your application's needs, then you may want to leverage the low-level cache API to manage caching at the object level.
You may want to use the low-level cache API if you need to cache different:
- Model objects that change at different intervals
- Logged-in users' data separate from each other
- External resources with heavy computing load
- External API calls
So, Django's low-level cache is good when you need more granularity and control over the cache. It can store any object that can be pickled safely. To use the low-level cache, access a named cache via django.core.cache.caches or, if you just want the default cache defined in the settings.py file, use django.core.cache.cache.
Project Setup
Clone down the base project from the django-low-level-cache repo on GitHub:
$ git clone -b base
$ cd django-low-level-cache
Create (and activate) a virtual environment and install the requirements:
$ python3.9 -m venv venv
$ source venv/bin/activate
(venv)$ pip install -r requirements.txt
Apply the Django migrations, load some product data into the database, and then start the server:
(venv)$ python manage.py migrate
(venv)$ python manage.py seed_db
(venv)$ python manage.py runserver
Navigate to the homepage in your browser to check that everything works as expected.
Cache Backend
We'll be using Redis for the cache backend. For this, the django-redis dependency is required. It's already been installed, so you just need to configure the cache backend in the settings.py file.
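A minimal settings.py cache configuration for django-redis might look like the following sketch (the LOCATION assumes a local Redis server on the default port; adjust it for your environment):

```python
# settings.py -- assumes a local Redis server listening on 127.0.0.1:6379.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

# Elsewhere in the project this cache is reachable as caches['default']
# via django.core.cache.caches, or simply as django.core.cache.cache.
```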
Turning to the code, the
HomePageView view in products/views.py simply lists all products in the database:
class HomePageView(View):
    template_name = 'products/home.html'

    def get(self, request):
        product_objects = Product.objects.all()

        context = {
            'products': product_objects
        }

        return render(request, self.template_name, context)
Let's add support for the low-level cache API to the product objects.
First, add the import to the top of products/views.py:
from django.core.cache import cache
Then, add the code for caching the products to the view:
class HomePageView(View):
    template_name = 'products/home.html'

    def get(self, request):
        product_objects = cache.get('product_objects')      # NEW

        if product_objects is None:                         # NEW
            product_objects = Product.objects.all()
            cache.set('product_objects', product_objects)   # NEW

        context = {
            'products': product_objects
        }

        return render(request, self.template_name, context)
Here, we first checked to see if there's a cache object with the name
product_objects in our default cache:
- If so, we just returned it to the template without doing a database query.
- If it's not found in our cache, we queried the database and added the result to the cache with the key
product_objects.
With the server running, navigate to the homepage in your browser and click on "Cache" in the right-hand menu of the Django Debug Toolbar.
There were two cache calls:
- The first call attempted to get the cache object named
product_objects, resulting in a cache miss since the object doesn't exist.
- The second call set the cache object, using the same name, with the result of the queryset of all products.
There was also one SQL query. Overall, the page took about 313 milliseconds to load.
Refresh the page in your browser.
This time, you should see a cache hit, which gets the cache object named
product_objects. Also, there were no SQL queries, and the page took about 234 milliseconds to load.
Try adding a new product, updating an existing product, and deleting a product. You won't see any of the changes on the homepage until you manually invalidate the cache by pressing the "Invalidate cache" button.
Invalidating the Cache
Next, let's look at how to automatically invalidate the cache. In the previous article, we looked at how to invalidate the cache after a period of time (TTL). In this article, we'll look at how to invalidate the cache when something in the model changes -- e.g., when a product is added to the products table or when an existing product is either updated or deleted.
Using Django Signals
For this task we could use database signals:
Django includes a “signal dispatcher” which helps decoupled applications get notified when actions occur elsewhere in the framework. In a nutshell, signals allow certain senders to notify a set of receivers that some action has taken place. They’re especially useful when many pieces of code may be interested in the same events.
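The dispatcher pattern behind signals can be sketched in a few lines of plain Python. This is a toy illustration, not Django's actual Signal class: receivers register interest, and the sender fires the event without knowing who is listening.

```python
class Signal:
    """Toy dispatcher: receivers register, senders notify (not Django's API)."""

    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Notify every registered receiver of the event.
        for receiver in self._receivers:
            receiver(sender, **kwargs)


post_save = Signal()
invalidated = []

# A receiver that "invalidates the cache" whenever a save occurs.
post_save.connect(lambda sender, **kwargs: invalidated.append(sender))

post_save.send('Product')
print(invalidated)  # ['Product']
```

Django's real signals add machinery on top of this basic idea: weak references to receivers, dispatch_uid de-duplication, and per-sender filtering.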
Saves and Deletes
To set up signals for handling cache invalidation, start by updating products/apps.py like so:
from django.apps import AppConfig


class ProductsConfig(AppConfig):
    name = 'products'

    def ready(self):                # NEW
        import products.signals     # NEW
Next, create a file called signals.py in the "products" directory:
from django.core.cache import cache
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

from .models import Product


@receiver(post_delete, sender=Product, dispatch_uid='post_deleted')
def object_post_delete_handler(sender, **kwargs):
    cache.delete('product_objects')


@receiver(post_save, sender=Product, dispatch_uid='posts_updated')
def object_post_save_handler(sender, **kwargs):
    cache.delete('product_objects')
Here, we used the
receiver decorator from
django.dispatch to decorate two functions that get called when a product is added or deleted, respectively. Let's look at the arguments:
- The first argument is the signal event to tie the decorated function to: either a save or a delete.
- We also specified a sender, the Product model, to receive signals from.
- Finally, we passed a string as the dispatch_uid to prevent duplicate signals.
So, when either a save or delete occurs against the
Product model, the
delete method on the cache object is called to remove the contents of the
product_objects cache.
To see this in action, either start or restart the server and navigate to the homepage in your browser. Open the "Cache" tab in the Django Debug Toolbar. You should see one cache miss. Refresh, and you should have no cache misses and one cache hit. Close the Debug Toolbar page. Then, click the "New product" button to add a new product. You should be redirected back to the homepage after you click "Save". This time, you should see one cache miss, indicating that the signal worked. Also, your new product should be seen at the top of the product list.
Updates
What about an update?
The
post_save signal is triggered if you update an item like so:
product = Product.objects.get(id=1)
product.title = 'A new title'
product.save()
However,
post_save won't be triggered if you perform an
update on the model via a
QuerySet:
Product.objects.filter(id=1).update(title='A new title')
Take note of the
ProductUpdateView:
class ProductUpdateView(UpdateView):
    model = Product
    fields = ['title', 'price']
    template_name = 'products/product_update.html'

    # we overrode the post method for testing purposes
    def post(self, request, *args, **kwargs):
        self.object = self.get_object()

        Product.objects.filter(id=self.object.id).update(
            title=request.POST.get('title'),
            price=request.POST.get('price')
        )

        return HttpResponseRedirect(reverse_lazy('home'))
So, since post_save won't fire here, let's override the queryset
update() method to invalidate the cache ourselves. Start by creating a custom
QuerySet and a custom
Manager. At the top of products/models.py, add the following lines:
from django.core.cache import cache                 # NEW
from django.db import models
from django.db.models import QuerySet, Manager      # NEW
from django.utils import timezone                   # NEW
Next, let's add the following code to products/models.py right above the
Product class:

class CustomQuerySet(QuerySet):
    def update(self, **kwargs):
        cache.delete('product_objects')
        super(CustomQuerySet, self).update(updated=timezone.now(), **kwargs)


class CustomManager(Manager):
    def get_queryset(self):
        return CustomQuerySet(self.model, using=self._db)
Here, we created a custom
Manager, which has a single job: To return our custom
QuerySet. In our custom
QuerySet, we overrode the
update() method to first delete the cache key and then perform the
QuerySet update per usual.
For this to be used by our code, you also need to update
Product like so:
class Product(models.Model):
    title = models.CharField(max_length=200, blank=False)
    price = models.CharField(max_length=20, blank=False)
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    objects = CustomManager()   # NEW

    class Meta:
        ordering = ['-created']
Full file:
from django.core.cache import cache
from django.db import models
from django.db.models import QuerySet, Manager
from django.utils import timezone


class CustomQuerySet(QuerySet):
    def update(self, **kwargs):
        cache.delete('product_objects')
        super(CustomQuerySet, self).update(updated=timezone.now(), **kwargs)


class CustomManager(Manager):
    def get_queryset(self):
        return CustomQuerySet(self.model, using=self._db)


class Product(models.Model):
    title = models.CharField(max_length=200, blank=False)
    price = models.CharField(max_length=20, blank=False)
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    objects = CustomManager()

    class Meta:
        ordering = ['-created']
Test this out.
Using Django Lifecycle
Rather than using database signals, you could use a third-party package called Django Lifecycle, which helps make invalidation of cache easier and more readable:
This project provides a @hook decorator as well as a base model and mixin to add lifecycle hooks to your Django models. Django's built-in approach to offering lifecycle hooks is Signals. However, my team often finds that Signals introduce unnecessary indirection and are at odds with Django's "fat models" approach.
To switch to using Django Lifecycle, kill the server, and then update products/app.py like so:
from django.apps import AppConfig


class ProductsConfig(AppConfig):
    name = 'products'
Next, add Django Lifecycle to requirements.txt:
Django==3.1.13
django-debug-toolbar==3.2.1
django-lifecycle==0.9.1   # NEW
django-redis==5.0.0
redis==3.5.3
Install the new requirements:
(venv)$ pip install -r requirements.txt
To use Lifecycle hooks, update products/models.py like so:
from django.core.cache import cache
from django.db import models
from django.db.models import QuerySet, Manager
from django.utils import timezone
from django_lifecycle import LifecycleModel, hook, AFTER_DELETE, AFTER_SAVE   # NEW


class CustomQuerySet(QuerySet):
    def update(self, **kwargs):
        cache.delete('product_objects')
        super(CustomQuerySet, self).update(updated=timezone.now(), **kwargs)


class CustomManager(Manager):
    def get_queryset(self):
        return CustomQuerySet(self.model, using=self._db)


class Product(LifecycleModel):                      # NEW
    title = models.CharField(max_length=200, blank=False)
    price = models.CharField(max_length=20, blank=False)
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    objects = CustomManager()

    class Meta:
        ordering = ['-created']

    @hook(AFTER_SAVE)                               # NEW
    @hook(AFTER_DELETE)                             # NEW
    def invalidate_cache(self):                     # NEW
        cache.delete('product_objects')             # NEW
In the code above, we:
- First imported the necessary objects from Django Lifecycle
- Then inherited from LifecycleModel rather than django.db.models.Model
- Created an invalidate_cache method that deletes the product_objects cache key
- Used the @hook decorators to specify the events that we want to "hook" into
Test this out in your browser by:
- Navigating to the homepage
- Refreshing and verifying in the Debug Toolbar that there's a cache hit
- Adding a product and verifying that there's now a cache miss
As with Django signals, the hooks won't be triggered if we do an update via a
QuerySet, as in the previously mentioned example:
Product.objects.filter(id=1).update(title="A new title")
In this case, we still need to create a custom
Manager and
QuerySet as we showed before.
Test out editing and deleting products as well.
Low-level Cache API Methods
Thus far, we've used the
cache.get,
cache.set, and
cache.delete methods to get, set, and delete (for invalidation) objects in the cache. Let's take a look at some more methods from
django.core.cache.cache.
cache.get_or_set
Gets the specified key if present. If it's not present, it sets the key.
Syntax
cache.get_or_set(key, default, timeout=DEFAULT_TIMEOUT, version=None)
The timeout parameter sets how long (in seconds) the cached value will be valid. Setting it to
None will cache the value forever. Omitting it will use the timeout, if any, set in the
CACHES setting in settings.py.
Many of the cache methods also include a
version parameter. With this parameter you can set or access different versions of the same cache key.
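Under the hood, Django derives the final cache key from the key name and the version, so the same name can hold several independent values. Here's a toy sketch of that behavior, with a plain dict standing in for the cache (this mimics the semantics only; it is not Django's implementation):

```python
class ToyCache:
    """Dict-backed stand-in that mimics Django's version-qualified keys."""

    def __init__(self, default_version=1):
        self._store = {}
        self.default_version = default_version

    def _make_key(self, key, version):
        # Django similarly folds the version into the stored key.
        v = self.default_version if version is None else version
        return f"{v}:{key}"

    def set(self, key, value, version=None):
        self._store[self._make_key(key, version)] = value

    def get(self, key, default=None, version=None):
        return self._store.get(self._make_key(key, version), default)


cache = ToyCache()
cache.set('greeting', 'hello')               # stored under version 1
cache.set('greeting', 'bonjour', version=2)  # a second, independent value

print(cache.get('greeting'))             # hello
print(cache.get('greeting', version=2))  # bonjour
```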
Example
>>> from django.core.cache import cache
>>> cache.get_or_set('my_key', 'my new value')
'my new value'
We could have used this in our view instead of using the if statements:
# current implementation
product_objects = cache.get('product_objects')

if product_objects is None:
    product_objects = Product.objects.all()
    cache.set('product_objects', product_objects)

# with get_or_set
product_objects = cache.get_or_set('product_objects', Product.objects.all())
cache.set_many
Used to set multiple keys at once by passing a dictionary of key-value pairs.
Syntax
cache.set_many(dict, timeout)
Example
>>> cache.set_many({'my_first_key': 1, 'my_second_key': 2, 'my_third_key': 3})
cache.get_many
Used to get multiple cache objects at once. It returns a dictionary with the keys specified as parameters to the method, as long as they exist and haven't expired.
Syntax
cache.get_many(keys, version=None)
Example
>>> cache.get_many(['my_key', 'my_first_key', 'my_second_key', 'my_third_key'])
OrderedDict([('my_key', 'my new value'), ('my_first_key', 1), ('my_second_key', 2), ('my_third_key', 3)])
cache.touch
If you want to update the expiration for a certain key, you can use this method. The new timeout value, in seconds, is passed via the timeout parameter.
Syntax
cache.touch(key, timeout=DEFAULT_TIMEOUT, version=None)
Example
>>> cache.set('sample', 'just a sample', timeout=120)
>>> cache.touch('sample', timeout=180)
cache.incr and cache.decr
These two methods can be used to increment or decrement the value of a key that already exists. If they are used on a nonexistent cache key, they raise a
ValueError.
If the delta parameter is not specified, the value will be increased or decreased by 1.
Syntax
cache.incr(key, delta=1, version=None) cache.decr(key, delta=1, version=None)
Example
>>> cache.set('my_first_key', 1)
>>> cache.incr('my_first_key')
2
>>> cache.incr('my_first_key', 10)
12
cache.close
To close the connection to your cache you use the
close() method.
Syntax
cache.close()
Example
cache.close()
cache.clear
To delete all the keys in the cache at once you can use this method. Just keep in mind that it will remove everything from the cache, not just the keys your application has set.
Syntax
cache.clear()
Example
cache.clear()
Conclusion
In this article, we looked at the low-level cache API in Django. We extended a demo project to use low-level caching and also invalidated the cache using Django's database signals and the Django Lifecycle hooks third-party package.
We also provided an overview of all the available methods in the Django low-level cache API together with examples of how to use them.
You can find the final code in the django-low-level-cache repo.
--
Django Caching Articles:
- Caching in Django
- Low-Level Cache API in Django (this article!) | https://testdriven.io/blog/django-low-level-cache/ | CC-MAIN-2021-43 | refinedweb | 2,436 | 50.73 |
Here is the code:
import java.io.*;
import
xl read
:
Insert excel file data into database
Read Excel File...xl read hi, i have read excel sheet data using poi api and printed on console, now i have to store the same data which is printed on the console
Read Write
Read Write Hi;
How can I read certain line of say 10 text files and write to one text file
Java Read Multiple Files and store the data into another text file
The given code reads all the text files of the directory
C file read example
C file read example
This section demonstrates you to read a line from the file. You can see in
the given example, we prompt the user to enter the name of the file to read
php send mail with attachment
php send mail with attachment Syntax of sending email with attachment in PHP
read xml
read xml hi all, i want to ask about how to read an xml in java ME..
here is the xml file
<data>
<value>
<struct>
<member>
<name>
User_Name
file attach
file attach how pick up the file from database and attach it to the mail in java
Hi,
Here is the code to read image (file) from... the image you can save into file or attach as attachment in your email.
Thanks
objective c read file line by line
objective c read file line by line Please explain how to read files in Objective C line by line
Java read entire file into byte array
Java read entire file into byte array Where is the example of Java read entire file into byte array on your website?
Thanks
Hi,
Its simple you can use insputStream.read(bytes); method of InputStream class.
Read
Java read file contents into byte array
Java read file contents into byte array Hello share the code of java read file contents into byte array with me. It's urgent.
Thanks
Hi,
This is simple process if you use the InputStream class.
Read example
read excel file from Java - Java Beginners
read excel file from Java How we read excel file data with the help of java? Hi friend,
For read more information on Java POI visit to :
Thanks
Read Text file from Javascript - JSP-Servlet
Read Text file from Javascript plz send the code How to Retrieve the data from .txt file thru Javascript? And how to find the perticular words in that file
get from address using javamail api?
get from address using javamail api? i want to get from address of particular mail using the javamail api?
actually i had done it in one .jsp file using the particular mail number.
message[messageno].getFrom()[0
Java read file line by line
Java read file line by line
In this section, you will learn how to read a file... are going to
read a file line by line. For reading text from a file it's better... class is used to read text from a file
line by line using it's readLine method
Connecting to Unix through Java - JavaMail
Connecting to Unix through Java Could you please tell a sample code, where i connect to the unix server and run a script and write the results in a file and mail that file back to me
SOAP with Attachments API for Java
having a text attachment.
Then it retrieves the content of the attachment file and display it on the screen
Solution...
Write the source code
Make a attatchment text file
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/89985 | CC-MAIN-2013-20 | refinedweb | 2,535 | 70.63 |
Machine Learning (basic): the Iris dataset¶
While
vaex.ml does not yet implement predictive models, we provide wrappers to powerful libraries (e.g. Scikit-learn, xgboost) and make them work efficiently with
vaex.
vaex.ml does implement a variety of standard data transformers (e.g. PCA, numerical scalers, categorical encoders) and a very efficient KMeans algorithm that take full advantage of
vaex.
The following is a simple example on use of
vaex.ml. We will be using the well known Iris dataset, and we will use it to build a model which distinguishes between the three Irish species (Iris setosa, Iris virginica and Iris versicolor).
Lets start by importing the common libraries, load and inspect the data.
[1]:
import vaex import vaex.ml import pylab as plt df = vaex.ml.datasets.load_iris() df
[1]:
Splitting the data into train and test steps should be done immediately, before any manipulation is done on the data.
vaex.ml contains a
train_test_split method which creates shallow copies of the main DataFrame, meaning that no extra memory is used when defining train and test sets. Note that the
train_test_split method does an ordered split of the main DataFrame to create the two sets. In some cases, one may need to shuffle the data.
If shuffling is required, we recommend the following:
df.export("shuffled", shuffle=True) df = vaex.open("shuffled.hdf5) df_train, df_test = df.ml.train_test_split(test_size=0.2)
In the present scenario, the dataset is already shuffled, so we can simply do the split right away.
[2]:
# Orderd split in train and test df_train, df_test = df.ml.train_test_split(test_size=0.2)
/Users/jovan/PyLibrary/vaex/packages/vaex-core/vaex/ml/__init__.py:209: UserWarning: Make sure the DataFrame is shuffled warnings.warn('Make sure the DataFrame is shuffled')
As this is a very simple tutorial, we will just use the columns already provided as features for training the model.
[3]:
features = df_train.column_names[:4] features
[3]:
['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
PCA¶
The
vaex.ml module contains several classes for dataset transformations that are commonly used to pre-process data prior to building a model. These include numerical feature scalers, category encoders, and PCA transformations. We have adopted the scikit-learn API, meaning that all transformers have the
.fit and
.transform methods.
Let’s use apply a PCA transformation on the training set. There is no need to scale the data beforehand, since the PCA also normalizes the data.
[4]:
pca = vaex.ml.PCA(features=features, n_components=4) df_train = pca.fit_transform(df_train) df_train
[4]:
The result of pca
.fit_transform method is a shallow copy of the DataFrame which contains the resulting columns of the transformation, in this case the PCA components, as virtual columns. This means that the transformed DataFrame takes no memory at all! So while this example is made with only 120 sample, this would work in the same way even for millions or billions of samples.
Gradient boosting trees¶
Now let’s train a gradient boosting model. While
vaex.ml does not currently include this type of models, we support the popular boosted trees libraries xgboost, lightgbm, and catboost. In this tutorial we will use the
lightgbm classifier.
[9]:
import lightgbm import vaex.ml.sklearn # Features on which to train the model train_features = df_train.get_column_names(regex='PCA_.*') # The target column target = 'class_' # Instantiate the LightGBM Classifier booster = lightgbm.sklearn.LGBMClassifier(num_leaves=5, max_depth=5, n_estimators=100, random_state=42) # Make it a vaex transformer (for the automagic pipeline and lazy predictions) model = vaex.ml.sklearn.SKLearnPredictor(features=train_features, target=target, model=booster, prediction_name='prediction') # Train and predict model.fit(df=df_train) df_train = model.transform(df=df_train) df_train
[9]:
Notice that after training the model, we use the
.transform method to obtain a shallow copy of the DataFrame which contains the prediction of the model, in a form of a virtual column. This makes it easy to evaluate the model, and easily create various diagnostic plots. If required, one can call the
.predict method, which will result in an in-memory
numpy.array housing the predictions.
Automatic pipelines¶
Assuming we are happy with the performance of the model, we can continue and apply our transformations and model to the test set. Unlike other libraries, we do not need to explicitly create a pipeline here in order to propagate the transformations. In fact, with
vaex and
vaex.ml, a pipeline is automatically being created as one is doing the exploration of the data. Each
vaex DataFrame contains a state, which is a (serializable) object containing information of all transformations
applied to the DataFrame (filtering, creation of new virtual columns, transformations).
Recall that the outputs of both the PCA transformation and the boosted model were in fact virtual columns, and thus are stored in the state of
df_train. All we need to do, is to apply this state to another similar DataFrame (e.g. the test set), and all the changes will be propagated.
[6]:
state = df_train.state_get() df_test.state_set(state) df_test
[6]:
Production¶
Now
df_test contains all the transformations we applied on the training set (
df_train), including the model prediction. The transfer of state from one DataFrame to another can be extremely valuable for putting models in production.
Performance¶
Finally, let’s check the model performance.
[7]:
from sklearn.metrics import accuracy_score acc = accuracy_score(y_true=df_test.class_.values, y_pred=df_test.prediction.values) acc *= 100. print(f'Test set accuracy: {acc}%')
Test set accuracy: 100.0%
The model get perfect accuracy of 100%. This is not surprising as this problem is rather easy: doing a PCA transformation on the features nicely separates the 3 flower species. Plotting the first two PCA axes, and colouring the samples according to their class already shows an almost perfect separation.
[8]:
plt.figure(figsize=(8, 4)) df_test.scatter(df_test.PCA_0, df_test.PCA_1, c_expr=df_test.class_, s=50) plt.show()
| http://docs.vaex.io/en/latest/example_ml_iris.html | CC-MAIN-2020-05 | refinedweb | 970 | 50.53 |
Scripts as Behaviour Components
Проверено с версией:: 4
-
Сложность: Базовая
What are Scripts in Unity? Learn about the behaviour component that is a Unity script, and how to Create and Attach them to objects.
Scripts as Behaviour Components
Базовая Scripting
Транскрипты
- 00:00 - 00:02
Scripts should be considered as behaviour
- 00:02 - 00:04
components in Unity.
- 00:04 - 00:06
As with other components in Unity they can
- 00:06 - 00:08
be applied to objects and are seen
- 00:08 - 00:09
in the inspector.
- 00:10 - 00:12
With this particular example, this cube
- 00:12 - 00:15
has a rigidbody component which gives it
- 00:15 - 00:18
a physical mass. And when you press play
- 00:18 - 00:19
the cube falls to the ground
- 00:19 - 00:21
as it uses gravity.
- 00:22 - 00:25
We also have added an examples script.
- 00:25 - 00:28
This behaviour script has code in it
- 00:28 - 00:30
which changes the colour of the cube
- 00:30 - 00:32
by effecting the colour value of the
- 00:32 - 00:35
default material attached to that object.
- 00:36 - 00:38
When we press the R key on the keyboard
- 00:39 - 00:41
the colour gets changed to red.
- 00:41 - 00:43
When we press G the colour gets
- 00:43 - 00:46
changed to green. And when we press B
- 00:46 - 00:47
it gets changed to blue.
- 00:47 - 00:49
By attaching this script to the object
- 00:49 - 00:51
when we refer to game object
- 00:51 - 00:54
we're referring to this particular item.
- 00:54 - 00:56
We then drill down to the value that we want
- 00:56 - 00:59
and effect it. Here we're addressing the
- 00:59 - 01:01
game object this script is attached to,
- 01:01 - 01:03
we're then addressing the renderer,
- 01:04 - 01:06
which is the component seen here,
- 01:06 - 01:08
mesh renderer. We're then addressing the
- 01:08 - 01:10
material attached to that renderer and
- 01:10 - 01:12
finally the colour of that material.
- 01:13 - 01:16
And we're setting it to a shortcut called red
- 01:16 - 01:18
which is part of the colour class.
- 01:18 - 01:19
Let's see this in action.
- 01:20 - 01:23
If I press play, then use the R, G or B
- 01:23 - 01:26
keys on the keyboard I can change the colour.
- 01:26 - 01:28
And you can see that the material
- 01:28 - 01:29
is being effected.
- 01:30 - 01:32
So this material is applied to the renderer,
- 01:33 - 01:35
default diffuse, you can see that listed there,
- 01:36 - 01:38
and we're then effecting the main colour value
- 01:38 - 01:40
and setting it to a certain value in here.
- 01:41 - 01:43
The same as it would if I was actually
- 01:43 - 01:45
doing it by hand in the editor.
- 01:45 - 01:48
Scripts can be created in the project panel
- 01:48 - 01:50
by choosing Create and then choosing
- 01:50 - 01:52
a language of your choice.
- 01:52 - 01:53
For example,
- 01:57 - 01:59
they can then be attached to objects either
- 01:59 - 02:00
by dragging and dropping
- 02:03 - 02:06
or by choosing the Add Component button
- 02:06 - 02:08
at the bottom of the Component menu
- 02:08 - 02:10
and then choosing from the list of scripts
- 02:10 - 02:12
in your current project.
- 02:13 - 02:15
Scripts can also be created using the
- 02:15 - 02:16
Add Component button by choosing
- 02:16 - 02:18
New Script from the bottom and naming
- 02:18 - 02:21
the script and selecting a language type
- 02:21 - 02:22
from the drop-down menu.
- 02:22 - 02:26
This can then be created and added in one step.
- 02:31 - 02:33
The final way to add a script to your
- 02:33 - 02:36
object is to select the object in the hierarchy
- 02:36 - 02:39
and then choose Components - Scripts
- 02:39 - 02:41
and then choose from the list of scripts
- 02:41 - 02:43
in your current project. Of course you can
- 02:43 - 02:45
apply scripts to do all manner of other
- 02:45 - 02:47
behaviours of objects. Try to think of
- 02:47 - 02:49
scripts as components that you create
- 02:49 - 02:51
yourself, allowing you to create
- 02:51 - 02:53
behaviour for different game objects in
- 02:53 - 02:55
your game, this could be characters,
- 02:55 - 02:57
it could be environments or it could be
- 02:57 - 02:59
scripts that manage the functionality
- 02:59 - 02:59
of the game.
- 03:00 - 03:02
This example script that we've looked at
- 03:02 - 03:04
is written in C# but in Unity
- 03:04 - 03:08
you can write in Javascript, C# and Boo.
- 03:08 - 03:11
Boo is a derivative of Python, though
- 03:11 - 03:13
it's not as commonly used as the other two.
- 03:13 - 03:16
So you'll likely see Javascript or C#
- 03:16 - 03:18
examples when you see scripting
- 03:18 - 03:20
from Unity around the web. The videos
- 03:20 - 03:22
that you see in this learning area
- 03:22 - 03:24
will be written in C# but we'll also
- 03:24 - 03:26
provide the Javascript equivalent
- 03:26 - 03:27
beneath the video.
ExampleBehaviourScript
Code snippet
using UnityEngine; using System.Collections; public class ExampleBehaviourScript : MonoBehaviour { void Update() { if (Input.GetKeyDown(KeyCode.R)) { GetComponent<Renderer> ().material.color = Color.red; } if (Input.GetKeyDown(KeyCode.G)) { GetComponent<Renderer>().material.color = Color.green; } if (Input.GetKeyDown(KeyCode.B)) { GetComponent<Renderer>().material.color = Color.blue; } } }
#pragma strict function Update () { if(Input.GetKeyDown(KeyCode.R)) { GetComponent(Renderer).material.color = Color.red; } if(Input.GetKeyDown(KeyCode.G)) { GetComponent(Renderer).material.color = Color.green; } if(Input.GetKeyDown(KeyCode.B)) { GetComponent(Renderer).material.color = Color.blue; } }
Дополнительная документация
- Reference Homepage (Справка по скриптам)
- Scripting in Unity (Руководство) | https://unity3d.com/ru/learn/tutorials/modules/beginner/scripting/scripts-as-behaviour-components?playlist=17117 | CC-MAIN-2019-26 | refinedweb | 1,062 | 66.17 |
Zipper @ HackTheBoxxct
This post is a walkthrough of Zipper, an interesting machine on hackthebox.eu featuring the zabbix network monitoring application. It involves the application of known zabbix exploits, manipulation of database entries and light custom exploitation of a privileged binary.
User & Root Flag
The initial scan (
nmap -Pn -n -sC -sV -p- 10.10.10.108 -oA 10.10.10.108) shows the following results:
22/tcp open ssh OpenSSH 7.6p1 Ubuntu 4 (Ubuntu Linux; protocol 2.0) | ssh-hostkey: | 2048 59:20:a3:a0:98:f2:a7:14:1e:08:e0:9b:81:72:99:0e (RSA) | 256 aa:fe:25:f8:21:24:7c:fc:b5:4b:5f:05:24:69:4c:76 (ECDSA) |_ 256 89:28:37:e2:b6:cc:d5:80:38:1f:b2:6a:3a:c3:a1:84 (ED25519) 80/tcp open http Apache httpd 2.4.29 ((Ubuntu)) |_http-server-header: Apache/2.4.29 (Ubuntu) |_http-title: Apache2 Ubuntu Default Page: It works 10050/tcp open tcpwrapped
A first look on port 80 shows just a default apache2 installation website so we look for more web content with gobuster:
gobuster -u -w ~/tools/SecLists/Discovery/Web-Content/quickhits.txt
Running for a few seconds shows that a directory named zabbix exists:
Trying some default credentials we eventually enter
zapper:zapper which does work, but gives the error ‘GUI Access disabled’. This suggests there might be some way to interact with the service without actually using the gui.
A quick google search reveals that there is an API that can be used to talk to zabbix. An important information to get out of zabbix is which hosts it is monitoring. The following short python program connects to the API and prints the configured host ids:
from pyzabbix import ZabbixAPI zapi = ZabbixAPI("") zapi.login("Zapper", "zapper") print("Connected to Zabbix API Version %s" % zapi.api_version()) for h in zapi.host.get(output="extend"): print(h['hostid'])
Result:
Connected to Zabbix API Version 3.0.21 10105 10106
Looking for public exploits with searchsploit we find 39937.py which needs a host id to exploit the application. Since we now do have these ids we can use them if we change the path, ip, credentials and hostid in the exploit code. Running the modified exploit gives a command shell:
[zabbix_cmd]>>: whoami zabbix [zabbix_cmd]>>: id uid=103(zabbix) gid=104(zabbix) groups=104(zabbix)
Since this shell is missing features and convenience we upgrade to a perl reverse tcp shell:
perl -e 'use Socket;$&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'
Looking around on the host we find “/usr/lib/zabbix/externalscripts/backup_script.sh”:
#!/bin/bash # zapper wanted a way to backup the zabbix scripts so here it is: 7z a /backups/zabbix_scripts_backup-$(date +%F).7z -pZippityDoDah /usr/lib/zabbix/externalscripts/* &>/dev/null
In
/backups we see 2 backup files:
zabbix_scripts_backup-2019-02-22.7z zapper_backup-2019-02-22.7z
We can unpack
zabbix_scripts_backup-2019-02-22.7z with the password from the script but it just contains the backup_script.sh itself so it wont help much at this point.
Another interesting file is
/etc/zabbix/web/zabbix.conf.php:
... DBUser=zabbix DBName=zabbixdb DBPassword=f.YMeMd$pTbpY3-449 ...
With these credentials we can connect to the database and dump the users and their hashes with the following query:
mysql -u zabbix -p'f.YMeMd$pTbpY3-449' -D zabbixdb -e "select name, alias, passwd from users; > out.txt"
Zabbix Admin 65e730e044402ef2e2f386a18ec03c72 guest d41d8cd98f00b204e9800998ecf8427e zapper zapper 16a7af0e14037b567d7782c4ef1bdeda
Since cracking the admin password didn’t give any immediate results we change the password of the admin user to something we know:
mysql -u zabbix -p'f.YMeMd$pTbpY3-449' -D zabbixdb -e "update users set passwd=md5('xct') where alias='Admin';" > out.txt
In addition we want to enable the gui access to see what we can do in the app:
mysql -u zabbix -p'f.YMeMd$pTbpY3-449' -D zabbixdb -e "update usrgrp set gui_access = 1 where name = 'administrators';" > out.txt
Looking at zabbix docs we see that we can start scripts on the configured hosts. As seen before with the host ids we can see here again that it is indeed 2 different hosts:
Zabbix 127.0.0.1: 10050 Zipper 172.17.0.1: 10050
We see in the scripts section that this is basically what the exploit has been doing all along:
On this url we can execute these scripts by clicking on the hostname.
However one detail is still missing. The scripts will be executed on the “server”-side, which means no matter which host we execute it on, it will be executed on the server. By changing it to “agent” and executing it we can actually get a shell on zapper:
$ id uid=107(zabbix) gid=113(zabbix) groups=113(zabbix) $ ifconfig docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 inet6 fe80::42:2aff:fe9c:e446 prefixlen 64 scopeid 0x20<link> ether 02:42:2a:9c:e4:46 txqueuelen 0 (Ethernet) RX packets 67919 bytes 5366647 (5.3 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 68574 bytes 5135469 (5.1 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
One of the first things on a new host is to look for suid binaries with
find / -perm -u=s -type f 2>/dev/null, which returns the suid program
/home/zapper/utils/zabbix-service, which happens to be a 32-bit ELF executable.
Executing the file prints the string
start or stop?: and is waiting for user input. With
strings -n8 /home/zapper/utils/zabbix-service we find the following strings inside the binary, which are the commands it actually executes:
systemctl daemon-reload && systemctl start zabbix-agent systemctl stop zabbix-agent
Since it doesn’t use the absolute path for systemctl we can abuse that to run our own code and get root! Placing a script called systemctl at the location where we are running the tool will execute it in the context of root. We use it to get a root shell, grab user and root flags and are done with the box:
echo '#!/bin/bash' > /tmp/systemctl echo '/bin/bash' >> /tmp/systemctl chmod +x /tmp/systemctl export PATH=/tmp:$PATH /home/zapper/utils/zabbix-service start
uid=0(root) gid=0(root) groups=0(root),113(zabbix) | https://www.vulndev.io/2019/02/22/zipper-hackthebox/ | CC-MAIN-2022-40 | refinedweb | 1,078 | 55.24 |
in reply to Re: LibXML, XPath and Namespacesin thread LibXML, XPath and Namespaces :-)
"What I really wanted to achieve was for the system to assume the default namespace was 'gt' so I didn't have to include it in the prefix in all XPath expressions."
Well, that would break XPath spec compliance. As per the XPath spec, node names with no colon always reference nodes with no namespace at all.
Otherwise, if you could somehow set "gt" to be the default namespace for XPaths, you wouldn't be able to distinguish between the following two attributes:
<gt:foo gt:
[download]
"It's fine when it's just one level deep e.g. EnvelopeVersion, but when you want to pick up a number of nodes 3 or 4 levels deep and keep having to repeat that 'gt:' at every level its a PITA."
I enjoy golf as much as the next man, but is three characters per name really so bad? (You could always bind the namespace to just "g" so it was two characters.) I saved you having to construct XML::LibXML::XPathContext objects, didn't I??
If your XPaths are fairly simple, you could take a look at XML::LibXML::QuerySelector which allows you to select nodes using CSS selectors. I wrote it for use with (X)HTML, but I don't see any reason it shouldn't roughly work with arbitrary XML....... | http://www.perlmonks.org/index.pl?node_id=1024898 | CC-MAIN-2016-44 | refinedweb | 234 | 66.07 |
Queries for metadata records. QUERY is of the form:
<name1>=’<value1>’ AND <name2>=’<value2>’ OR ...
See the examples below for formatting of various types of values. The OID and any specified fields of metadata records that match the query are printed to stdout.
<name> should be specified in the format <namespace>.<attribute>.
Note that names that are keywords need to be enclosed in escaped double quotes ("\"<name>\"=’<value>’"). Refer to the list of keywords in Chapter 4, Sun StorageTek 5800 System Query Language, in Sun StorageTek 5800 System Client API Reference Guide. Also note that some shells such as csh might not accept the escaped quotes because they are embedded in other quotes. | http://docs.oracle.com/cd/E19851-01/819-7558/agkcx/index.html | CC-MAIN-2013-48 | refinedweb | 113 | 74.59 |
# Sidecar for a Code splitting

Code splitting. Code splitting is everywhere. However, why? Just because there is **too much of JavaScript** nowadays, and not all of it is in use at the same point in time.
JS is a very *heavy* thing. Not for your iPhone Xs or brand new i9 laptop, but for millions (probably billions) of *slower* device owners. Or, at least, for your watches.
So — JS is bad, but what would happen if we **just disable it**? The problem would be gone… for some sites, and gone "with the sites" for the React-based ones. But anyway — there are sites which could work without JS… and there is something we should learn from them...
Code splitting
==============
Today we have two ways to go, two ways to make it better, or to not make it worse:
1. Write less code
------------------
That's the best thing you can do. While `React Hooks` let you ship a bit less code, and solutions like `Svelte` let you generate less code than *usual*, that's not so easy to do.
It's not only about the code, but also about *functionality* — to keep the code "compact" you have to keep the scope "compact". There is no way to keep an application bundle small if it's doing so many things (and got shipped in 20 languages).
There are ways to write *short and sound* code, and there are ways to write the opposite implementation — *the bloody enterprise*. And, you know, both are legit.

But the main issue is the code itself. A simple React application could easily bypass the "recommended" 250kb, and you might spend a month optimizing it to make it smaller. "Small" optimizations are well documented and quite useful — just get `bundle-analyzer` with `size-limit` and get back in shape.
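For the record, a `size-limit` budget is just a tiny config; a `.size-limit.js` file could look like this (the path and the number are placeholders, not a recommendation):

```javascript
// .size-limit.js: fail the check when the bundle outgrows the budget
module.exports = [
  {
    path: 'dist/app.js', // your bundle (placeholder)
    limit: '250 KB',     // the "recommended" budget mentioned above
  },
];
```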
There are many libraries which fight for every byte, trying to keep you within your limits — [preact](https://preactjs.com/) and [storeon](https://github.com/storeon/storeon), to name a few.
But our application is a bit beyond 200kb. It's closer to **100Mb**. Removing kilobytes makes no sense. Even removing megabytes makes no sense.
> After some moment it's impossible to keep your application small. It will grow bigger in time.
2. Ship less code
-----------------
Alternatively, `code split`. In other words — **surrender**. Take your 100mb bundle and make twenty 5mb bundles from it. Honestly — that's the only possible way to handle your application once it has grown big — create a pack of smaller apps from it.
But there is one thing you should know right now: whatever option you choose, it's an implementation detail, while we are looking for something more reliable.
The Truth about Code Splitting
==============================
The truth about code splitting is that its nature is **TIME SEPARATION**. You are not just *splitting* your code, you are splitting it in a way where you will **use** as little as possible at a single point in time.
Just don't ship the code you don't need right now. Get rid of it.

Easy to say, hard to do. I have a few heavy, but not adequately split applications, where any page loads like 50% of everything. Sometimes `code splitting` becomes `code separation` — I mean, you may move the code to different chunks, but still use it all. Recall that *"Just don't ship the code you don't need right now"* — I *needed* 50% of the code, and that was the real problem.
> Sometimes just adding `import` here and there is not enough. As long as it is only **space** separation, and not **time** separation — it does not matter at all.
There are 3 common ways to code split:
1. Just dynamic `import`. Barely used alone these days. It's more about issues with tracking a *state*.
2. `Lazy` Component, when you might postpone rendering and loading of a React Component. Probably 90% of "react code splitting" these days.
3. *Lazy* `Library`, which is actually the same as `1.`, but you will be given the library code via React render props. Implemented in [react-imported-component](https://github.com/theKashey/react-imported-component#library-level-code-splitting) and [loadable-components](https://www.smooth-code.com/open-source/loadable-components/docs/library-splitting/). Quite useful, but not well known.
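To make option `2` less abstract, here is a rough sketch (in plain JS, with made-up names) of the mechanics behind a "lazy" wrapper; the real `React.lazy` does more, since it integrates with Suspense:

```javascript
// A rough sketch of what a "lazy" wrapper does under the hood: the loader is
// not called until the first render, and its result is cached afterwards.
function lazy(loader) {
  let cached = null;
  return function render(props) {
    // time separation: the chunk request starts only at this point
    cached = cached || loader();
    return cached.then((mod) => mod.default(props));
  };
}

// a fake "chunk" stands in for `() => import('./Hello.js')`
const LazyHello = lazy(() => Promise.resolve({ default: (name) => `Hello, ${name}!` }));

LazyHello('world').then(console.log); // → Hello, world!
```

The important bit is the first line of `render`: before it runs, no chunk request has been made at all, which is exactly the "time separation" above.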
Component Level Code Splitting
------------------------------
This one is the most popular: per-route code splitting, or per-component code splitting. It's not so easy to do while maintaining good *perceptual results*. It's death from a `Flash of Loading Content`.
The good techniques are:
* load `js chunk` and `data` for a route in parallel.
* use a `skeleton` to display something similar to the page before the page load (like Facebook).
* `prefetch` chunks, you may even use [guess-js](https://github.com/guess-js/guess) for a better prediction.
* use some delays, loading indicators, `animations` and `Suspense`(in the future) to soften transitions.
And, you know, that's all about *perceptual* performance.
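The first bullet, loading the `js chunk` and the `data` in parallel, can be sketched like this (the "chunk" and the "API" below are fakes, just for illustration):

```javascript
// Start the chunk request and the data request at the same time, and render
// only when both have arrived: one waiting period instead of two sequential ones.
function loadRoute(importChunk, fetchData) {
  return Promise.all([importChunk(), fetchData()]).then(
    ([mod, data]) => ({ render: mod.default, data })
  );
}

// a fake "chunk" and a fake API, standing in for import() and fetch()
const fakePageChunk = () => Promise.resolve({ default: (d) => `profile(${d.user})` });
const fakePageData = () => Promise.resolve({ user: 'Anton' });

loadRoute(fakePageChunk, fakePageData).then(({ render, data }) => {
  console.log(render(data)); // → profile(Anton)
});
```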

> Image from [Improved UX with Ghost Elements](https://blog.angularindepth.com/https-medium-com-thomasburleson-animated-ghosts-bfc045a51fba)
That doesn't sound good
=======================
You know, I could call myself an expert in code splitting — but I have my own failures.
Sometimes I could fail to reduce the bundle size. Sometimes I could fail to improve the resulting performance, because `the _more_ code splitting you are introducing - the more you spatially split your page - the more time you need to _reassemble_ your page back`\*. It's called **loading waves**.
\* without SSR or pre-rendering. Proper SSR is a game changer at this moment.

Last week I've got two failures:
* I've lost in [one library comparison](https://github.com/mui-org/material-ui/issues/15450), because my library was better, but MUCH bigger than the other one. I have failed to **"1. Write less code"**.
* I've failed to optimize a small site, made in React by my wife. It was using route-based component splitting, but the `header` and `footer` were kept in the main bundle to make transitions more "acceptable". Just a few things, **tightly coupled** with each other, skyrocketed the bundle size up to 320kb (before gzip). There was nothing important, and nothing I could really remove. **A death by a thousand cuts**. I have failed to **Ship less code**.
> React-Dom was 20%, core-js was 10%, react-router, jsLingui, react-powerplug… 20% of own code… We are already done.

The solution
------------
I started to think about how to solve my problem, and why *common solutions* were not working for my use case.
> What did I do? I listed all the crucial locations, without which the application would not work at all, and tried to understand why I had the rest.
It was a surprise. But my problem was in CSS. In vanilla CSS transition.
Here is the code
* a *control* variable, `componentControl`, which eventually would be set to something `DisplayData` should display.
* once the value is set — `DisplayData` becomes visible by changing its `className`, thus triggering a fancy transition. Simultaneously, `FocusLock` becomes active, making `DisplayData` a **modal**.
```
{componentControl.value && <DisplayData />}
// ^ it does not exist until the value is set. Also dead

<FocusLock disabled={!componentControl.value} />
// ^ that one is just not visible, but NOT dead
```
I would like to code split this piece as a whole, but this is something I could not do, due to two reasons:
1. the information should be visible immediately, once required, without any delay. A business requirement.
2. the information "chrome" should exist beforehand, to properly handle the transition.
This problem could be partially solved using [CSSTransitionGroup](https://github.com/reactjs/react-transition-group) or [recondition](https://github.com/theKashey/recondition). But, you know, fixing *one piece of code* by adding *another piece of code* sounds weird, even if it is actually *enough*. I mean, adding more code could help in removing even more code. But… but...
> There should be a better way!
TL;DR — there are two key points here:
* `DisplayData` has to be **mounted**, and exist in the DOM beforehand.
* `FocusLock` should also exist beforehand, so as not to cause a `DisplayData` remount, but its **brains are not needed** in the beginning.
---
So let's change our mental model
Batman and Robin
================
Let's assume that our code is Batman and Robin. Batman can handle most of the bad guys, but when he can't, his sidekick Robin comes to the rescue.
> Once again: Batman engages the battle first; Robin will arrive later.
This is Batman:
```
+
- {componentControl.value && }
+
+
```
This is his sidekick, Robin:
```
-
+ {componentControl.value && }
-
-
```
Batman and Robin can form a *TEAM*, but actually they are two different people.
And don't forget — we are still talking about **code splitting**. In terms of code splitting, where is the sidekick? Where is Robin?

> in a sidecar. Robin is waiting in a **sidecar chunk**.
Sidecar
=======
* `Batman` here is all visual stuff your customer must see as soon as possible. Ideally instantly.
* `Robin` here is all logic, and fancy interactive features, which may be available a second after, but not in the very beginning.
It would be better to call this **vertical code splitting**, where code branches exist in parallel, as opposed to the common **horizontal code splitting**, where code branches are *cut*.
In [some lands](https://github.com/respond-framework/rudy), this technique was known as `replace reducer` and other ways to lazy load redux logic and side effects.
In [some other lands](https://developers.facebook.com/videos/2019/building-the-new-facebookcom-with-react-graphql-and-relay/), it is known as `"3 Phased" code splitting`.
> It's just another separation of concerns, applicable only to cases, where you can defer loading some part of a component, but not another part.

> image from [Building the New facebook.com with React, GraphQL and Relay](https://developers.facebook.com/videos/2019/building-the-new-facebookcom-with-react-graphql-and-relay/), where `importForInteractions`, or `importAfter` **are the `sidecar`**.
And there is an **interesting** observation — while `Batman` is more valuable for a customer, because he is something the customer might *see*, he is always in shape… while `Robin`, you know, might be a bit *overweight*, and require many more bytes to live.
As a result, Batman alone is much more bearable for a customer — he provides more value at a lower cost. You are my hero, Bat!
What could be moved to a sidecar:
---------------------------------
* the majority of `useEffect`, `componentDidMount` and friends.
* likewise all *Modal* effects, i.e. `focus` and `scroll` locks. You might first display a modal, and **only then** make the Modal *modal*, i.e. "lock" the customer's attention.
* Forms. Move all logic and validations to a sidecar, and block form submission until that logic is loaded. The customer can start filling in the form without knowing that it's only `Batman`.
* Some animations. A whole `react-spring` in my case.
* Some visual stuff. Like [Custom scrollbars](https://github.com/theKashey/React-stroller), which might display fancy scroll-bars a second later.
Also, don't forget — every piece of code offloaded to a sidecar also offloads things like core-js poly- and ponyfills used by the removed code.
Code splitting can be smarter than it is in our apps today. We must realize there are two kinds of *code* to split: 1) visual aspects 2) interactive aspects. The latter can come a few moments later. `Sidecar` makes it seamless to split the two tasks, giving the *perception that everything loaded faster*. And it will.
The oldest way to code split
----------------------------
While it may still not be quite clear what a `sidecar` is and when to use one, I'll give a simple explanation:
> `Sidecar` is **ALL YOUR SCRIPTS**. Sidecar is the way we *codesplit* before all that frontend stuff we got today.
I am talking about Server Side Rendering (**SSR**), or just plain **HTML**, which we were all used to just yesterday. `Sidecar` makes things as easy as they used to be when pages contained HTML and logic lived separately in embeddable external scripts (separation of concerns).
We had HTML, **plus** CSS, **plus** some scripts inlined, **plus** the rest of the scripts extracted to `.js` files.
`HTML`+`CSS`+`inlined-js` were `Batman`, while the external scripts were `Robin`; the site was able to function without Robin and, honestly, partially without Batman (he would continue the fight with both legs (inlined scripts) broken). That was just yesterday, and many "non modern and cool" sites are the same today.
---
If your application supports SSR — try to **disable js** and make it work without it. Then it would be clear what could be moved to a sidecar.
If your application is a client-side only SPA — try to imagine how it would work, if SSR existed.
> For example — [theurge.com](https://theurge.com/), written in React, is fully functional **without any js enabled**.
There is a lot of things you may offload to a sidecar. For example:
* comments. You might ship code to `display` comments, but not to `answer` them, because answering might require more code (including a WYSIWYG editor) which is not needed initially. It's better to delay the *commenting box*, or even just hide code loading behind an animation, than to delay the whole page.
* a video player. Ship the "video" without the "controls". Load them a second later, when the customer might try to interact with the player.
* an image gallery, like `slick`. It's not a big deal to **draw** it, but much harder to animate and manage it. It's clear what could be moved to a sidecar.
> Just think what is essential for your application, and what is not quite...
Implementation details
======================
(DI) Component code splitting
-----------------------------
The simplest form of `sidecar` is easy to implement — just move everything to a sub-component, which you may code split the "old" way. It's almost the separation between Smart and Dumb components, but this time the Smart one is not *containing* the Dumb one — it's the opposite.
```
const SmartComponent = React.lazy( () => import('./SmartComponent'));

class DumbComponent extends React.Component {
  render() {
    return (
      <>
        <SmartComponent />     {/* <-- move smart one inside */}
        {this.props.children}  {/* <-- the "real" stuff is here */}
      </>
    );
  }
}
```
That also requires moving the *initialization* code to the Dumb one, but you are still able to code-split the *heaviest* part of the code.
> Can you see a `parallel` or `vertical` code splitting pattern now?
useSidecar
----------
[Building the New facebook.com with React, GraphQL and Relay](https://developers.facebook.com/videos/2019/building-the-new-facebookcom-with-react-graphql-and-relay/), which I've already mentioned, has a concept of `loadAfter`, or `importForInteractivity`, which is quite like the sidecar concept.
At the same time, I would not recommend creating something like `useSidecar`, because you might intentionally try to use `hooks` inside it, and code splitting in this form would break the *rules of hooks*.
Please prefer a more declarative, component-based way. You can still use `hooks` inside the `SideCar` component.
```
const Controller = React.lazy( () => import('./Controller'));

const DumbComponent = () => {
  const ref = useRef();
  const [state, setState] = useState();
  return (
    <>
      {/* the dumb markup; the lazy Controller (the sidecar) drives it */}
      <div ref={ref}>{state}</div>
      <Controller listenTo={ref} onChange={setState} />
    </>
  );
}
```
Prefetching
-----------
Don't forget — you might use [loading priority hinting](https://medium.com/webpack/link-rel-prefetch-preload-in-webpack-51a52358f84c) to preload or prefetch the `sidecar`, making its delivery more transparent and invisible.
An important detail — prefetching a script loads it via the **network**, but does not execute it (and spend CPU) unless it is actually required.
SSR
---
Unlike *normal* code splitting, no special action is required for SSR. `Sidecar` might not be a part of the SSR process and is not required before the `hydration` step. It can be postponed "by design".
Thus — feel free to use `React.lazy` (ideally something **without** `Suspense`; you don't need any fallback (loading) indicators here), or any other library, preferably one without SSR support, so sidecar chunks are *skipped* during the SSR process.
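A minimal sketch of such an SSR-skipping lazy wrapper, without React or Suspense (the `lazySidecar` name and API are mine, not from any library): it renders nothing on the server and only triggers the import in a browser.

```javascript
// Hypothetical "lazy sidecar" wrapper: returns null during SSR
// (no `window`), kicks off the import on first client render, and
// renders the real component once the chunk has arrived.
function lazySidecar(loader) {
  let Comp = null;
  return function Sidecar(props) {
    if (typeof window === 'undefined') return null; // skip during SSR
    if (!Comp) {
      loader().then(mod => { Comp = mod.default; }); // start loading
      return null; // nothing to show yet, and no fallback needed
    }
    return Comp(props);
  };
}

// Usage sketch (the loader is never called on the server):
const Sidecar = lazySidecar(() => import('./Sidecar'));
```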
The bad parts
=============
But there are a few bad parts of this idea
Batman is not a production name
-------------------------------
While `Batman`/`Robin` might be a good mental model, and `sidecar` is a perfect match for the technology itself — there is no "good" name for the `maincar`. There is no such thing as a `maincar`, and obviously `Batman`, `Lonely Wolf`, `Solitude`, `Driver` and `Solo` shall not be used to name the non-sidecar part.
Facebook has used `display` and `interactivity`, and that might be the best option for all of us.
> If you have a good name for me — leave it in the comments
Tree shaking
------------
It's more about the separation of concerns from the *bundler's* point of view. Let's imagine you have `Batman` and `Robin`. And `stuff.js`.
* `stuff.js`
```
export * from './batman.js'
export * from './robin.js'
```
Then you might try *component based* code splitting to implement a sidecar
* `main.js`
```
import {batman} from './stuff.js'
const Robin = React.lazy( () => import('./sidecar.js'));

export const Component = () => (
  <>
    <Robin />  {/* sidecar */}
    {batman}   {/* main content */}
  </>
);
```
* `sidecar.js`
```
// and sidecar.js... that's another chunk as long as we `import` it
import {robin} from './stuff.js'
.....
```
In short — the code above would work, but will not do "the job".
* if you are using only `batman` from `stuff.js` — tree shaking would keep only it.
* if you are using only `robin` from `stuff.js` — tree shaking would keep only it.
* **but** if you are using both, even in different chunks — both will be bundled into the **first** occurrence of `stuff.js`, i.e. the **main bundle**.
> Tree shaking is not code-splitting friendly. You have to separate concerns by files.
Un-import
---------
Another thing, forgotten by everybody, is the cost of javascript. It was quite common in the jQuery era, the era of `jsonp`, to load a script (carrying a `json` payload), read the payload, and then **remove** the script.
> Nowadays we all `import` scripts, and they stay imported forever, even if no longer needed.
As I said before — there is too much JS, and sooner or later, with *continuous navigation*, you will load all of it. We should find a way to un-import no-longer-needed chunks, clearing all internal caches and freeing memory, to make the web more reliable and not crash the application with out-of-memory exceptions.
Probably the ability to `un-import` (webpack [could do it](https://github.com/theKashey/wipeWebpackCache)) is one more reason we should stick with a *component-based* API, since it gives us the ability to handle `unmount`.
So far — the ESM module standards say nothing about stuff like this — neither about cache control, nor about reversing an import.
Creating a sidecar-enabled Library
----------------------------------
As of today, there is only one way to create a `sidecar`-enabled library:
* split your component into parts
* expose the `main` part and the `connected` part (so as not to break the API) via the `index`
* expose a `sidecar` via a separated entry point.
* in the target code — import the `main` part and the `sidecar`; tree shaking should cut away the `connected` part.
This time tree shaking should work properly, and the only problem is how to name the `main` part.
* `main.js`
```
export const Main = ({sidecar, ...props}) => (
  <>
    {sidecar}
    ....
  </>
);
```
* `connected.js`
```
import Main from './Component';
import Sidecar from './Sidecar';

export const Connected = props => (
  <Main
    sidecar={<Sidecar />}
    {...props}
  />
);
```
* `index.js`
```
export * from './Main';
export * from './Connected';
```
* `sidecar.js`
```
export * from './Sidecar';
```
In short, the change could be represented via a small comparison
```
//your app BEFORE
import {Connected} from 'library';
// -------------------------
//your app AFTER; compare this code to `connected.js`
import {Main} from 'library';
const Sidecar = React.lazy( () => import('library/sidecar'));
// ^ all the difference ^

export const SideConnected = props => (
  <Main
    sidecar={<Sidecar />}
    {...props}
  />
);
// ^ you will load only Main; Sidecar will arrive later.
```
Theoretically, `dynamic import` could be used inside node\_modules, making the *assembly process* more transparent.
> Anyway — it's nothing more than `children`/`slot` pattern, so common in React.
The future
==========
`Facebook` proved that the idea is right. If you haven't seen that video — watch it right now. I've just explained the same idea from a slightly different angle (and I started writing this article a week before the F8 conference).
Right now it requires some changes to your code base. It requires a more explicit separation of concerns to actually separate them, and it lets you code split not horizontally, but vertically, shipping *less* code for a *better* user experience.
`Sidecar` is probably the only way, besides old-school SSR, to handle BIG code bases: the last chance to ship a minimal amount of code when you have a lot of it.
> It could make a BIG application smaller, and a SMALL application even smaller.
10 years ago the median website was "ready" in 300ms, and was *really* ready a few milliseconds after. Today, load times of several seconds, or even more than 10 seconds, are common. What a shame.
Let's take a pause, and think — how we could solve the problem, and make UX great again...

Overall
=======
* Component code splitting is the most powerful tool, giving you the ability to *completely* split something off, but it comes with a cost — you might not display anything except a blank page, or a *skeleton*, for a while. That's a horizontal separation.
* Library code splitting can help when component splitting would not. That's also a horizontal separation.
* Code offloaded to a sidecar completes the picture, and may let you provide a far better user experience, but it also requires some engineering effort. That's a vertical separation.
**Let's have a conversation about this**.
Stop! So what about the problems you tried to solve?
----------------------------------------------------
Well, that was only the first part. **We are in the endgame now**, it would take a few more weeks to write down the second part of this proposal. Meanwhile… get in the sidecar! | https://habr.com/ru/post/450942/ | null | null | 3,626 | 67.15 |
Most versions of The Operating System Beginning with "W" utilise crash protection to prevent most infinite loops from being executed in full: The machine is likely to crash in a few hours' time, so the loop isn't infinite!
iterative
10 PRINT "FOO BAR BAZ ARE MY BEST BUDDIES"
20 GOTO 10
In other words, GOTO does not remember where it is coming from, uses no stack, and is just the right thing for coding infinite loops.
More to the point, notice that infinite loop has acquired a negative connotation, even if in fact there are systems where you really want the loop to be infinite; for example a telephone switch should just loop and loop.
Infinite Loop is also a street in Cupertino. It is the home of Apple Computer's headquarters!"
--The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.
An infinite loop is the result of a recursive function with no ending condition. That is, a function which calls itself over and over again. To better understand, read more on infinite loops.
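In practice, unbounded recursion of this kind does not actually run forever: in engines without tail-call optimization, each call consumes stack space, and the runtime aborts with a stack overflow. A quick JavaScript illustration:

```javascript
// A recursive function with no ending condition. Each call waits on
// the next one, so nothing ever returns; V8 (node) gives up with a
// RangeError ("Maximum call stack size exceeded") instead of looping.
function forever() {
  return forever();
}

let overflowed = false;
try {
  forever();
} catch (e) {
  overflowed = e instanceof RangeError;
}
console.log(overflowed); // → true
```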
OK, I'll state right now that I'm no good at reviewing novels. This
will contain spoilers, and for all I know, it may even miss the
point. Even the few novels I do read are usually science fiction,
not romance, so I won't make any comparisons to other romance novels.
The reason I feel compelled to write about Infinite Loop despite not
being very qualified to do so is that it is clearly a small run
print. It would be a shame for it to never be discovered by a wider
audience.
Infinite Loop is a romantic lesbian novel. The blurb on the back
of the book describes it as "more than an erotic road novel," but
even that statement seems to over-emphasise the erotic parts of the
story.
What made me notice the book in the first place was that it was about
a geek girl. The protagonist, Regan O'Riley, appears to be at least
somewhat based on the book's author, Meghan O'Brien. I consider
this a good thing, as it's about time that a geeky character was
actually in a story written by a geek. Regan doesn't grow out of
being a geek to become a "normal" person like the geeky characters
in too many Hollywood movies. She programs computers, wears Thinkgeek
t-shirts, watches films like The Princess Bride and beats guys at
video games.
The conversation Regan and Mel have about how midi-chlorians ruined
the Force as an analogy for realising your own potential in the Star
Wars prequels is as good as any dialogue written by Kevin Smith.
Just in case you still weren't convinced of the geek authenticity,
the book even gets its title from the protagonist's own use of a
programming error as an analogy for how people too easily get stuck
in the same daily routine for the rest of their lives.
Unlike a lot of romantic stories, the conflicts in this novel don't
arise from the protagonist's problem attaining love. Instead, she
falls in love and keeps it quite easily, and together, Regan and her
lover conquer their fears and their pasts, to build a happier future
together. They're always there to help each other, and together,
they can overcome anything.
There's a scene in this novel that had me rooting for Regan just as
much as the comparable one in the film Hackers made me cringe: some
sexist guys are hogging an arcade game, and she ends up challenging
them. This is from her point of view, though, and when those boys
insinuate women can't play arcade games properly, and the story
flashes back to how kids picked on Regan in school, it really got
me emotionally invested in her playing the game.
The story is also about Regan's new-found lover, Mel. It's terrifying
to watch her confront the father who disowned her, and the scene
where she's reunited with her estranged brother had me on the verge
of tears for several pages.
Sure, there's quite a few novels and films that present lesbians in
a positive light, but this one presents a geek lesbian as someone
who's happy that way. I just hope more fiction will be written by,
and about, geek girls.
Of course, the erotic parts aren't bad either.
Horrible. Awful! Thousands of programming languages, and you guys only come up with infinite loops in two? Not to mention that we don't even get a whole C program! You think we can just dredge up that #include<stdio.h> int main(int argc, char** argv){} stuff from memory? Really, really unfit behavior.
Well, here's the list. The real list. Infinite loops in every language I deem significant. I plan to eventually include examples in erlang, bc, bash, Mumps, xBase, SQL (some dialect), lua, Javascript, assembler (probably MIX), Tcl, OCaml, Pascal/Delphi and whatever else looks good.
If you're already thinking of this as a waste of time, think of it not as a dictionary of totally useless programs but as a series of fetal Operating Systems.
Operating environment will be Ubuntu. I'll provide install and build instructions where necessary. Feel free to send in more, especially in unrepresented languages. The list is categorized by programming language, and each language may have multiple programs with explanatory titles.
Keep in mind that some of these programs will eventually fill all memory and swap. Don't run them if you don't know how to stop them. Anything that loops or recurses infinitely gets in. Solutions that quickly blow up the stack and die will be pointed out with the words "Stack Overflow" in the title. I will not venture guesses on ones that do not die within a few seconds; some of them may use constant space, and some may not store their stack in the Unix stack area.
And remember, you brought this on yourselves. Basic and C. Christwagons.
Installation: Install package 'gnat'
Run: Put program in loop_forever.adb, run "gnatmake loop_forever", run loop_forever
procedure Loop_Forever is
   X : Integer := 0;
begin
   loop
      X := 0;
   end loop;
end Loop_Forever;
procedure Loop_Forever is
begin
   Loop_Forever;
end Loop_Forever;
Installation: Install package 'mzscheme'; go to and follow instructions
Run: See installation instructions for REPL
([_ _] [_ _])
Run with awk -- 'program text'
BEGIN { while(1){} }
Sorry, this is rumor and conjecture for now.
><
<
Installation: install package libncurses-dev and opencobol
Usage: cobc program.cob, run program
IDENTIFICATION DIVISION.
PROGRAM-ID. InfiniteLoop.
PROCEDURE DIVISION.
    PERFORM UNTIL 1 < 0
    END-PERFORM
    STOP RUN.
Installation: Install package 'cmucl'
Use: "cmucl -eval 'program'" or "cmucl" and enter program at REPL
(loop)
(do ()(nil))
(format t "~{~^~}" '(1))
(format t "~@{~}" "~:*~@{~}" 1)
Usage: dc program.dc
dxdx
1dd=xdsxx
Usage: Run emacs, enter program text and evaluate each line with Ctrl-j
(require 'cl)
(loop)
(while 't)
(Thanks to RPGeek who came in the middle of the night and chased off the ghost of Alonzo Church with a stick before giving me these)
Installation: install gfortran package
Run: Run command 'gfortran program.f90' - file extension is important
program InfiniteLoop
100 goto 100
end program InfiniteLoop
program InfiniteLoop
  do
  end do
end program InfiniteLoop
recursive subroutine infinity
  call infinity
end subroutine infinity

program RecursiveLoop
  call infinity
end program RecursiveLoop
(A dozen or so thanks to Resiak who painstakingly crafted these)
Installation: Install package ghc
Run: Run 'ghc program.hs' then run a.out or program, whichever comes out >:|
main = main
main = print (length [0..])
Usage: m4 program.m4
define(`_', `_')dnl
_
define(`chicken', `egg')dnl
define(`egg', `chicken')dnl
chicken
Installation: Already installed
Use: perl program.pl or perl -e 'program text'
{redo}
0while(1)
for(;;){}
&{sub{&{$_}($_)}}(sub{&{$_}($_)})
sub f { f() }
f()
{
    package Recursor;
    sub new { bless {}, shift } # Subclassable!
    sub recurse_forever {
        my $class = ref(shift); # Get my own class (subclassable!)
        $class->new->recurse_forever;
    }
}
Recursor->new->recurse_forever;
a: goto a
Run with python program.py
while 1: pass
def f(): f()
f()
(lambda x: x(x))(lambda x: x(x))
Installation: Install package 'irb'
Use: Run ruby program.rb
loop {}
def f();f;end
f()
class C; def f;C.new.f;end;end
C.new.f
f = nil
callcc { |x| f = x }
f.call()
Installation: Install package 'mzscheme'
Run: Are you actually running all of these? What a rube. I mean, they work, but still.
((lambda (x) (x x)) (lambda (x) (x x)))
(define (f) (f))
(f)
(let loop () (loop))
(let ((f '())) (call-with-current-continuation (lambda (x) (set! f x))) (f))
Run with sed -f program.sed and then give at least one line of input before closing input with Ctrl-D
: b
b b
Install: Install package squeak
Run: Uh, run Squeak with the image, open the Tools tab, click and drag out a "Workspace", type program text into it, middle-click and click "do it"
true whileTrue
Installation: Screw you, man. I'm tired.
Run: Far and fast. You don't want in on this shit.
Well, there you have it, folks. There's the rundown.
You didn't think I was gonna use some pansy language like Java, did you?
Category Listbox
Environment: Visual C++ 6
Description
Categories have the following attributes:
- Are indicated in the list with a grey background.
- Category name must be unique. (They are case sensitive.)
- Can have 0 to N items under them.
- Have open/close buttons to show/hide their items.
- Can be opened/closed by double-clicking them or by pressing the space bar.
Category items have the following attributes:
- Must be assigned to a category.
- Item name does not have to be unique.
- Can have a checkbox displayed next to it. (Microsoft Outlook does not have this feature.)
- Checkboxes can be checked/unchecked by clicking them or by pressing the space bar.
- Items can store DWORD data with them. (CListBox has this feature.)
Other supported features include:
- Sorts categories and their items if the LBS_SORT style has been set.
- Supports selection modes Single, Multiple, Extended, and None.
- SHOULD support unicode. (I haven't verified this.)
Implementation
The category listbox class is derived from the MFC CListBox class. Most of CListBox's functions can still be used; however, some functions have been protected, thereby forcing you to use this class's functions instead. You cannot use the following CListBox functions with this class:
AddString( LPCTSTR pString );
InsertString( int iIndex, LPCTSTR pString );
DeleteString( int iIndex );
GetItemData( int iIndex );
SetItemData( int iIndex, DWORD dwValue );
The category listbox class has been made as simple as possible to make it easy for you to add this control to your project. You only need to add the files "CatListBox.cpp" and "CatListBox.h" to your project. That's it! You do not have to add any images to your resource file because this class draws its buttons and checkboxes itself.
To add this control to your dialog, do the following:
- Add a listbox to your dialog's resource.
- Set up your listbox's resource for "Owner Draw: Fixed" and check "Has Strings".
- Create a CCatListBox member variable in your dialog's code. For example...
#include "CatListBox.h"

class MyDialog : public CDialog
{
public:
    // Dialog Data
    //{{AFX_DATA( MyDialog )
    enum { IDD = IDD_MY_DIALOG };
    CCatListBox m_lstCategories; // Create your variable here.
    //}}AFX_DATA
};

// Subclass the listbox here.
// Make sure to replace IDC_LISTBOX_ID with the one you're using.
void MyDialog::DoDataExchange( CDataExchange* pDX )
{
    CDialog::DoDataExchange( pDX );
    //{{AFX_DATA_MAP( MyDialog )
    DDX_Control( pDX, IDC_LISTBOX_ID, m_lstCategories ); // Subclass it!
    //}}AFX_DATA_MAP
}
Downloads
Download demo project - 20 Kb
Download source - 2 Kb
Killer Implementation. Great design.Posted by Legacy on 06/27/2003 12:00am
Originally posted by: Digital Sunrise
Gave me lots of ideas on what to do with an owner drawn list box, even from WIN32.Reply
Can we add drag and drop function in this nice class?Posted by Legacy on 06/18/2003 12:00am
Originally posted by: Yihong Yang
Hi, buddy:
I just use this class in my application. Really nice job! But I'm considering if we can add drag and drop function in this class that make it more powerful. Any suggestion or new work would be highly appreciated.
David YangReply
A bug (not the checkbox issue)???Posted by Legacy on 05/16/2003 12:00am
Originally posted by: Nigel Johnson
GREATAAAAaaaa !!!Posted by Legacy on 04/22/2003 12:00am
Originally posted by: Zvika F.
i realy like the idea and the implementation !!
Suggestion:
make the structure "internal classes" (you'll have to make some modifications but it's more elegant)
Very nice implementationPosted by Legacy on 04/17/2003 12:00am
Originally posted by: Santanu Lahiri
Joshua's Category List class was just what I needed. Pretty much drop in the class, include the header files, create the proper control in the dialog, and off you go. Hard to believe it was that simple. Documentation is also very good indeed. Was easy to change the behaviour just slightly to get the exact look I needed. Of course, I did not go about it the true C++ way by deriving from his class, but what the heck! Hope he forgives me!Reply
Great!Posted by Legacy on 03/19/2003 12:00am
Originally posted by: Rahman
Thanks man. I was just looking for this kind cool stuffReply
Excellent work!Posted by Legacy on 11/14/2002 12:00am
Originally posted by: Dean Jones
Must comment on the quality job Joshua did on the category listbox! It's seldom that I've found something this well written, this well documented (comments are copious and helpful), and this helpful (saved me QUITE a bit of work on one of my projects). Note that his comments in "Checkbox issue found is not a bug" are important (I made sure that all calls to AddCategoryItem() set an item's state to either zero or one, not the default of two).
Kudos!
DJReply
Checkbox issue found is not a bugPosted by Legacy on 11/13/2002 12:00am
Originally posted by: Joshua Quick
Hello all,
Some people are having trouble displaying checkboxes in my category listbox class. Here are some things you should know.
Checkboxes are referred to as item states in the code. Checkboxes are shown for all items by calling the following function:
myList.ShowCategoryItemStates( true );
Each item's checkbox has 3 states:
0 - Unchecked
1 - Checked
2 - No checkbox (The default!)
MFC and VB's checkboxes have similar tri-state behavior.
The item's checkbox state is set by calling the following function. (Set state to either 0, 1, or 2.)
myList.SetCategoryItemState( category, item, state );
The item's checkbox state can also be set when adding the item to the listbox.
myList.AddCategoryItem( category, item, state );
For example, to show an unchecked checkbox for the 2nd item under category "Foo", do the following...
myList.SetCategoryItemState( "Foo", 1, 0 );
To check it, do this...
myList.SetCategoryItemState( "Foo", 1, 1 );
I suppose the real bug is that the item's default checkbox state should not be 2 (no checkbox). Perhaps I should change it to 0 (unchecked)?
I apologize for the confusion caused by this. I have commented all of my functions in the CPP file if you haven't seen them already. However, due to the responses I've received, this shows that I have not documented my work well enough.
Thank you for your feedback!Reply
- Josh
Neat itemPosted by Legacy on 06/22/2002 12:00am
Originally posted by: Garry Birch
Sure as hell looks good Josh. Don't understand a damn thing written but nice to know that your good enough for publishing.
GarryReply | http://www.codeguru.com/cpp/controls/listbox/article.php/c4743/Category-Listbox.htm | CC-MAIN-2015-14 | refinedweb | 1,069 | 66.13 |
I am currently working in a group project and we are trying to interface an OBD-II adapter and an LCD monitor to an Uno. The current issue I am having is that OBD-II and TVout are both serial libraries, even though TVout does not use the conventional digital 0 and 1 pins. If I comment out the OBD library, the monitor displays text and images perfectly, but as soon as the OBD-II commands are uncommented the screen remains black.
I was wondering if there is any solution to this. Possibly two Arduinos? Or using the SoftwareSerial library? Although I am not sure how to use it with the other two libraries.
Currently the best way to explain this problem is by showing a snippet of the code.
The working code:
#include <TVout.h>
#include <video_gen.h>
#include <DistanceGP2Y0A21YK.h>
#include <Wire.h>
#include <LiquidCrystal.h>
#include <fontALL.h>
//#include <OBD.h>

//COBD obd;
TVout TV;
// (declarations of lcd, Dist, FC, ButtonT and ButtonP are not shown in this snippet)

void setup() {
  TV.begin(0);               // 128x96 resolution
  TV.select_font(font4x6);   // Set font size
  lcd.createChar(0, FC);     // Creates full bar
  Dist.begin(0);             // Start infrared sensor
  lcd.begin(16, 2);          // Start LCD with # of columns and rows
  //obd.begin();             // Start OBD-II adapter communication
  //while (!obd.init());     // Keep trying till successful connection
  pinMode(ButtonT, INPUT);
  pinMode(ButtonP, INPUT);
  lcd.print("hello");
  TV.print("Welcome to MTTS");
}
This code successfully prints the string “Welcome to MTTS” on the monitor.
The non-working code is the same, just with the OBD sections uncommented.
I appreciate any guidance or ideas | https://forum.arduino.cc/t/obd-ii-adapter-with-lcd-monitor-for-arduino/301800 | CC-MAIN-2021-49 | refinedweb | 254 | 62.75 |
URL: Title: #94: Enable {socket,dbus}-activation for responders
fidencio commented: """ On Thu, Dec 1, 2016 at 1:03 PM, Pavel Březina <notificati...@github.com> wrote: > Hi, > > if (ret != EOK) { > #ifdef HAVE_SYSTEMD > if (ret != ENOENT) > #endif > { > DEBUG(SSSDBG_FATAL_FAILURE, "No services configured!\n"); > return EINVAL; > } > } > > can you separate above to: > > #ifdef HAVE_SYSTEMDif (ret != EOK && ret != ENOENT) { > DEBUG(SSSDBG_FATAL_FAILURE, "No services configured!\n"); > return EINVAL; > } > #else > ... > #endif > > Okay, > We should also amend services option man page to describe what happens if > a service is not listed there, that they are automatically activated when > needed > Okay. > . > > Now I have few more comments to the timeouts. > > 1. I believe you can remove "RESPONDER: Shutdown > {dbus,socket}-activated responders in case they're idle" in favor of > "Change the timeout logic". > > The "Change timeout logic" commit is a leftover that must have been squashed to "RESPONDER: Shutdown {dbus,socket}-activated responders in case they're idle". Sorry for messing it up. Squashing it there is okay for you? > > 1. Please use the same logic in sbus code. I think you just want to > pass a pointer to last_request_time into sbus (to remove the need for > function pointers) and let the sbus code update it when appropriate even > for our private communication (in other responders). Because if a > communication is happening between responder and provider, the responder is > still not idle (it may be awaiting reply from data provider so it can be > send to the client). > > Okay, I can pass just the pointer to the last_request_time var into sbus. Please, here I need a bit more pointers in order to be sure I understand your suggestion. Please, correct me if I'm wrong, it's updating the last_request_time even for our private communication, no? Are you talking specifically about IFP provider or the others? 
Because the others have their last_request_time() updated everytime something goes through their sockets and, as far as I understand, it should be enough, right? If not, why not? > > 1. Resetting the timeout when sbus signal is received is not enough. > You want to reset it everytime any communication on the bus is happening. I > actually think that signals are the only communication that we can skip for > two reasons (One - we don't use any signals except NameOwnerChanged, Second > - those are asynchronous events that do not require any reply). I think you > want to reset the time in sbus_message_handler when you got a valid > handler (interface and method combination. The you don't need the iface > validator. > > Hmmm. I see your point and it makes sense. But I have one question ... We do receive signals from org.freedesktop.sssd.infopipe.* and they should be treated, right? So we can't ignore it completely. In case those signals end up in sbus_message_handler() as well, then I agree with you. Is that the case? > > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <>, or mute > the thread > <> > . > Thanks a lot for the review, Pavel. """ See the full comment at
_______________________________________________ sssd-devel mailing list -- sssd-devel@lists.fedorahosted.org To unsubscribe send an email to sssd-devel-le...@lists.fedorahosted.org | https://www.mail-archive.com/sssd-devel@lists.fedorahosted.org/msg29815.html | CC-MAIN-2017-22 | refinedweb | 528 | 58.58 |
Here is a listing of C++ language interview questions on “Void” along with answers, explanations and/or solutions:
1. Which of the following will not return a value?
a) null
b) void
c) empty
d) free
Explanation: A void function does not return a value.
2. ____ have the return type void?
a) all functions
b) constructors
c) destructors
d) none of the mentioned
Explanation: A constructor creates an object and a destructor destroys the object. They are not supposed to return anything, not even void.
3. What does the following statement mean?
void a;
a) variable a is of type void
b) a is an object of type void
c) declares a variable with value a
d) flags an error
Explanation: There are no void objects, so the declaration flags an error.
4. Choose the incorrect option
a) void is used when the function does not return a value.
b) void is also used when the value of a pointer is null.
c) void is used as the base type for pointers to objects of unknown type.
d) void is a special fundamental type.
Explanation: The void fundamental type is used as described in options (a) and (c); a null pointer is a pointer value, not a use of void, so (b) is the incorrect option.
5. What is the output of this program?
#include <iostream>
using namespace std;
int main()
{
void a = 10, b = 10;
int c;
c = a + b;
cout << c;
return 0;
}
a) 20
b) compile time error
c) runtime error
d) none of the mentioned
Explanation: A variable cannot be declared with type void, so the program fails to compile.
Sanfoundry Global Education & Learning Series – C++ Programming Language.
Here’s the list of Best Reference Books in C++ Programming Language.
To practice all features of C++ programming language, here is complete set on 1000+ Multiple Choice Questions and Answers on C++. | http://www.sanfoundry.com/c-plus-plus-language-interview-questions-void/ | CC-MAIN-2017-04 | refinedweb | 287 | 72.56 |
"Jakarta Commons Developers List" <commons-dev@jakarta.apache.org> schrieb am 18.04.05
11:29:38:
>
> At 2005-04-18 11:11, Daniel Florey wrote:
> > > My though was to re-use the basename or id used when "installing"
> > > ResourceBundle or XML providers. For example, after issuing
> > > ResourceBundleMessageProvider.install("errorMessages");
> > > I would like to be able to qualify the newly installed messages with
> > > MessageBundle msg = new MessageBundle("errorMessages",
> > "unexpectedError");
> > > but also keep the existing alternative with
> > > MessageBundle msg = new MessageBundle("unexpectedError"); //
> > > "unexpectedError" from any source
> >
> >Do you want this to be able to install different resources with the same
> >message key or is it mainly because of implementation details (performance)?
>
> Primarily multiple entries with same key, secondarily performance.
>
> > > This may seem like a minor change at a first glance, but to also improve
> > > performance my thought was to to change the MessageManager class from
> > > holding a list of provider instances - which in turn can contain multiple
> > > resources (and thus assumes one instance per provider class) - to
> > holding a
> > > Map from basename/id/namespace/qualifier to provider instance, where each
> > > instance only contains a single resource (i.e. XML-file/ResourceBundle).
> > > Though I planned on backwards compatibilty, by looping over the Map values
> > > - instead of the List entries - in the current MessageManager.getText()
> > method.
> > > (Did I make myself clear?)
> >
> >I try my best to get your point...
> >At the moment there is only one MessageProvider holding many resources.
> >You want to change this to many MessageProviders holding one resource each
> >in order to improve performance?
>
> Yes. Instead of having to loop through the providers - catching exceptions
> from those who do not contain the entry, which is quite costly performance
> wise - and get the first match, I want to be able to point out the source
> which I expect to hold the entry. But then again, todays behaviour should
> be kept as the default behaviour when not using a basename/namespace
> qualification.
>
> >I currently don't have access to the sources as I'm on a project in Jordan
>
> (You can browse them online,
> is quite
> easy to navigate)
>
> >but as soon as I'm back home I'll try to have a closer look at this.
>
> Should I try to create a patch suggestion which you could look at then?
Yes, this would be great!
It would be very (very) appreciated, if you could provide some testcases... I started to write
some a while ago, but never managed to complete them. If we would have a complete testsuite
we could refactor without the fear to break something ;-)
If we get the testsuite done and improve the documentation, we hopefully can move the component
to commons proper soon...
Cheers,
Daniel
>
> >Always keep in mind that it should be very simple to use the component.
>
> Sure. As I said, I intend to be backwards compatible with currenty usage.
> Though possibly the installation of resources may look slightly different.
>
> >Do you have sandbox commit access?
>
> Nope.
>
> Mattias Jiderhamn
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: commons-dev-help@jakarta.apache.org
>
A tutorial walkthrough of Python Pandas Library
For those of you who are getting started with Machine learning, just like me, would have come across Pandas, the data analytics library. In the rush to understand the gimmicks of ML, we often fail to notice the importance of this library. But soon you will hit a roadblock where you would need to play with your data, clean and perform data transformations before feeding it into your ML model.
Why do we need this blog when there are already a lot of documentation and tutorials? Pandas, unlike most python libraries, has a steep learning curve. The reason is that you need to understand your data well in order to apply the functions appropriately. Learning Pandas syntactically is not going to get you anywhere. Another problem with Pandas is that there is that there is more than one way to do things. Also, when I started with Pandas it’s extensive and elaborate documentation was overwhelming. I checked out the cheatsheets and that scared me even more.
In this blog, I am going to take you through Pandas functionalities by cracking specific use cases that you would need to achieve with a given data.
Setup and Installation
Before we move on with the code for understanding the features of Pandas, let’s get Pandas installed in your system. I advise you to create a virtual environment and install Pandas inside the virtualenv.
Create virtualenv
virtualenv -p python3 venv
source venv/bin/activate
Install Pandas
pip install pandas
Jupyter Notebook
If you are learning Pandas, I would advise you to dive in and use a jupyter notebook for the same. The visualization of data in jupyter notebooks makes it easier to understand what is going on at each step.
pip install jupyter
jupyter notebook
Jupyter by default runs in your system-wide installation of python. In order to run it in your virtualenv follow the link and create a user level kernel
Sample Data
I created a simple purchase-order dataset. It comprises sales data of each salesperson of a company, across countries and their branches in different regions of each country. Here is a link to the spreadsheet for you to download.
Load data into Pandas
With Pandas, we can load data from different sources. A few of them are loading from a CSV file, a remote URL, or a database. The loaded data is stored in a Pandas data structure called a DataFrame. DataFrames are usually referred to by the variable name df, so anytime you see df from here on, associate it with a DataFrame.
From CSV File
import pandas
df = pandas.read_csv("path_to_csv")
From Remote URL
You can pass a remote URL to the CSV file in read_csv.
import pandas
df = pandas.read_csv("remote/url/path/pointing/to/csv")
From DB
In order to read from Database, read the data from DB into a python list and use DataFrame() to create one
db = ...  # Create DB connection object
cur = db.cursor()
cur.execute("SELECT * FROM <TABLE>")
df = pd.DataFrame(cur.fetchall())
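The database snippet above is only a skeleton. Here is a runnable sketch using Python's built-in sqlite3 module; the sales table and its columns are invented for illustration, but any DB-API cursor works the same way, and cur.description lets you name the DataFrame columns:

```python
import sqlite3
import pandas as pd

# Build a throwaway in-memory database standing in for a real one
db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE sales (country TEXT, total REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("UK", 250000.0), ("India", 120000.0)])

cur.execute("SELECT * FROM sales")
# Use the cursor description to label the columns
df = pd.DataFrame(cur.fetchall(),
                  columns=[col[0] for col in cur.description])
print(df.shape)
```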
Each of the above snippets reads data from a source and loads it into Pandas’ internal data structure called DataFrame
Understanding Data
Now that we have the DataFrame ready, let's go through it and understand what's inside it.
# 1. Shows you a gist of the data
df.head()

# 2. Some statistical information about your data
df.describe()

# 3. List of column headers
df.columns.values
Pick & Choose your Data
Now that we have loaded our data into a DataFrame and understood its structure, let's pick and choose and perform visualizations on the data. When it comes to selecting your data, you can do it either with indexes or based on certain conditions. In this section, let's go through each of these methods.
Indexes
Indexes are labels used to refer to your data. These labels are usually your column headers, e.g., Country, Region, Quantity, etc.
Selecting Columns
# 1. Create a list of columns to be selected
columns_to_be_selected = ["Total", "Quantity", "Country"]

# 2. Use it as an index to the DataFrame
df[columns_to_be_selected]

# 3. Using the loc method (row selector first, then columns)
df.loc[:, columns_to_be_selected]
Selecting Rows
Unlike the columns, our current DataFrame does not have a label which we can use to refer to the row data. But like arrays, a DataFrame provides numerical indexing (0, 1, 2, ...) by default.
# 1. Using numerical indexes - iloc
df.iloc[0:3, :]

# 2. Using labels as index - loc
row_index_to_select = [0, 1, 4, 5]
df.loc[row_index_to_select]
Filtering Rows
Now, in a real-time scenario, you would most probably not want to select rows based on an index. An actual real-life requirement would be to filter out the rows that satisfy a certain condition. With respect to our dataset, we can filter by any of the following conditions
# 1. Total sales > 200000
df[df["Total"] > 200000]

# 2. Total sales > 200000 and in UK
df[(df["Total"] > 200000) & (df["Country"] == "UK")]
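If you don't have the spreadsheet handy, the same filters can be tried on a tiny inline frame (the values below are invented):

```python
import pandas as pd

df = pd.DataFrame({
    "Country": ["UK", "UK", "India"],
    "Total":   [250000, 150000, 300000],
})

# Rows with Total > 200000
big = df[df["Total"] > 200000]

# Rows with Total > 200000 and in the UK; note the parentheses
# around each condition and the & operator (not `and`)
big_uk = df[(df["Total"] > 200000) & (df["Country"] == "UK")]

print(len(big), len(big_uk))
```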
Playing With Dates
Most of the time when dealing with date fields we don't use them as they are. Pandas makes it really easy to project the date, month, or year from them and perform operations on top of that.
In our sample dataset, the Date of Purchase column is of type string, hence the first step is to convert it to the DateTime type.
>>> type(df['Date of Purchase'].iloc[0])
str
Converting Column to DateTime Object
>>> df['Date of Purchase'] = pd.to_datetime(df['Date of Purchase'])
>>> type(df['Date of Purchase'].iloc[0])
pandas._libs.tslibs.timestamps.Timestamp
Extracting Date, Month & Year
df['Date of Purchase'].dt.date   # 2018-09-11
df['Date of Purchase'].dt.day    # 11
df['Date of Purchase'].dt.month  # 9
df['Date of Purchase'].dt.year   # 2018
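Here is a self-contained sketch of the conversion and extraction (with invented dates) which you can run without the spreadsheet:

```python
import pandas as pd

df = pd.DataFrame({"Date of Purchase": ["2018-09-11", "2019-01-05"]})

# Strings become Timestamps after the conversion
df["Date of Purchase"] = pd.to_datetime(df["Date of Purchase"])

years = df["Date of Purchase"].dt.year
months = df["Date of Purchase"].dt.month
print(list(years), list(months))
```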
Grouping
Statistical operations
You can perform statistical operations such as min, max, mean etc., over one or more columns of a Dataframe.
df["Total"].sum()
df[["Total", "Quantity"]].mean()
df[["Total", "Quantity"]].min()
df[["Total", "Quantity"]].max()
df[["Total", "Quantity"]].median()
df[["Total", "Quantity"]].mode()
Now, in a real-world application, the raw use of these statistical functions is rare; more often you want to group data based on specific parameters and derive a gist of the data.
Let’s look at an example where we look at the country-wise, country & Region-wise sales.
# 1. Country-wise sales and quantity
df.groupby("Country").sum()

# 2. Quantity of sales over each country & region
df.groupby(["Country", "Region"])["Quantity"].sum()

# 3. More than one aggregation
df.groupby(["Country", "Region"]).agg(
    {'Total': ['sum', 'max'], 'Quantity': 'mean'})
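To try the grouping without the spreadsheet, here is a runnable sketch on a small invented frame; selecting the numeric columns before summing keeps the result tidy:

```python
import pandas as pd

df = pd.DataFrame({
    "Country":  ["UK", "UK", "India"],
    "Region":   ["North", "South", "West"],
    "Quantity": [10, 20, 5],
    "Total":    [100.0, 200.0, 50.0],
})

# Per-country sums of the numeric columns
per_country = df.groupby("Country")[["Quantity", "Total"]].sum()

# Mixed aggregations in one pass
summary = df.groupby("Country").agg({"Total": "sum", "Quantity": "mean"})

print(per_country.loc["UK", "Quantity"])
```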
Pivot Table
Pivot Table is an advanced version of groupby, where you can stack dimensions over both rows and columns. As the data grows, the groupby output above grows in length and becomes hard to derive insights from; a pivot table is a better-defined way to look at the same data.
import numpy as np

df.pivot_table(index=["Country"], columns=["Region"],
               values=["Quantity"], aggfunc=[np.sum])
Another advantage of the pivot table is that you can add as many dimensions and functions as you want. It also calculates a grand total value for you.
import numpy as np

df.pivot_table(index=["Country"],
               columns=["Region", "Requester"],
               values=["Quantity"],
               aggfunc=[np.sum],
               margins=True,
               margins_name="Grand Total")
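And a runnable miniature of the pivot, again with invented values; I use aggfunc="sum", which is equivalent to the np.sum used above:

```python
import pandas as pd

df = pd.DataFrame({
    "Country":  ["UK", "UK", "India", "India"],
    "Region":   ["North", "South", "North", "North"],
    "Quantity": [10, 20, 5, 5],
})

pivot = df.pivot_table(index=["Country"], columns=["Region"],
                       values=["Quantity"], aggfunc="sum",
                       margins=True, margins_name="Grand Total")

# The margins row/column aggregate over every country and region
print(pivot.loc["Grand Total", ("Quantity", "Grand Total")])
```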
Okay, that was a lot of information in 5 minutes. Take some time to try out the above exercises. In the next blog, I will walk you through some deeper concepts and magical visualizations that you can create with Pandas.
Every time you start learning Pandas, there is a good chance that you may get lost in the Pandas jargon: indexes, functions, NumPy, etc. But don't let that get to you. What you really have to understand is that Pandas is a tool to visualize and get a deeper understanding of your data.
With that mindset take a sample dataset from your spreadsheet and try deriving some insights out of it. Share what you learn. Here is the link to my jupyter notebook for you to get started.
Did the blog nudge a bit to give Pandas another chance?
Hold the “claps” icon and give a shout out to me on twitter. Follow to stay tuned on future blogs
The use of the
Either monad helped us simplify error processing in the last tutorial. I promised to show you how another monad, the state monad, can eliminate explicit symbol-table threading. But before I do that, let's have a short refresher on currying, since it's relevant to the construction of the state monad (there is actually some beautiful math behind the relationship of currying and the state monad).
There are two ways of encoding a two-argument function and, in Haskell, they are equivalent. One is to implement a function that takes two values:
fPair :: a -> b -> c fPair x y = ...
The other is to implement a function that takes one argument and returns another function of one argument (parentheses added for emphasis):
fCurry :: a -> (b -> c) fCurry x = \y -> ...
This might seem like a trivial transformation, but I'll show you how it can help us in coding the evaluator.
Curried Evaluator
Let me remind you what the signature of the function
evaluate was -- to make things simpler, let's consider the version from before the introduction of the
Either monad:
evaluate :: Tree -> SymTab -> (Double, SymTab)
I'm going to parenthesize it the way that highlights the currying interpretation:
evaluate :: Tree -> (SymTab -> (Double, SymTab))
Let's read this signature carefully:
evaluate is a function that takes a
Tree and returns a function, which takes a
SymTab and returns a pair
(Double, SymTab). What if we take this reading to heart and rewrite
evaluate so that it actually returns a function (a lambda).
Let's start with the
UnaryNode evaluator, which used to look like this:
evaluate (UnaryNode op tree) symTab =
    let (x, symTab') = evaluate tree symTab
    in case op of
       Plus  -> ( x, symTab')
       Minus -> (-x, symTab')
and let's try something like this:
evaluate :: Tree -> (SymTab -> (Double, SymTab))
evaluate (UnaryNode op tree) = \symTab ->
    let (x, symTab') = evaluate tree symTab -- ??
    in case op of
       Plus  -> ( x, symTab')
       Minus -> (-x, symTab')
You see what the problem is? In the new scheme, the inner call to
evaluate will no longer return a pair
(x, symTab') but a function
(SymTab -> (Double, SymTab)). Let me call this function
act for action. How can we extract
x and
symTab' from that action? By running it! We do have an argument
symTab to pass to it -- it's the argument of the lambda:
evaluate :: Tree -> (SymTab -> (Double, SymTab))
evaluate (UnaryNode op tree) = \symTab ->
    let act = evaluate tree
        (x, symTab') = act symTab
    in case op of
       Plus  -> ( x, symTab')
       Minus -> (-x, symTab')
What have just happened? We called the new
evaluate only to immediately execute the resulting action? Then why even bother with the intermediate step?
First of all, it's a neat idea that evaluation can be separated into two phases: one for creating a network of functions like
evaluate calling each other but not actually evaluating the result; and another phase for excecuting this network, starting with a particular state -- the symbol table in this case. Obviously, if you provide a different starting symbol table, you will obtain a different final result. But the network of functions depends only on the original parse tree.
The second reason is that this form brings us closer to our goal of abstracting away the tedium of symbol-table passing. Symbol table passing is what "actions" are supposed to do;
evaluate should only construct the tracks for the symbol-table train.
Interestingly, this separation between creating an action and running it turned out to be quite useful in C++, as I showed in my old post Monads in C++. There, the actions were constructed at compile time using an EDSL, and then executed at runtime.
Going back to our program, we'll try follow the same procedure we used to derive the
Either monad. The most important part of a monad is the bind function. Remember, bind is the glue that binds the output of one function to the input of another function -- the one we call a continuation. The signature of bind is determined by the definition of the
Monad class. It has the form:
bind :: Blob a -> (a -> Blob b) -> Blob b
where
Blob stands for the type constructor we are trying to monadize. In our case, this type constructor is of the form
(SymTab -> (a, SymTab)), with the type parameter
a nested inside the return type of an action. I'll call this function type the new
Evaluator:
type Evaluator a = SymTab -> (a, SymTab)
We'll standardize it later using a
newtype definition, which is required to declare a Monad instance, but for now let's just work with a type synonym.
So here's what monadic bind should look like for our type (yes, it's exactly the same as for our
Either monad, except that
Evaluator now hides a function):
bindS :: Evaluator a -> (a -> Evaluator b) -> Evaluator b
The client of bind is supposed to pass an evaluator as the first argument and a continuation as the second. The continuation is a function that returns an evaluator. Let's look for this pattern in our implementation of
evaluate of the
UnaryNode:
evaluate :: Tree -> (SymTab -> (Double, SymTab))
evaluate (UnaryNode op tree) =
    (\symTab ->
        let act = evaluate tree
            (x, symTab') = act symTab
        in case op of
           Plus  -> ( x, symTab')
           Minus -> (-x, symTab'))
We are looking for a piece of code that can be interpreted as "the rest of code." On first attempt we might think of the following lambda as our continuation:
\x' -> case op of
       Plus  -> ( x', symTab')
       Minus -> (-x', symTab')
but it's the wrong type. Our continuation is supposed to be returning an
Evaluator, not a pair
(Double, SymTab). How can we turn this value into an evaluator? That's what monadic
return is supposed to do. Its signature, again, is determined by the
Monad class (I'm calling it
returnS for now to avoid name conflicts):
returnS :: a -> Evaluator a
The implementation is a no-brainer, really. We turn
x into a function that returns this
x with a side of
symTab:
returnS x = \symTab -> (x, symTab)
So here's the candidate that fulfills all our requirements for a continuation:
\x' -> case op of
       Plus  -> returnS x'
       Minus -> returnS (-x')
This is indeed a fine monadic function (returning a value of the soon to be monadic
Evaluator), and it fits the type signature of the continuation required by bind; except that we don't see it in the original code. We can't carve it out of the current implementation of
evaluate. If we could only find a way to insert this
returnS and then immediately cancel it. But how can one undo
returnS? Well, how about executing its result? Check this out:
(returnS x) symTab'
  = (\symTab -> (x, symTab)) symTab'
  = (x, symTab')
When you execute a lambda, you simply replace it with its body and replace the formal parameter with the actual argument. Here, I replaced
symTab (formal parameter, or bound variable) with
symTab' (the argument). In general, the argument may be a whole expression. You just stick at every place the formal parameter appears in the body. (You have to be careful though not to introduce name conflicts.)
So here's the final rewrite:
data Operator = Plus | Minus
data Tree = UnaryNode Operator Tree
type SymTab = ()

type Evaluator a = SymTab -> (a, SymTab)

returnS :: a -> Evaluator a
returnS x = \symTab -> (x, symTab)

evaluate :: Tree -> (SymTab -> (Double, SymTab))
evaluate (UnaryNode op tree) = \symTab ->
    let act = evaluate tree
        (x, symTab') = act symTab
        k = \x' -> case op of
                   Plus  -> returnS x'
                   Minus -> returnS (-x')
        act' = k x
    in act' symTab'

main = putStrLn "It type checks!"
If it type checks, it must be correct, right? To convince yourself that this indeed works, first apply
k to
x -- this will just replace
x' with
x. Then apply the resulting action to
symTab' to cancel out the
returnSs.
Let's continue with our program to define a new monad. To this end, we need to identify the pattern we've been looking for. We want to pick the implementation of
bindS from
evaluate.
We can clearly see the two arguments to bind: one is
act, the result of
evaluate tree, and the other is the continuation
k. The rest must be bind. Here it is, together with
returnS and the new version of
evaluate:
data Operator = Plus | Minus
data Tree = UnaryNode Operator Tree
type SymTab = ()

type Evaluator a = SymTab -> (a, SymTab)

returnS :: a -> Evaluator a
returnS x = \symTab -> (x, symTab)

bindS :: Evaluator a -> (a -> Evaluator b) -> Evaluator b
bindS act k = \symTab ->
    let (x, symTab') = act symTab
        act' = k x
    in act' symTab'

evaluate :: Tree -> (SymTab -> (Double, SymTab))
evaluate (UnaryNode op tree) =
    bindS (evaluate tree)
          (\x -> case op of
                 Plus  -> returnS x
                 Minus -> returnS (-x))

main = putStrLn "It type checks!"
Symbol Table Monad
Let's formalize what we've done so far using an actual instance of the
Monad typeclass. First, we need to encapsulate our evaluator type in a
newtype declaration. This muddles things a little, but is necessary if we want to use it in an
instance declaration. Here's a type that contains nothing but a function:
newtype Evaluator a = Ev (SymTab -> (a, SymTab))
And here are our return and bind functions in their cleaned up form:
instance Monad Evaluator where
    return x = Ev (\symTab -> (x, symTab))
    (Ev act) >>= k = Ev $ \symTab ->
        let (x, symTab') = act symTab
            (Ev act') = k x
        in act' symTab'
Now that the paperwork is done, we can start using the
do notation. Here's our monadic
UnaryNode evaluator:
evaluate (UnaryNode op tree) = do
    x <- evaluate tree
    case op of
        Plus  -> return x
        Minus -> return (-x)
SumNode is even more spectacular:
evaluate (SumNode op left right) = do
    lft <- evaluate left
    rgt <- evaluate right
    case op of
        Plus  -> return (lft + rgt)
        Minus -> return (lft - rgt)
Compare it with the original:
evaluate (SumNode op left right) symTab =
    let (lft, symTab')  = evaluate left symTab
        (rgt, symTab'') = evaluate right symTab'
    in case op of
       Plus  -> (lft + rgt, symTab'')
       Minus -> (lft - rgt, symTab'')
All references to the symbol table are magically gone. The code is not only cleaner, but also less error prone. In the original code there were way too many opportunities to use the wrong symbol table for the wrong call. That's all taken care of now.
There are only three places where you'll see explicit use of the symbol table:
lookUp,
addSymbol, and the main loop -- as it should be! I recommend studying the complete code for the calculculator listed at the end of this tutorial, with special attention to those functions.
Now you have seen with your own eyes that all this can be done with pure functions. We managed to manipulate state -- the symbol table -- in a purely functional way.
There is a popular misconception that you must use impure code to deal with mutable state, and that Haskell monads are impure. There are ways to introduce impurities in Haskell -- there's a bunch of functions whose names start with unsafe and there is
trace for debugging, the
ST monad (not to be confused with the
State monad), all of which (carefully) let you inject impurity into your code. Sometimes it's done for debugging, sometimes for performance. In general, though, you can and should stick to the purely functional style.
State Monad
What we have just done is to create our own version of a generic state monad. It was, hopefully, a good learning experience, but one that shouldn't be repeated when writing production code. So let's familiarize ourselves with the
Control.Monad.State version of the state monad (strictly speaking the state monad is defined using a monad transformer, so the actual code in the library may look a bit different from what I present). State monad is defined by a new type
State, which is parameterized by two type variables. The first one is used to represent the state (in our case that would be
SymTab), and the second is the generic type parameter of every monad type constructor.
newtype State s a = State (s -> (a, s))
State has one data constructor also called
State. It takes a function as an argument. The interesting thing is that this constructor is not exported from the library so you can't pattern match on it. If you want to create a new monadic
State, use the function
state:
state :: (s -> (a, s)) -> State s a
Instead of extracting an action from
State, which you can't do, and acting with it on some state, you call the function
runState which does it for you:
runState :: State s a -> s -> (a, s)
The
Monad instance declaration for
State looks something like this:
instance Monad (State s) where
    return x = state (\st -> (x, st))
    act >>= k = state $ \st ->
        let (x, st') = runState act st
        in runState (k x) st'
Notice that
State s is not a type but a type constructor: it needs one more type variable to become a type. As I mentioned before,
Monad class can only be instantiated with type constructors.
I've shown you how to extract the bind operator from state-threading code, but there is a more general derivation that's based on types. In Haskell you often see functions whose implementation is determined by their signatures. Sometimes it's determined uniquely, more often we pick the simplest non-trivial implementation that type checks. Here's the signature of
>>= that is required by the
Monad class as applied to
State s:
(>>=) :: State s a -> (a -> State s b) -> State s b
act >>= k = ...
The first observation is that, in order to run the continuation
k, we need a value of type
a. The only source of such value could be the first argument,
act, and the only way to retrieve it is to call
act with some state. But we don't have any state yet.
But notice that bind itself doesn't produce a value -- it produces a
State object. How do you construct a
State? By calling
state with a function. Bind must therefore define a lambda of the signature
s -> (b, s) and pass it to
state. The outer shell of
>>= must therefore have the form:
act >>= k = state $ \st -> ...
Now, inside the lambda, we do have access to a state variable
st and we can use it to run
act.
act >>= k = state $ \st ->
    let (x, st') = runState act st
    ...
Now we have
x of type
a so we can call the continuation
k:
act >>= k = state $ \st ->
    let (x, st') = runState act st
        act' = k x
    ...
The continuation returns an action
act' of the type
State s b. Our lambda, though, must return a pair of the type
(b, s). The only way to generate a value of the type
b is to run
act' with some state. Here we have a choice: we can run it with the original
st or with the new
st'. The first choice would mean that the state never changes and, in fact, doesn't even have to be returned by the action. There is a perfectly good monad built on this assumption: it's called the reader monad (see the exercise at the end of this tutorial). But since here we are modeling mutable state, we choose to use
st' to run
act':
act >>= k = state $ \st ->
    let (x, st') = runState act st
        act' = k x
    in runState act' st'
There is one more ingredient necessary to make the state monad usable: the ability to access and modify the state. There are two generic functions
get and
put that provide this functionality:
get :: State s s
get = state $ \st -> (st, st)

put :: s -> State s ()
put newState = state $ \_ -> ((), newState)
get returns the value of the state.
put returns unit, but has a "side effect" of injecting new state into subsequent computations.
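As a quick sanity check, here's a tiny runnable example that uses get and put to thread a counter through a computation. It's a sketch assuming the library version of the State monad from Control.Monad.State (in the mtl package), which exports the state, get, put, and runState used in this tutorial:

```haskell
import Control.Monad.State

-- Increment the counter held in the state and return its previous value.
tick :: State Int Int
tick = do
    n <- get        -- read the current state
    put (n + 1)     -- inject the incremented value as the new state
    return n        -- the action's result is the old value

main :: IO ()
main = print (runState tick 41)  -- prints (41,42)
```

Note how the result of runState pairs the action's value with the final state, exactly as in the (a, s) tuples we've been threading by hand.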
What Is a Monad?
We've seen two seemingly disparate examples of a monad and I will show you some more in the next tutorial. What do they have in common, other than implementing the functions
return and
>>=? Why are these two functions so important? It's time for some deeper insights.
The basic premise of all programming is that you can decompose a complex computation into a set of simpler ones.
The difference between various programming paradigms is in the mechanics of composing smaller computations into larger ones. For instance, in C you use a combination of functions and side effects. You call a function (procedure) whose effects can be:
- Returning a value
- Modifying an argument (when it's a reference)
- Modifying global variables
- Interacting with the external world
Some of the effects are visible in the signature of the function (types of input and output parameters), others are implicit. The compiler may help with flagging explicit mismatches, but it can't check the implicit ones. So when you're composing functions in C, you have to keep in mind all the hidden interactions between them.
In OO programming, side effects are somewhat tamed with data hiding. Although arguments are mostly passed by reference, including the implicit
this pointer, the things you can do to them are restricted by their interfaces. Still, hidden dependencies make composition fragile. This is especially painful when dealing with concurrency.
The starting point of functional programming is that functions have no side effects whatsoever, so function composition is a straightforward matter of passing the results of one function as the input to the next. This is a great starting point from the standpoint of composability. However, many of the traditional notions of computation don't have straightforward translations into pure functions. This has been a huge problem in the adoption of functional languages.
Two things happened (not necessarily in that order) to change this situation:
- We learned how to translate most computations into functions.
- We use monads to abstract the tedium of this translation.
I tried to emphasize the same two steps when introducing monads.
First, I showed you how to translate partial computations into total functions. These functions encapsulate their results into
Maybe or
Either types. I also showed you how to deal with mutable state by passing it as an additional parameter into and out of a function.
This is a very general pattern: Take a computation that takes input and produces output but does it in a non-functional way, and modify input and output types in such a way that the computation becomes functional.
Next, I showed you a way to do the same thing by modifying only the return types of the computation. If the translation of a computation required adding input parameters to the original signature (passing the symbol table in, for instance), I used currying and turned the output type into a function type. (In Exercise 1 you'll use the same trick to implement the reader monad.)
So this is lesson one: A computation can be turned into a function that encapsulates the originally non-functional bits into its modified (decorated, fancified, or whatever you call it) output data type.
The great thing about it is that now all this additional information is visible to the compiler and the type checker. There is even a name for this system in type theory: the effect system. A function signature may expose the effects of a function in addition to just turning input into output types. These effects are propagated when composing functions (as was the effect of modifying the symbol table, or being undefined for some values of arguments) and can be checked at compile time.
A potential shortcoming of this approach is that the composition of such fancified functions requires writing some boilerplate code. In the case of
Maybe- or
Either-returning functions, we have to pattern match the results and fork the execution. In the case of action-returning functions, we need to run these actions, provide the additional parameters they need, and pass results to the next action.
To our great relief, this highly repetitive (and error-prone) glue code can be abstracted into just two functions:
>>= and
return (optionally
>> and
fail). Now we can test our implementation of the glue code in one place, or still better, use the library code. And to make our lives even better, we have this wonderful syntactic sugar in the shape of the
do notation.
But now, when you look at a do block, it looks very much like imperative code with hidden side effects. The
Either monadic code looks like using functions that can throw exceptions.
State monad code looks as if the state were a global mutable variable. You access it using
get with no arguments, and you modify it by calling
put that returns no value. So what have we gained in comparison to C?
We might not see the hidden effects, but the compiler does. It desugars every do block and type-checks it. The state might look like a global variable but it's not. Monadic bind makes sure that the state is threaded from function to function. It's never shared. If you make your Haskell code concurrent, there will be no data races.
Exercises
Ex 1. Define the reader monad. It's supposed to model computations that have access to some read-only environment. In imperative code such environment is often implemented as a global object. In functional languages we need to pass it as an argument to every function that might potentially need access to it. The reader monad hides this process.
newtype Reader e a = Reader (e -> a)

reader :: (e -> a) -> Reader e a
reader f = undefined

runReader :: Reader e a -> e -> a
runReader = undefined

ask :: Reader e e
ask = reader (\e -> e)

instance Monad (Reader e) where
    ...

type Env = Reader String
-- curried version of
-- type Env a = Reader String a

test :: Env Int
test = do
    s <- ask
    return $ read s + 1

main = print $ runReader test "13"
newtype Reader e a = Reader (e -> a)

reader :: (e -> a) -> Reader e a
reader f = Reader f

runReader :: Reader e a -> e -> a
runReader (Reader act) env = act env

ask :: Reader e e
ask = reader (\e -> e)

instance Monad (Reader e) where
    return x = reader (\_ -> x)
    rd >>= k = reader $ \env ->
        let x = runReader rd env
            act' = k x
        in runReader act' env

type Env = Reader String
-- curried version of
-- type Env a = Reader String a

test :: Env Int
test = do
    s <- ask
    return $ read s + 1

main = print $ runReader test "13"
Ex 2. Use the State monad from Control.Monad.State to re-implement the evaluator.

...

addSymbol :: String -> Double -> Evaluator ()
addSymbol str val = do
    ...

evaluate :: Tree -> Evaluator Double
evaluate (SumNode op left right) = ...
evaluate (ProdNode op left right) = ...
evaluate (UnaryNode op tree) = ...
evaluate (NumNode x) = ...
evaluate (VarNode str) = ...
evaluate (AssignNode str tree) = ...

expr = AssignNode "x" (ProdNode Times (VarNode "pi")
                      (ProdNode Times (NumNode 4) (NumNode 6)))

main = print $ runState (evaluate expr) (M.fromList [("pi", pi)])

The key pieces of the solution use get and put to access the symbol table:

lookUp :: String -> Evaluator Double
lookUp str = do
    symTab <- get
    case M.lookup str symTab of
      Just v  -> return v
      Nothing -> error $ "Undefined variable " ++ str

addSymbol :: String -> Double -> Evaluator ()
addSymbol str val = do
    symTab <- get
    put $ M.insert str val symTab
    return ()

expr = AssignNode "x" (ProdNode Times (VarNode "pi")
                      (ProdNode Times (NumNode 4) (NumNode 6)))

main = print $ runState (evaluate expr) (M.fromList [("pi", pi)])
Calculator with the Symbol Table Monad
Here's the complete runnable version of the calculator that uses our Symbol Table Monad.
import Data.Char
import qualified Data.Map as M

data Operator = Plus | Minus | Times | Div
    deriving (Show, Eq)

data Token = TokOp Operator
           | TokAssign
           | TokLParen
           | TokRParen
           | TokIdent String
           | TokNum Double
           | TokEnd
    deriving (Show, Eq)

operator :: Char -> Operator
operator c | c == '+' = Plus
           | c == '-' = Minus
           | c == '*' = Times
           | c == '/' = Div

tokenize :: String -> [Token]
tokenize [] = []
tokenize (c : cs)
    | elem c "+-*/" = TokOp (operator c) : tokenize cs
    | c == '='  = TokAssign : tokenize cs
    | c == '('  = TokLParen : tokenize cs
    | c == ')'  = TokRParen : tokenize cs
    | isDigit c = number c cs
    | isAlpha c = identifier c cs
    | isSpace c = tokenize cs
    | otherwise = error $ "Cannot tokenize " ++ [c]

identifier :: Char -> String -> [Token]
identifier c cs = let (name, cs') = span isAlphaNum cs in
                  TokIdent (c:name) : tokenize cs'

number :: Char -> String -> [Token]
number c cs = let (digs, cs') = span isDigit cs in
              TokNum (read (c : digs)) : tokenize cs'

---- parser ----

data Tree = SumNode Operator Tree Tree
          | ProdNode Operator Tree Tree
          | AssignNode String Tree
          | UnaryNode Operator Tree
          | NumNode Double
          | VarNode String
    deriving Show

lookAhead :: [Token] -> Token
lookAhead [] = TokEnd
lookAhead (t:ts) = t

accept :: [Token] -> [Token]
accept [] = error "Nothing to accept"
accept (t:ts) = ts

expression :: [Token] -> (Tree, [Token])
expression toks =
    let (termTree, toks') = term toks
    in case lookAhead toks' of
         (TokOp op) | elem op [Plus, Minus] ->
            let (exTree, toks'') = expression (accept toks')
            in (SumNode op termTree exTree, toks'')
         TokAssign ->
            case termTree of
              VarNode str ->
                 let (exTree, toks'') = expression (accept toks')
                 in (AssignNode str exTree, toks'')
              _ -> error "Only variables can be assigned to"
         _ -> (termTree, toks')

term :: [Token] -> (Tree, [Token])
term toks =
    let (facTree, toks') = factor toks
    in case lookAhead toks' of
         (TokOp op) | elem op [Times, Div] ->
            let (termTree, toks'') = term (accept toks')
            in (ProdNode op facTree termTree, toks'')
         _ -> (facTree, toks')

factor :: [Token] -> (Tree, [Token])
factor toks =
    case lookAhead toks of
      (TokNum x) -> (NumNode x, accept toks)
      (TokIdent str) -> (VarNode str, accept toks)
      (TokOp op) | elem op [Plus, Minus] ->
         let (facTree, toks') = factor (accept toks)
         in (UnaryNode op facTree, toks')
      TokLParen ->
         let (expTree, toks') = expression (accept toks)
         in if lookAhead toks' /= TokRParen
            then error "Missing right parenthesis"
            else (expTree, accept toks')
      _ -> error $ "Parse error on token: " ++ show toks

parse :: [Token] -> Tree
parse toks =
    let (tree, toks') = expression toks
    in if null toks'
       then tree
       else error $ "Leftover tokens: " ++ show toks'

---- evaluator ----
-- show

type SymTab = M.Map String Double

newtype Evaluator a = Ev (SymTab -> (a, SymTab))

instance Monad Evaluator where
    (Ev act) >>= k = Ev $ \symTab ->
        let (x, symTab') = act symTab
            (Ev act') = k x
        in act' symTab'
    return x = Ev (\symTab -> (x, symTab))

lookUp :: String -> Evaluator Double
lookUp str = Ev $ \symTab ->
    case M.lookup str symTab of
      Just v  -> (v, symTab)
      Nothing -> error $ "Undefined variable " ++ str

addSymbol :: String -> Double -> Evaluator Double
addSymbol str val = Ev $ \symTab ->
    let symTab' = M.insert str val symTab
    in (val, symTab')

main = do
    loop (M.fromList [("pi", pi)])

loop symTab = do
    str <- getLine
    if null str
    then return ()
    else let toks = tokenize str
             tree = parse toks
             Ev act = evaluate tree
             (val, symTab') = act symTab
         in do
             print val
             loop symTab'
Over the last five years, the Web has been stretched perhaps further than at any other time in its history. What was once a largely text-based medium for software programs called "Web browsers" has become an information source for any device that has connectivity. Initially, mobile phones joined the list of devices that could access Web pages, followed by pagers, handheld devices, personal planners, and anything else that could make a wireless connection to the Web. In more recent years, telephony has entered this fray, and the desire to make Web programs accessible over normal phone lines has come into vogue.
This last category of applications -- where a user accesses an online service through a telephone -- is better called a telephone application. Since phones obviously can't be used to "click on a link," application interactions are almost all handled by voice. Instead of clicking a link, a user says "Account Information," or uses the keypad following pre-recorded instructions.
The ability to serve telephones through existing -- or slightly modified -- Web applications is a powerful idea, and one that many Web developers are eager to explore. The most important thing to know about Web and phone applications is that you can use virtually the same technology stack to create both. HTML, XHTML, and XML are three of the most common technologies underlying Web interfaces, and VoiceXML (or VXML) is a closely related technology that makes Web interactions available to phone clients. JavaServer Pages and servlets, PHP scripts, and Ruby applications can all respond to phone requests as easily as those that come in over a handheld or Web browser. In this article, I focus on using the Java platform to serve simple VoiceXML applications, but you can apply much of the discussion equally to PHP, Perl, or your programming language of choice.
VoiceXML, CCXML, or CallXML?
The most commonly used standard for building voice applications is VoiceXML. Most VXML browsers support VoiceXML 2.0, which is the version used throughout this article. VoiceXML 2.0 is a W3C specification, and adoption is expanding rapidly, although most implementations are still catching up with version 2.1. VXML 3.0 is also on the horizon.
CCXML stands for Call Control XML, and is the newest player that meets the W3C specifications for telephony markup. CCXML is more advanced than most VoiceXML implementations, offering support for callbacks, event listeners, and multi-line and multi-party sessions. Unless you specifically need these features, however, you're probably best served by sticking with VoiceXML, which is more stable and in widespread use.
CallXML is a platform specific to Voxeo. CallXML is extremely easy to learn and provides great support for touchtone input (note that it does not support voice recognition). CallXML's big downside is that it is vendor-specific. While Voxeo is a great site with a ton of resources, it's never a good idea to get locked into a specific vendor. Again, most developers will find that VoiceXML meets their needs.
Before getting into the Java side of the VoiceXML picture, you should have a basic understanding of how a VoiceXML application works. To that end, I'll take you quickly through a very simple VoiceXML application. The example app will get you used to seeing VXML files, as well as ensuring you have access to (and can use) the Voxeo call-assignment service, which is crucial to the rest of this article.
VoiceXML begins with at least one VXML file, the VoiceXML-flavored version of XML used to inform telephony applications of what they should (and can) do. Listing 1 is a very simple VXML file. Save this file on your local machine (you can download the complete example source code from the Download section, but you should get used to working with these files yourself, anyway).
Listing 1. A very simple VXML file
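A minimal single-prompt VXML document along these lines would look like the following sketch; the prompt text matches the test call described later in this article, and the rest is a conventional VoiceXML 2.0 skeleton:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <!-- A single prompt; the caller hears this and the call ends -->
      <prompt>Things are working correctly! Congratulations.</prompt>
    </block>
  </form>
</vxml>
```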
This is about as basic as VoiceXML gets; if you're unclear on the syntax, check out some of the other VoiceXML articles listed in Resources. The VXML file in Listing 1 consists of a single prompt and doesn't offer any interactivity; you'll see much more advanced uses of VoiceXML in the sections on working with Java code. For now, use this simple test case to ensure your environment is working correctly.
Next, put your VXML file somewhere publicly accessible. If you have an ISP, just upload the VXML file to your Web site; you might want to create a directory for your VoiceXML files off of your Web root, like /voicexml or /voice. Make sure the directory and file are Web accessible (consult your system administrator or ISP if you're unclear on how to do this).
In the event that you don't have access to an ISP, you can sign up at Voxeo to use the site's File Manager. You should already have created a Voxeo account, and it comes with 10 MB of hosting space, so this is a nice free option. (Ten MB is a lot of VXML files!)
Once your VXML app is online, you might want to ensure you can access it by entering the URL into your Web browser. Depending on your browser, you might be asked to download the XML file, or see it rendered in some form by your browser. This is just a test to ensure the VXML is available, so don't be concerned if your computer doesn't start speaking to you. Once the VXML is online, you're ready to link it to a phone number.
Assign a phone number to your application
Unlike traditional Web applications, you can't just open up a Web browser and surf on over to your VXML file; at least, not if you want a voice response. To test out a phone-based application, you obviously need a phone, and that implies a number to call. There are plenty of high-dollar approaches to mapping numbers to VoiceXML applications, but for testing, staging, and development, Voxeo offers a great free mapping service.
Navigate over to Voxeo.com and log in (using the fields on the upper left side of the page). Under the Account menu, select Application Manager, as shown in Figure 1.
Figure 1. Getting to the Voxeo Application Manager
Choose Add Application, and then select VoiceXML 2.0 as your development platform.
Next, provide the URL for your VXML file, as well as a name for your application; you can use anything you want for the name, as it's for your own reference. Figure 2 shows the settings to access my VXML file. From the Application Phone Number drop-down list, choose the Staging option. This assigns a temporary staging phone number to the application, so you can actually call in from your own telephone.
Figure 2. Mapping a VXML file to a phone number
Click Create Application and Voxeo will assign several phone numbers to your application. Figure 3 shows the resulting screen (scrolled down a bit), with all the different access points to the VXML file.
Figure 3. A successful mapping!
This feature alone is worth the time it takes to sign up for Voxeo; you can now access your VXML file through a toll number, an 800-number, and Skype, just to list a few. This is nice, since you don't have to use the Voxeo tools to test your application. Even better, you can let your boss test things out without needing an account on the Voxeo site!
All that's left is to call one of the numbers that Voxeo supplies. Once you dial, your VXML application should pick up, and let you know (in an unexciting mechanical voice) that "Things are working correctly! Congratulations."
And that's it: in about five minutes, you had your phone talking to an XML file. Now you're ready to get into some Java code, and learn about generating VXML dynamically.
At this point, most Java developers try to hand-code VXML from within their Java servlets, add in hundreds of
out.println() statements, worry about the content type of the output, and generally add a lot of unnecessary complexity to many applications. Before you begin those more complex programming tasks -- all of which are useful when used properly -- get your feet wet with some very basic VoiceXML servlet programming in this section.
To begin with, develop your VXML file. Don't open up an IDE or start coding Java; instead, just fire up a text editor, and resist the urge to immediately add
package and
import statements. Instead, build a simple VXML file, much as you did earlier in this article.
For example, Listing 2 is another pretty basic VXML file. It's a voice-recognition VXML file that takes in a favorite instrument and offers some commentary on the caller's choices.
Listing 2. Another basic VXML file
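A sketch of such a file follows; the instrument names and the commentary strings are assumptions (the original listing isn't reproduced here), but the field/option/filled structure is standard VoiceXML 2.0:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="instruments">
    <field name="instrument">
      <prompt>What is your favorite instrument?</prompt>
      <!-- Each option becomes part of the recognition grammar -->
      <option>piano</option>
      <option>guitar</option>
      <option>drums</option>
      <filled>
        <!-- Offer commentary based on the caller's choice -->
        <if cond="instrument == 'drums'">
          <prompt>Drummers keep the whole band together.</prompt>
        <else/>
          <prompt>A fine melodic choice.</prompt>
        </if>
      </filled>
    </field>
  </form>
</vxml>
```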
Write this VXML, save it, upload it to your ISP, and assign a number to it. Only after you do all of these steps -- ensuring that your VXML works -- are you ready to even begin thinking about coding Java.
If you jump right into Java, you'll probably make mistakes in your output, as well as mistakes in your code. The result is trying to simultaneously debug a VXML file (XML) and a servlet (Java), all within a Web framework, which is notoriously hard to debug in the first place. Rather than add all these variables -- no pun intended -- ensure that you start with a working VXML file. Then you're ready to get your Java code running.
With your VXML ready to use, you're finally set to get into some code. First, begin with a servlet that simply loads the VXML file. Listing 3 is a servlet that does just that -- loads up the VXML developed in Listing 2. There's no output, so don't expect much yet.
Listing 3. Loading a VXML file
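A sketch of that servlet follows. The class name, the constant, and the context-parameter name are assumptions made for illustration; the shape of the code (read the directory from the servlet context, append a constant filename, read the bytes, output nothing yet) follows the description below:

```java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class VoiceXMLServlet extends HttpServlet {

    // The filename is a constant; only the directory is configurable
    private static final String VXML_FILE_NAME = "hello.vxml";

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // The directory is specified in the servlet's configuration context
        String dir = getServletContext().getInitParameter("vxmlDirectory");
        File vxmlFile = new File(dir, VXML_FILE_NAME);

        // Read the file's content; nothing is sent to the client yet
        byte[] content = new byte[(int) vxmlFile.length()];
        FileInputStream in = new FileInputStream(vxmlFile);
        in.read(content);
        in.close();
    }
}
```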
This code is pretty straightforward. It loads up an XML file -- specified through a directory in the servlet's configuration context and a constant filename -- and then iterates over the content of that file. You might hard-code the file's path in the servlet, but it's a much better idea to store at least the directory name in your Web.xml file, located in the WEB-INF/ directory of your servlet's context. Listing 4 shows the context parameter in Web.xml.
Listing 4. The context parameter for the servlet
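The context parameter would look something like this in WEB-INF/web.xml; the parameter name and path are illustrative, and must match whatever your servlet looks up:

```xml
<!-- The directory holding the servlet's VXML files -->
<context-param>
  <param-name>vxmlDirectory</param-name>
  <param-value>/path/to/voicexml/files</param-value>
</context-param>
```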
If you compile your servlet and try to load it in a Web browser, you'll only get a blank screen; still, you should make sure you get at least that. If you get any errors you'll need to correct them. For example, it's common to run into file-access problems or typos in the VXML file path. Once you get a blank screen, you're ready to actually output the VXML file.
Outputting VXML from a servlet
First, you need to get access to an output object, so you can send content to the browser. This is easy enough:
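Something along these lines, assuming the response object from the earlier sketch:

```java
// Grab the raw output stream from the response
ServletOutputStream out = res.getOutputStream();
```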
Spitting out the content from the file itself is easy; you just use a single line of code:
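Assuming the byte array read in the loading code, that line is simply:

```java
out.write(content);  // send the VXML file's bytes to the client
```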
While this might look like it's enough, you still need to let the browser know that you're sending XML to it. Remember, browsers are used to HTML, and some don't happily accept XML. You can set the content type, as well as the length of that content, by using the
HttpServletResponse object again:
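A plausible version of those calls, again assuming the byte array from the loading code:

```java
// Let the client know it's receiving XML, and how much of it
res.setContentType("text/xml");
res.setContentLength(content.length);
```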
Listing 5 shows all of this code added into the servlet you saw back in Listing 3.
Listing 5. The VoiceXMLServlet, completed and ready to load VXML files
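Putting the pieces together, the completed servlet might look like this sketch (names are assumptions carried over from the earlier loading sketch):

```java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class VoiceXMLServlet extends HttpServlet {

    private static final String VXML_FILE_NAME = "hello.vxml";

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // Locate and read the static VXML file
        String dir = getServletContext().getInitParameter("vxmlDirectory");
        File vxmlFile = new File(dir, VXML_FILE_NAME);
        byte[] content = new byte[(int) vxmlFile.length()];
        FileInputStream in = new FileInputStream(vxmlFile);
        in.read(content);
        in.close();

        // Announce XML content, then send the bytes
        res.setContentType("text/xml");
        res.setContentLength(content.length);
        ServletOutputStream out = res.getOutputStream();
        out.write(content);
        out.flush();
    }
}
```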
Testing servlet-loaded VoiceXML
Compile your servlet with these changes, and restart your servlet engine if you need to. Browse to the servlet, and you should see output similar to Figure 4. Success!
Figure 4. The VoiceXML servlet outputting VXML
If you don't get output like this, try to ensure that your file is where you indicated it to be, and that you don't have any permission issues. You might also check your servlet engine's logs, or ask a system administrator for help.
Now you're ready to map a phone number to your servlet. Head back over to Voxeo.com's Application Manager and add a new application (you'll probably see the applications you worked with earlier). Make sure you select VoiceXML 2.0, and then enter a name for the new application and the URL for your servlet. Voxeo will create your application and assign it a phone number.
Dial in to this new number and you should hear the prompt from the VXML back in Listing 2. Congratulations! You've just coded a Java servlet that outputs VXML, and hooked a phone number into it.
You might want to make a couple of minor additions to your servlet code. Neither is required, but both add a bit of robustness and documentation to the existing version.
First, you might want to allow users to access the VXML through a POST request. This could occur if a user clicked a button on a form, and that form made a POST request to the
VoiceXMLServlet. It's a pretty simple operation to handle in the servlet; just write a version of
doPost() that delegates to the
doGet() method you already have, as shown here:
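The delegation is a one-liner inside a standard doPost() override:

```java
public void doPost(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    // POST requests get exactly the same treatment as GET requests
    doGet(req, res);
}
```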
Another simple addition is to actually let browsers know that you're outputting the content of a VXML file. To do this, set the
Content-disposition response header in your servlet, like so:
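For example, assuming a VXML_FILE_NAME constant that holds just the file's name (not its full path):

```java
// Advertise the served filename -- never the complete file path
res.setHeader("Content-disposition", "inline; filename=" + VXML_FILE_NAME);
```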
Now browsers (or other code) reading your response can discover the VXML file that was served. Be sure not to include your complete file path, though; that is a security risk!
Once you have a servlet that outputs a VXML file, it's a pretty small task to move from that -- using the code as a model or template -- to a servlet that outputs VXML dynamically. In other words, you can move away from simply loading a static VXML file, and start programmatically creating VXML.
The Java platform really begins to shine when you get into dynamic VoiceXML. It provides the ability to easily output XML, as well as interact with databases, directory servers, authentication stores, and sessions. As it turns out, building dynamic VXML gets rid of some of the formality of voice-based systems, as well.
In this section, I walk you through creating a Java servlet that outputs dynamic VXML.
Outputting VXML through out.println()
You've already seen how to get access to the
ServletOutputStream, and then insert bytes into that output stream. However, dealing directly with bytes isn't nearly as manageable when you're not just transferring bytes from a source (like a static VXML file) to an output stream.
In cases where you want to create the VXML on your own, you're better off working with a
PrintWriter. You can push entire strings out with that class, making it much more useful for creating and outputting dynamic content. It only requires a small change to your code, as shown below.
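The change is simply to ask the response for a writer instead of a byte stream:

```java
// Use a character-oriented writer instead of ServletOutputStream
PrintWriter out = res.getWriter();
```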
Don't forget to also import the
java.io.PrintWriter class: it's not automatically available to your servlet's code base.
With a
PrintWriter, you can now output string-based content. For example, Listing 6 outputs the same VXML you saw back in Listing 1, except through a servlet, and without loading the VXML content from a static file.
Listing 6. Dynamically outputting VXML
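A sketch of that servlet follows; the prompt is the same one used in the earlier test, and the class name is an assumption:

```java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class DynamicVoiceXMLServlet extends HttpServlet {

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/xml");
        PrintWriter out = res.getWriter();

        // Emit the same VXML as Listing 1, but built in code
        out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
        out.println("<vxml version=\"2.0\">");
        out.println("  <form>");
        out.println("    <block>");
        out.println("      <prompt>Things are working correctly! " +
                    "Congratulations.</prompt>");
        out.println("    </block>");
        out.println("  </form>");
        out.println("</vxml>");
    }
}
```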
You can compile this servlet, register it with Voxeo, and access it by phone, just as you did with Listing 1. Now let's move on to some examples that show off the dynamic programming capabilities of a language like Java.
One of the simplest things you can do with servlet-based VXML output is add some awareness of time. It's trivial to grab the current time and date with Java code, so that's a great place to start.
Using the
Calendar class you can easily get the hour of the day (or anything else related to the current date, really). Listing 7 demonstrates code to get a new instance of the
Calendar class, obtain the hour of the day (which is returned in a 24-hour format), and then put together a simple greeting based on that hour.
Listing 7. Generating a time-based greeting
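Since the original listing isn't shown, here's a self-contained sketch of the hour-based greeting logic; the greeting wording and hour boundaries are assumptions. In the servlet, the returned string would be wrapped in a VXML prompt, as in Listing 6:

```java
import java.util.Calendar;

public class Greeting {

    // Map an hour of the day (0-23) to a greeting
    static String greetingForHour(int hour) {
        if (hour < 5)  return "You're up late. Good night.";
        if (hour < 12) return "You're up early. Good morning.";
        if (hour < 18) return "Good afternoon.";
        return "Good evening.";
    }

    public static void main(String[] args) {
        // HOUR_OF_DAY is in 24-hour format
        int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
        System.out.println(greetingForHour(hour));
    }
}
```

Note that Calendar.getInstance() uses the server's time zone; the sections below discuss why that matters.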
Differences between callers and VXML generators
Listing 7 also showcases another feature of VoiceXML and dynamically generated VXML: the possible disparity between the caller and the VXML itself. For instance, suppose a user living in New Zealand calls into the application shown in Listing 7. If it's 10:00 PM in New Zealand, but the server outputting the VXML is in Denver, Colorado, they'll probably be greeted with an odd message, such as "You're up early. Good morning." That's hardly appropriate, and it could get worse: if you've added in greetings for specific days of the week, you're really going to have some mismatches.
The basic problem stems from the VXML and Java running in the locale and time zone of a specific server, but being available to callers from all over the world. If your servlet doesn't take this into account, you're going to have some rather confused callers. You have several options:
- Ignore the difference and hope callers understand that your server just isn't running code in their time zone.
- Explicitly state that times and dates are local to the server; for instance, your afternoon greeting might be, "It's afternoon here. I hope you're having a nice day as well."
- Write code that asks for a time zone or the offset from GMT, and then develop greetings based on that information.
Unfortunately, none of these turns out to be an attractive option. The first does just what it says: basically ignores the caller. It should go without saying that ignoring callers is not the way to get and maintain a business. The second idea -- stating the local time and explicitly noting that it's local -- is not much more helpful, as it still tends to ignore the caller; it's just a little more considerate in the process.
The final option might seem at first to be attractive; it's easy to write VXML to allow the user to supply a number offset from GMT, and then respond based on that. However, callers tend to like to get to information as quickly as possible; the more response prompts you have, the greater risk of annoying your callers and having them hang up unsatisfied. Therefore, unless you are providing a time- or date-based service, requiring a caller to indicate the time zone is a waste of one of those prompts. Even worse, many callers don't know their offset from GMT, so you're faced with supporting time zones, time zone abbreviations, daylight savings time ... the list can get long and unwieldy.
So why even play with date-based VXML generation? Largely because it illustrates these very issues! You need to be very conscious of your audience, and try to give them information that is relevant to them, not to your server or your locality.
In the case of date-based processing, the lesson is that you should probably employ a final, better option for dealing with callers and avoid date- and time-based transactions altogether, unless absolutely necessary. If you expect callers from outside your time zone, you're just asking for trouble by trying to provide a time-related feature. The same principles apply for any data that might change across state, country, or continental lines.
Finally, there are obviously plenty of times when using a servlet for outputting VXML is not that great of an idea. If you're just spitting out VXML from a static file, you gain very little (perhaps a bit of flexibility) but add code, compilation, debugging, a servlet engine, and a lot more to the complexity of your voice app. In these simple cases, stick with using static VXML files.
As you've seen so far in this article, sometimes a servlet-generated VXML does not make sense. Before finishing up, though, consider several cases where using a language like Java is a great telephone application solution. I won't provide full examples here, but look for them in future articles.
The most obvious application of Java concerning VoiceXML is using a database to feed a dynamic VXML output. This is probably what many of you expected to learn about when you began this article (although you wouldn't have learned as many lessons if that was the core example). In any case, JDBC makes it simple to connect to a database, and then use results from SQL queries to populate VXML.
For example, you could develop a table that contained all the grammar information for your VXML, and then load that grammar into each VXML file you output. Rather than having to code grammar for each and every VXML file, you can share a grammar between similar files. Even better, you can pre-load these grammars across all servlets, or instances of a particular servlet, and have the benefit of storing your grammars in a database without paying the cost for loading that grammar on each and every request.
Another nice Java feature -- particularly as it relates to servlets, JSPs, and Web-based programming -- is the ability to store user credentials in a session. This gets you firmly into authentication and authorization, as well as very highly customized content.
For example, consider a voice application that begins by asking for a user ID number and a PIN (like most banking or financial applications do today). You can authenticate these credentials against a database -- already a notable strength of the Java platform -- and then store the caller's ID into a session variable. Then, each Java servlet or JSP that fields a request from that caller can figure out what options to offer the user based on those credentials.
While plenty of VoiceXML alternatives offer similar functionality, very few boast the ability to share code with Web-based versions of their applications. In other words, the Java platform lets you share not only a database between a VoiceXML and Web-based version of an application, but code components. Your VXML-producing servlets can use the same authentication and permissions utility classes as your HTML- and XHTML-producing servlets; your JSPs that respond to phone calls can share cached database connections with your JSPs that handle HTTP requests. As a result, you end up with an application infrastructure that can handle multiple types of clients, rather than having to create an entire application for each client type.
In this article I barely scratched the surface of the things you can do with VXML and the Java platform. I introduced the process of developing VXML, and then showed you how to integrate Java technology into that process. Along the way I dropped hints as to all kinds of interesting ways you can use Java code to develop rich, dynamic VoiceXML applications.
I also let you see some of the really common ways that VoiceXML developers misuse Java technology in voice applications. Trying to get clever with dates and times, attempting locale-based services, and forgetting about the difference between a server's local time and a caller's local time are sure ways to alienate and frustrate users. Consider Java a tool for VoiceXML, but not a showcase for the Date and Calendar classes.
I'll continue writing about these topics and more in future articles, starting with the principles laid down here and expanding on them. If you want to know more about building rich voice applications, developing telephone apps that interact with databases, tracking users, and providing individualized content, keep an eye on this space. Also, go back to Voxeo.com and try getting a servlet or two to serve up your VXML. Then come back here next month for more.
Learn
- X+V is a markup language, not a Roman math expression (Les Wilson, developerWorks, August 2003): Explore your options with a look at X+V (XHTML plus Voice) -- a Web markup language for developing multimodal applications.
- Multimodal interaction and the mobile Web, Part 1: Multimodal auto-fill (Gerald McCobb, developerWorks, November 2005): Take the first steps into multimodal interaction development.
- Speech-enable Web apps using RDC with Voice Toolkit (Girish Dhanakshiru, developerWorks, March 2005): WebSphere developers can use the Voice Toolkit and Rational Application Developer (RAD) to add speech to any existing Web application.
- Start developing CCXML applications (Susan Jackson and Hannah Parker, developerWorks, June 2004): A tutorial introduction to the Call Control XML (CCXML) language.
- Choosing a platform: Voxeo explains the difference between CallXML, CCXML, and VoiceXML.
- W3C's Voice Browser Activity page: Includes links to specs, FAQs, tools, and helpful articles.
- VoiceXML 2.1: Read the W3C candidate recommendation.
- The CCXML Version 1.0 specification: Features the latest developments in call control technology.
- Java and XML, Second edition (Brett McLaughlin; O'Reilly Media, Inc., 2001): Includes material on XHTML, serving XML on the Web, and serving content to multiple types of devices.
- XML in a Nutshell, Third edition (Elliotte Rusty Harold, W. Scott Means; O'Reilly Media, Inc., 2004): A great all-in-one XML resource, with a chapter devoted to XML on the Web.
- Web Architecture zone: Find articles and tutorials on various Web-based solutions.
- developerWorks XML zone: Find hundreds of XML-related articles and tutorials.
- developerWorks Wireless technology zone: Explore content about a variety of Wireless solutions.
- developerWorks Java technology zone: Check out numerous, Java-related articles and tutorials.
Get products and technologies
- Voxeo.com: An excellent source of VoiceXML information.
- Voxeo Community Tools: A starting point for finding VoiceXML-related add-ons and utilities.
- IBM trial software: Available for download directly from developerWorks.
Discuss
- Get involved in the developerWorks community by participating in developerWorks blogs.
PBS 28 of x – JS Prototype Revision | CSS Attribute Selectors & Buttons
Filed Under Computers & Tech, Software Development on January 20, 2017 at 10:00.
Solution to the PBS 27 Challenges
My sample solution follows the template described at the end of the previous instalment.
I’d like to draw your attention to a few aspects of the solution – firstly, the pbs.DateTime prototype is by far the simplest of the three, because it leverages the code in the other two. Because the data attributes (this._date & this._time) are instances of the pbs.Date and pbs.Time prototypes, the functions from those prototypes can be leveraged. You really see this in action in the implementations of functions like american24Hour():
When it comes to the years, I have implemented them as a whole number, which I allow to be negative or zero. In our day-to-day way of writing years, there is no year zero. The year before 1CE (or 1AD if you prefer the Christian-centric view of time) was not 0CE, or 0BCE (or indeed 0AD or 0BC), it was 1BCE (or 1BC).
We could store our year as a whole number with a sign, and throw an error if someone tries to use zero, but then maths stops behaving properly. You’d like to be able to subtract two years from each other to determine how far apart they are. If you implicitly skip zero then you start to get the wrong answer from simple subtractions when ever one number is positive and the other is negative.
The solution to this dilemma is to use so-called Astronomical Year Numbering, and that’s what my code does. When storing dates, you store them as whole numbers with a sign, and allow zero. All positive numbers represent CE years, while zero and the negative numbers represent BCE years offset by one. So 1 is 1CE, 0 is 1BCE, -1 is 2BCE, and so on.
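A tiny stand-alone sketch (the function name is mine) of the mapping between astronomical years and the human-friendly rendering:

```javascript
// Astronomical year 1 is 1CE, 0 is 1BCE, -1 is 2BCE, and so on.
function astronomicalToHuman(y){
    if(y > 0){
        return y + 'CE';
    }
    return (1 - y) + 'BCE'; // shift by one to skip the non-existent year 0
}
console.log(astronomicalToHuman(1));  // 1CE
console.log(astronomicalToHuman(0));  // 1BCE
console.log(astronomicalToHuman(-1)); // 2BCE
```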
Internally, my solution stores years as astronomical years so that the maths works, but, when generating strings, my code renders years in CE or BCE. This is done by checking whether or not the year is less than or equal to zero, and if it is, subtracting one to get the correct BCE year. You can see an example of this in my implementation of the .european() function:
Now lets look at what’s not so good about my solution.
Firstly, this code has a number of so-called bad smells (an actual software engineering term). My solution contains a lot of duplicated code, and yours probably does too. There is definitely scope for re-organising some of that repeated code into helper functions, or, to use the fancy software engineering term, for refactoring the repeated code into a number of helper functions.
There’s a lot of testing to see whether a given value is an integer within a given range – we need to make sure hours are whole numbers between 1 and 23, minutes and seconds are whole numbers between 0 and 59, and so on. Let’s write a little helper function to take care of all those cases in one go.
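The original listing isn’t reproduced in this copy; a helper along the lines described (the name and exact behaviour are my guess) might look like:

```javascript
// Returns true if v is an integer (or an integer string), optionally
// bounded below by min and above by max.
function isValidInteger(v, min, max){
    if(String(v).match(/^-?\d+$/) === null){
        return false;
    }
    var n = parseInt(v, 10);
    if(typeof min === 'number' && n < min){ return false; }
    if(typeof max === 'number' && n > max){ return false; }
    return true;
}
console.log(isValidInteger(30, 0, 59)); // true  - a valid minute
console.log(isValidInteger(24, 0, 23)); // false - not a valid hour
```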
We can now refactor our accessor methods to use this function, e.g. the accessors from pbs.Time could be re-written like so:
I’ve shown this function in isolation, but that still leaves us with a really important question – where should you place it within your code?
We could place it at the very top of our code, above the namespace and the self-executing anonymous function within which we define our prototypes, or, we could put it as the first thing within the self-executing anonymous function. In both cases, the code would run, so which is the right thing to do?
If we place it outside the self-executing anonymous function, it will be in the global scope. It’s precisely to avoid this kind of littering of the global scope that we introduced the concept of self-executing anonymous functions, so, the correct place to put these kinds of helper functions is inside the self-executing anonymous function.
Also notice that I have placed all three of my prototypes within the same self-executing anonymous function. If you placed each in its own function, then they would not share a scope, so you couldn’t use the same helper functions within all three prototypes. It’s for exactly this reason that I placed the three prototypes within the same self-executing anonymous function.
The next big issue we have is with validation of the days in the pbs.Date prototype. The following does not currently throw an exception, and it really should:
How can we resolve this? Clearly, there is going to have to be some kind of linkage between the month and day parts of the date. When changing the month or the day, we need to check that the pair together are valid, and if not, we need to act.
The first thing we’ll want to create is a private lookup table storing the number of days in each month. Like with the helper functions, we don’t want this littering the global scope, so it too should be defined within the self-executing anonymous function:
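The listing itself is missing from this copy; the table would be something like the following (February’s 28 is just a placeholder – it gets special-cased for leap years):

```javascript
// Private lookup table mapping month number to days in that month.
var DAYS_IN_MONTH = {
    1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
    7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31
};
console.log(DAYS_IN_MONTH[9]); // 30
```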
This will allow us to deal with 11 of the 12 months in the year quite easily, but what about February? We need to know the year to know how many days there are in February! So, we actually need to validate the combination of day, month and year each time we update any one of them.
This calls for another helper function!
According to Wikipedia, the Gregorian calendar we use today came into use in 1582. We could write our code so it uses the Julian calendar for years before 1582, but that would get very complex very quickly. Instead, we’ll use the Proleptic Gregorian calendar – that is, our modern calendar projected backwards as if it had always been in use.
That gives us the following rules for calculating leap years:
- A year divisible by 4 is a leap year (has 29 days in February)
- Years divisible by 100 are exceptions to rule 1, and not leap years
- But years divisible by 400 are exceptions to rule 2, and actually are leap years
Below is a sample implementation:
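The sample listing didn’t survive into this copy; an implementation of the three rules (the helper name is mine) would be:

```javascript
// Proleptic Gregorian leap-year test - rules checked from the most
// specific exception down to the general case.
function isLeapYear(y){
    if(y % 400 === 0){ return true;  } // rule 3
    if(y % 100 === 0){ return false; } // rule 2
    return y % 4 === 0;                // rule 1
}
console.log(isLeapYear(2016)); // true
console.log(isLeapYear(1900)); // false (divisible by 100 but not 400)
console.log(isLeapYear(2000)); // true  (divisible by 400)
```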
Notice that I have not added any data validation on the arguments to this private helper function. This is because it is impossible for an end-user of our prototypes to access this function directly – it exists only within the self-executing anonymous function.
We can now go back and alter our accessor functions so they prevent invalid dates from being added. While in there, we can also fix another subtle bug – we should ensure that the data is all saved as numbers, not as string representations of valid numbers.
This now brings along a new problem – at the moment our prototype only allows days, months, and years to be set one by one, so there are edge cases where converting from one valid date to another valid date in one order will fail, but doing the same conversion in the other order will succeed.
E.g. The following code looks perfectly valid, but will throw an exception:
However, the following will succeed:
Why?
The reason is subtle, but important, and shows a shortcoming in our current prototype design.
When you call the constructor with no arguments, the date is set to 1 Jan 1970. When you call .day(29) on that object, you are setting the date to 29 Jan 1970, which is fine, but when you call .month(2), you are setting the date to 29 Feb 1970, which is invalid because 1970 is not a leap year!
Why does the same thing in a different order succeed? By changing the year, then month, then day the object goes from 1 Jan 1970, to 1 Jan 2016, to 1 Feb 2016, to 29 Feb 2016, so it never passed through an invalid state.
How can we update our prototype to address this limitation?
We clearly need some kind of accessor function that accepts three arguments, validates the three together, then updates the three internal values (this._day, this._month, and this._year).
The solution is to write a new accessor method that accepts three arguments, allowing all three values to be updated and validated in one go. We could write a whole new function, but we already have two functions for reading the entire date, .american() and .european(), so why not update those to optionally accept three arguments in the appropriate order?
Notice that in the above samples we use code like myDate.year(2016).month(2).day(29). This is an example of so-called function chaining, and it is only possible because our accessors return this when used to set a value.

Remember that we evaluate from left to right, so the first thing to happen is that myDate is looked up. It is a reference to an object with the prototype pbs.Date. Next, the dot operator applies the function year() from the pbs.Date prototype to whatever is to its left, i.e. the myDate object. The year() function returns this, so myDate.year(2016) returns the myDate object. At this stage in the evaluation, the line has effectively become myDate.month(2).day(29). The dot operator happens again, and the month() function from the pbs.Date prototype gets applied to the myDate object. Again, because myDate is the object on which the function is being invoked, within the function this is a reference to the myDate object. So, when month() again returns this, the value being returned is a reference to the myDate object yet again. The line has now become equivalent to myDate.day(29). The dot operator fires one last time, and applies the day() function from the pbs.Date prototype to the myDate object.

So, because we return this within all our accessors when setting a value, and only because we do that, the single line myDate.year(2016).month(2).day(29) is entirely equivalent to:
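The closing listing isn’t included in this copy – it would simply be the three separate calls myDate.year(2016); myDate.month(2); myDate.day(29);. As a self-contained illustration of why returning this makes the chained form work:

```javascript
// Minimal stand-in prototype: each setter stores a value and returns
// `this`, which is what makes the chained call below possible.
function Demo(){ this._v = {}; }
Demo.prototype.set = function(k, v){
    this._v[k] = v;
    return this; // without this line, .set(...).set(...) would fail
};
var d = new Demo();
d.set('year', 2016).set('month', 2).set('day', 29);
console.log(d._v); // { year: 2016, month: 2, day: 29 }
```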
A Challenge
Using either your own solution to the previous challenge, or my sample solution above, as your starting point, make the following changes.
First, add private helper functions to do the following, and re-factor your code to make use of them:
- A function to validate integers – it should accept optional upper and lower bounds on the values (you can use the sample function above)
- A function that takes a number and converts it into a string of a given length – if the length is greater than the number of digits, zeros should be added to the front of the string until it is long enough. Update the various functions for rendering dates and times as strings to make use of this function
You’ll know you have succeeded if the test code from the three sections of the previous challenge continues to work:
Next, update the pbs.Date prototype so both the .american() and .european() functions continue to work as they do now when called with no arguments, but update the internally stored date (with validation) when called with three arguments. You should update .american() so it accepts the arguments in the American order (M, D, Y), and .european() so it accepts them in the European order (D, M, Y). When called with three arguments, both functions should return a reference to this so as to enable function chaining. Try writing your code in such a way that you avoid code duplication. A productive approach would be to implement one of these functions, and then call that one from the other when called with arguments.
You’ll know your updated prototype is working when the following test code succeeds:
Finally, add two more functions to your pbs.Date prototype with the following details:
- A function named .international() that behaves like the updated versions of .american() and .european(), but orders the date as Y, M, D.
- A function named .english() that returns the date as a human-friendly string like 2nd of March 2016. Unlike .american() etc., this function does not need to allow the currently stored date to be updated. You may find it helpful to add some private lookup tables to aid you in your work.
You can test your functions with the following code:
The CSS Attribute Selectors
It’s been a long time since we’ve learned a new CSS selector, but now that we’re moving on to HTML forms, there’s a whole class of CSS selector that it would be good to know about – the attribute selectors.
Attribute Presence ([attribute_name])
The simplest attribute selector is [attribute_name] – it will match all elements with a value for the attribute attribute_name. So, to add a green border around all images that have a title you could use CSS something like:
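The stylesheet listing didn’t make it into this copy; a rule matching the description would be along these lines:

```css
/* Any image with a title attribute gets a green border */
img[title] {
  border: 4px solid green;
}
```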
Attribute Value Equals ([attribute_name="some_value"])
You can style elements based on a given attribute having an exact value with this selector. For example, to turn all links with a target of _blank purple we could use something like:
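Again, the listing is missing from this copy; it would be something like:

```css
/* Links that open in a new window/tab */
a[target="_blank"] {
  color: purple;
}
```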
Attribute Value Begins With ([attribute_name^="some_value"])
You can style elements based on the value for a given attribute beginning with a given value. For example, you could turn any link with an href that begins with https:// green with something like:
Attribute Value Ends With ([attribute_name$="some_value"])
You can style elements based on the value for a given attribute ending with a given value. For example, you could add a red border to any image with an src attribute that ends in .gif with something like:
Attribute Value Contains ([attribute_name*="some_value"])
You can style elements based on the value of a given attribute containing a given value as a sub-string using this selector. For example, you could add a green border to any image whose alt attribute contains the word boogers with something like:
Attribute Value Contains Word ([attribute_name~="some_word"])
Some HTML attributes can contain a space-delimited list of values – for example, the rel attribute on links. We know it can contain noopener to specify that a window opened by clicking the link should not get a JavaScript opener object. But we can also set the rel attribute to nofollow to tell search engines not to follow the link when crawling the site. To specify that a link should have rel values of both noopener and nofollow, you would place both values into the same attribute separated by a space, like so:
If we want to make all links with a rel of nofollow grey, regardless of whether they also specify other values, and regardless of the order those values were specified in, we would use the [attribute_name~="some_word"] selector like so:
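The rule itself is missing from this copy; it would be along the lines of:

```css
/* Matches rel="nofollow" as well as rel="noopener nofollow",
   in any order */
a[rel~="nofollow"] {
  color: grey;
}
```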
The above selector would turn all the following links grey:
Like with all other selectors, the attribute selectors can be combined with the selectors we already know, so you could style all links with the class pbs that have a href attribute that starts with https:// with the selector a.pbs[href^="https://"].
The HTML 5 button Tag
A button is a clickable inline element. In general, most buttons just contain text, but they can contain other HTML elements.
You should always specify a type attribute on your buttons. You can choose from the following values:
type="submit" (the default)
- Clicking on the button will submit the form it belongs to.
type="reset"
- Clicking on the button will reset all form inputs within the form the button belongs to to their initial values.
type="button"
- A plain button that will do nothing unless a JavaScript event handler is added to it.
As mentioned in the previous instalment, if no type is supplied, or an invalid value is specified, type="submit" is assumed.
Buttons can also contain a value attribute. This attribute has no visible effect on the button, but it can be accessed via JavaScript and jQuery, and it will be passed to the server when a form is submitted.
Buttons can be styled with CSS, and the CSS attribute selectors can be used to style different types of button differently. It’s common to use different colours for the different types of button, and, to use a bold font on submit buttons.
In this instalment’s ZIP file you’ll find just one HTML page, and a few images. Below is the code for the page, which contains nine buttons in three sets of three. First, un-styled examples of each of the three kinds of button, then styled examples of each kind of button, and finally, one of each kind of button where images are used to make the buttons easier to understand.
Final Thoughts
We have still only touched the tip of the web form iceberg. We’ll start the next instalment by showing some drawbacks to using image files for icons within buttons and other form elements. There is a better way to include useful pictograms, and we’ll learn all about it. We’ll also learn how to tell a screen reader that a piece of a web page is just decoration, and that it should be hidden from screen readers so as to give visually impaired users a better experience.
We’ll also continue on our revision of JavaScript prototypes in parallel with all that.
We are not satisfied with the classical xUnit way of setup and teardown. We prefer the concise approach of py.test over the verbosity of the standard unittest.
We found ourselves copying and pasting the same boilerplate code from one test to another or creating extensive structure of test class hierarchy.
py.test fixtures, injected into test functions as parameter names, are a different approach to fixture management. It’s neither worse nor better, but we found it to be not as flexible as we need.
Some questions that we often wanted to solve looked like:
Sure enough, we can handle or work around all these issues somehow with xUnit setups and teardowns or py.test fixtures, but we wanted something more flexible, easy and convenient to use. That’s why we created resources library.
First, we define functions which we call “resource makers”. These makers are responsible for creating and destroying resources. It’s like setup and teardown in one callable.
from resources import resources

@resources.register_func
def user(email='joe@example.com', password='password', username='Joe'):
    user = User.objects.create(email=email, password=password, username=username)
    try:
        yield user
    finally:
        user.delete()
The flow is simple: we create, we yield, we destroy.
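That create/yield/destroy shape is exactly what Python’s contextlib.contextmanager formalises. Here’s a self-contained sketch (using a toy dict instead of a Django model) of how a maker function becomes a context manager:

```python
from contextlib import contextmanager

events = []

@contextmanager
def user_ctx(username='Joe'):
    user = {'username': username}   # create
    events.append('created')
    try:
        yield user                  # hand the resource to the test body
    finally:
        events.append('destroyed')  # tear down even if the test fails

with user_ctx(username='Mary') as user:
    assert user['username'] == 'Mary'

print(events)  # ['created', 'destroyed']
```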
We get a number of resource makers, and we group them into modules, like tests/resources_core.py, tests/resources_users.py, etc.
Then, in a test file, where we plan to use resources, we import the same global object, load resource modules we need, and activate them in tests.
from resources import resources

resources.register_mod('tests.resources_core')
resources.register_mod('tests.resources_users')

def test_user_properties():
    with resources.user_ctx() as user:
        assert user.username == 'Joe'
This is where a little bit of magic happens. Once you define and register the resource maker with name foo, a context manager foo_ctx is created for your convenience. This context manager creates a new resource instance with the corresponding maker function, and destroys the object the way you defined, once the code flow abandons a wrapping “with”-context.
At this point it’s maybe not so exciting. Yeah, everyone can write code like this; the difference is that we actually did it :-). We also have a bunch of nifty features making the whole thing more interesting.
Contexts are better than py.test fixtures, because they are customizable. Provide everything you need to the context manager, and it will be passed to the resource maker function as arguments.
def test_user_properties():
    with resources.user_ctx(username='Mary') as user:
        assert user.username == 'Mary'
We need to have access to resources at different stages of our tests: to get access to object’s properties and methods, to initiate another, dependent fixture instance, and finally to tear down everything.
As soon as you enter the context with resources.foo_ctx() a variable resources.foo will be created and will be available from everywhere, including your test function, and other resource makers.
The latter fact is especially important, because it’s the way we manage dependent resources. Yet we need some conventions about which resource is created first, and so on.
@resources.register_func
def todo_item(content='Foo'):
    item = TodoItem.objects.create(user=resources.user, content=content)
    try:
        yield item
    finally:
        item.delete()
We agreed that we create user resource first, and todo item afterwards, and created a new resource maker, taking advantage of this convention.
We use it like this:
def test_todo_item_properties():
    with resources.user_ctx(), resources.todo_item_ctx():
        assert resources.todo_item.content == 'Foo'
By the way, if you are still stuck with Python 2.6, several context managers in the same “with” expression aren’t available to you yet. Use contextlib.nested to avoid deep indentation.
Sometimes we need to create a couple of resources of the same type, instead of just one instance. It’s not a problem if you don’t want to use the global namespace to get access to them. Otherwise you must create a unique identifier for every resource.
Actually, it’s trivial. All you need to do is provide a special _name argument to the context manager constructor. This argument won’t be passed to your resource maker function.
def test_a_couple_of_users():
    with resources.user_ctx(username='Adam', _name='adam'), \
         resources.user_ctx(username='Eve', _name='eve'):
        assert resources.adam.username == 'Adam'
        assert resources.eve.username == 'Eve'
Context manager can work as a decorator too. When we use it like this, an extra argument will be passed to the function.
@resources.user_ctx()
def test_user_properties(user):
    assert user.username == 'Joe'
We should say that usually it works, but to make it work along with py.test, which performs deep introspection of function signatures, we made it with some “dirty hacks” inside, and you may find that in some cases a chain of decorators dies with a misleading exception. We’d recommend using context managers instead of decorators wherever possible.
Yes, we do use setup and teardown methods too. If every function in your test suite uses the same set of resources, it would be counterproductive to write the same chain of decorators or context managers over and over again.
In this case we use another concept: resource managers. Every resource maker foo creates the resources.foo_mgr instance, having start and stop methods. The start method accepts all arguments which the foo_ctx function does, including special _name argument. The stop method has only one optional _name argument, and is used to destroy previously created instance.
Here is a py.test example
def setup_function(func):
    resources.user_mgr.start(username='Mary')

def test_user_properties():
    assert resources.user.username == 'Mary'

def teardown_function(func):
    resources.user_mgr.stop()
Sometimes it’s nice to take a look at what’s going on within a test function and get access at some point to a Python console or debugger.
Usually you probably do something like
import pdb; pdb.set_trace()
Or, if you need to get shell and have IPython installed
from IPython import embed; embed()
As this happens often, we added two functions to resources, launching either the debugger or a Python console inside your test function.
from resources import resources

def test_something():
    resources.pdb()    # to launch the debugger
    resources.shell()  # to launch a Python REPL
If you install IPython and ipdb (pip install IPython ipdb), you get more friendly versions of consoles, otherwise resources fall back to built-in python console and debugger.
Launch py.test with -s switch to be able to fall into interactive console.
It’s especially cool that resources object is autocomplete-friendly and it works well in IPython
In [1]: resources.
resources.john           resources.pdb            resources.register_mod
resources.mary           resources.register_func  resources.shell

In [1]: resources.mary
Out[1]: {'name': 'Mary Moe'}

In [2]: resources.user_mgr.start()
Out[2]: {'name': 'John Doe'}

In [3]: resources.todo
resources.todo_item_ctx  resources.todo_item_mgr

In [3]: resources.todo_item_mgr.start()
Out[3]: {'text': 'Do something', 'user': {'name': 'John Doe'}}

In [4]: resources.todo
resources.todo_item      resources.todo_item_ctx  resources.todo_item_mgr

In [4]: resources.todo_item
Out[4]: {'text': 'Do something', 'user': {'name': 'John Doe'}}
This feature is not something unique to resources module. Pretty much every object can act this way, but it is handy to have a convention about the way you store your test-related constants.
It may work like this.
resources.TEST_DIRECTORY = '/tmp/foo'
resources.DOMAIN_NAME = 'example.com'
resources.SECRET_KEY = 'foobar'
And then, in the test file.
from resources import resources

resources.register_mod('<a resource module name here>')

def test_constants():
    assert resources.TEST_DIRECTORY == '/tmp/foo'
    assert resources.DOMAIN_NAME == 'example.com'
    assert resources.SECRET_KEY == 'foobar'
The resources library works for us in a py.test environment. We don’t see any reason why it shouldn’t work the same way with nose or classic unittests. It works with Python versions 2.6, 2.7 and 3.3.
Please bear in mind that the library is not thread safe, as we are happy with single threaded tests at this time.
And after all… Seven extra features to improve your test suites for free! What are you waiting for? It has already improved the quality of our lives at Doist Inc, and we hope it will do the same for yours.
CAPM
According to the CAPM, in equilibrium, the expected return of a portfolio is equal to the risk-free rate plus a risk premium that is proportional to its beta. Because at any given time the risk-free rate can be assumed to be given and can be treated as constant, investors frequently are interested in calculating the risk premium that is included in the expected return of the portfolio. The risk premium can be obtained by subtracting the risk-free rate from the expected rate of return.
That is, the expected portfolio risk premium is determined:
Rp − T = βpm (Rm − T)
Where:
Rp = Return on portfolio p
T = Riskless rate of interest
βpm = Sensitivity of portfolio p’s excess return to the excess return of the market portfolio. In this form, the origin is defined as the point where the expected excess return (over the riskless rate) is zero for both the portfolio and the market. The result is that the characteristic line, which still has the slope βpm = 2, now passes through the origin.
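A quick worked example (the numbers are illustrative, not from the text): with a riskless rate of 3%, an expected market return of 8% and a beta of 1.2, the expected risk premium is 1.2 * (8% − 3%) = 6%.

```python
T = 0.03       # riskless rate
Rm = 0.08      # expected market return
beta = 1.2     # portfolio beta

premium = round(beta * (Rm - T), 4)  # Rp - T
print(premium)  # 0.06, i.e. a 6% expected risk premium
```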
Transtutors is the best place to get answers to all your doubts regarding CAPM vs the market model.
How to handle ArrayIndexOutOfBoundsException in Java?
How do you handle the exception thrown by the program below?
public class ExceptionDemo {
    public static void main(String[] args) {
        int[] arr = new int[10];
        arr[10] = 10;
    }
}
Since it throws ArrayIndexOutOfBoundsException which is an unchecked exception, we don't need to worry about it as it is not mandatory to handle it.
We have to put try{ .. }catch(ArrayIndexOutOfBoundsException e){..} to handle this exception.
there is no exception in this program to handle.
Since it throws ArrayIndexOutOfBoundsException which is an unchecked exception, we should not handle it by using try-catch block, rather we have to debug this program and fix that issue.
int[] arr = new int[10];
above line says it is an array of 10 integers.
Arrays always start with the 0th index, so the last index will be the 9th index.
arr[10] = 10;
But here the programmer is trying to access the 10th index, which is not available.
So Java throws an exception on this line.
The exception thrown is ArrayIndexOutOfBoundsException.
http://skillgun.com/question/3012/java/exceptions/how-to-handle-arrayindexoutofboundsexception-in-java-how-do-you-handle-the-exception-thrown-by-below-program-public-class-exceptiondemo-public-static-void-mainstring-args-int-arr-new-int10-arr10-10
11 January 2010
Prerequisite knowledge: Some familiarity with ActionScript 3.
User level: Beginning
Note: For the purposes of this article series, create a folder named pixel_bender and save it on your desktop. As you follow along with the instructions in this series, you'll save your completed project files in the pixel_bender folder..
In this series of articles, you'll learn how to get started with the Pixel Bender Toolkit and begin making filters to create unique effects. When you download the Pixel Bender Toolkit, you'll get the Pixel Bender kernel language and graph language, the Pixel Bender Toolkit IDE (an integrated development environment for Pixel Bender), sample filters, and the Pixel Bender documentation.
This article shows you how to create your first Pixel Bender filter. You'll also learn how to run the filter on an image and save it to your hard drive.
If you haven't already, be sure to download the Pixel Bender Toolkit. Once the installer mounts, or you've extracted the installer, double-click the Setup icon, accept the Adobe End User License Agreement, and step through the wizard to install it.
Locate the Pixel Bender Toolkit in one of the following locations (depending on your operating system) and double-click the icon to launch the Pixel Bender Toolkit:
Once the toolkit is running, your first task is to load an image. Follow these steps:
Note: The Pixel Bender Toolkit supports loading two different images. This feature makes it possible to test filters that combine multiple images (which we'll explore in an upcoming section of this series). The Pixel Bender language supports filters using up to four images as inputs.
Although this first filter effect is not very exciting, you'll learn the create–run–save workflow to follow when creating more complex filters later on.
Follow these steps:
Note: The default Pixel Bender filter created by the Pixel Bender Toolkit is called the identity filter. This filter processes the loaded image but passes it through unchanged because you haven't added any effects yet.
After clicking the Run button, two things happen:
You are now ready to edit a few lines of code to change the name of the filter. Rather than using the default name (NewFilter), rename it MAXFilter. Also change the strings for the namespace, vendor, and description. In the vendor string, you can enter your own name if desired.
Update the filter to match the following highlighted code:
<languageVersion : 1.0;>

kernel Part1Filter
<
    namespace : "com.adobe.devnet.pixelbender";
    vendor : "Kevin's Filter Factory";
    version : 1;
    description : "Playing around with pixels";
>
{
    input image4 src;
    output pixel4 dst;

    void evaluatePixel()
    {
        dst = sampleNearest(src, outCoord());
    }
}
After updating this code, you are ready to run the filter again. Click the Run button to display the output.
Note: If there is an error, a message will appear in the panel on the right side of the Pixel Bender Toolkit. Otherwise the status will indicate that the kernel compiled successfully.
After familiarizing yourself with the Pixel Bender interface, continue with Part 2 in this series, where you'll create a new filter that affects the color values to create a vintage tone effect.
http://www.adobe.com/devnet/archive/pixelbender/articles/creating_effects_pt01.html
State of Golang linters and the differences between them — SourceLevel
Golang is full of tools that help us develop more secure, reliable, and useful apps. And there is a category I would like to talk about: static analysis through linters.
What is a linter?
A linter is a tool that analyzes source code without the need to compile/run your app or install any dependencies. It performs many checks on the static code (the code that you write) of your app.
It is useful for helping software developers ensure coding styles and identify tech debt, small issues, bugs, and suspicious constructs, helping you and your team throughout the entire development flow.
Linters are available for many languages, but let us take a look at the Golang ecosystem.
First things first: how do linters analyze code?
Most linters analyze the result of two phases:
Lexer
Also known as tokenizing/scanning, this is the phase in which we convert the source code statements into tokens: each keyword, constant, and variable in our code produces a token.
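The lexer phase can be tried directly with Go's standard go/scanner package. The snippet below is a minimal sketch (the lex helper and the sample input are mine, not from the article): it tokenizes a one-line statement and prints each token name.

```go
package main

import (
	"fmt"
	"go/scanner"
	"go/token"
)

// lex returns the token names produced for a snippet of Go source.
func lex(src string) []string {
	var s scanner.Scanner
	fset := token.NewFileSet()
	file := fset.AddFile("", fset.Base(), len(src))
	s.Init(file, []byte(src), nil, 0) // nil error handler, default mode

	var toks []string
	for {
		_, tok, _ := s.Scan()
		if tok == token.EOF {
			break
		}
		toks = append(toks, tok.String())
	}
	return toks
}

func main() {
	// Prints IDENT, (, STRING, ) and the automatically inserted semicolon.
	for _, t := range lex(`println("Hello, SourceLevel!")`) {
		fmt.Println(t)
	}
}
```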
Parser
It takes the tokens produced in the previous phase and tries to determine whether these statements are semantically correct.
Golang packages
In Golang we have the scanner, token, parser, and ast (Abstract Syntax Tree) packages. Let's jump straight to a practical example by checking this simple snippet:
package main

func main() {
	println("Hello, SourceLevel!")
}
Okay, nothing new here. Now we'll use Golang standard library packages to visualize the AST generated by the code above:
package main

import (
	"go/ast"
	"go/parser"
	"go/token"
)

func main() {
	// src is the input for which we want to print the AST.
	src := `our-hello-world-code`

	// Create the AST by parsing src.
	fset := token.NewFileSet() // positions are relative to fset
	f, err := parser.ParseFile(fset, "", src, 0)
	if err != nil {
		panic(err)
	}

	// Print the AST.
	ast.Print(fset, f)
}
Now let's run this code and look at the generated AST:
0 *ast.File {
1 . Package: 2:1
2 . Name: *ast.Ident {
3 . . NamePos: 2:9
4 . . Name: "main"
5 . }
6 . Decls: []ast.Decl (len = 1) {
7 . . 0: *ast.FuncDecl {
8 . . . Name: *ast.Ident {
9 . . . . // Name content
16 . . . }
17 . . . Type: *ast.FuncType {
18 . . . . // Type content
23 . . . }
24 . . . Body: *ast.BlockStmt {
25 . . . . // Body content
47 . . . }
48 . . }
49 . }
50 . Scope: *ast.Scope {
51 . . Objects: map[string]*ast.Object (len = 1) {
52 . . . "main": *(obj @ 11)
53 . . }
54 . }
55 . Unresolved: []*ast.Ident (len = 1) {
56 . . 0: *(obj @ 29)
57 . }
58 }
As you can see, the AST describes the previous block in a struct called ast.File, which is composed of the following structure:
type File struct {
Doc *CommentGroup // associated documentation; or nil
Package token.Pos // position of "package" keyword
Name *Ident // package name
Decls []Decl // top-level declarations; or nil
Scope *Scope // package scope (this file only)
Imports []*ImportSpec // imports in this file
Unresolved []*Ident // unresolved identifiers in this file
Comments []*CommentGroup // list of all comments in the source file
}
To understand more about lexical scanning and how this struct is filled, I would recommend Rob Pike talk.
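As a small sketch of how a few of those fields can be read after parsing (the packageInfo helper and the sample source below are made up for illustration), we can parse a snippet and inspect Name and Imports directly:

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
)

// packageInfo parses src and returns the package name plus every import path.
func packageInfo(src string) (string, []string) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "", src, 0)
	if err != nil {
		panic(err)
	}

	var imports []string
	for _, imp := range f.Imports {
		// imp.Path.Value keeps the surrounding quotes, e.g. "\"fmt\"".
		imports = append(imports, imp.Path.Value)
	}
	return f.Name.Name, imports
}

func main() {
	name, imports := packageInfo("package demo\n\nimport \"fmt\"\n")
	fmt.Println(name, imports) // demo ["fmt"]
}
```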
Using the AST, it is possible to check formatting, code complexity, bug risk, unused variables, and a lot more.
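For example, a one-rule linter can be sketched with ast.Inspect from the standard library. The toy check below reports the line of every fmt.Println call; the rule, the findPrintlnCalls helper, and the sample source are my assumptions, not taken from any real linter:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// findPrintlnCalls parses src and returns the line number of every
// fmt.Println call: the skeleton of a single-rule linter.
func findPrintlnCalls(src string) []int {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		panic(err)
	}

	var lines []int
	ast.Inspect(f, func(n ast.Node) bool {
		call, ok := n.(*ast.CallExpr)
		if !ok {
			return true // not a call, keep walking
		}
		if sel, ok := call.Fun.(*ast.SelectorExpr); ok {
			if pkg, ok := sel.X.(*ast.Ident); ok && pkg.Name == "fmt" && sel.Sel.Name == "Println" {
				lines = append(lines, fset.Position(call.Pos()).Line)
			}
		}
		return true
	})
	return lines
}

func main() {
	src := "package main\n\nimport \"fmt\"\n\nfunc main() {\n\tfmt.Println(\"debug\")\n}\n"
	fmt.Println(findPrintlnCalls(src)) // [6]
}
```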
Code Formatting
To format code in Golang, we can use the gofmt tool, which is already present in the installation, so you can run it to automatically indent and format your code. Note that it uses tabs for indentation and blanks for alignment.
Here is a simple snippet from Go by Examples unformatted:
package main
import "fmt"
func intSeq() func() int {
i := 0
return func() int {
i++
return i
}
}
func main() {
nextInt := intSeq()
fmt.Println(nextInt())
fmt.Println(nextInt())
fmt.Println(nextInt())
newInts := intSeq()
fmt.Println(newInts())
}
Then it will be formatted this way:
package main

import "fmt"

func intSeq() func() int {
	i := 0
	return func() int {
		i++
		return i
	}
}

func main() {
	nextInt := intSeq()

	fmt.Println(nextInt())
	fmt.Println(nextInt())
	fmt.Println(nextInt())

	newInts := intSeq()
	fmt.Println(newInts())
}
So we can observe that import earned an extra line break, but the empty line after the main function declaration is still there. So we can assume that we shouldn't transfer the responsibility of keeping our code readable to gofmt: consider it a helper for achieving readable and maintainable code.
It's highly recommended to run gofmt before you commit your changes; you can even configure a pre-commit hook for that. If you want to overwrite the files instead of printing the changes, you should use gofmt -w.
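The same formatting gofmt applies can also be invoked programmatically through the standard go/format package, which is handy inside code generators or build tooling. A minimal sketch (the pretty helper and the badly indented sample input are assumptions for illustration):

```go
package main

import (
	"fmt"
	"go/format"
)

// pretty runs the same formatting gofmt applies, in process.
func pretty(src string) string {
	out, err := format.Source([]byte(src))
	if err != nil {
		panic(err) // src must be a syntactically valid Go file
	}
	return string(out)
}

func main() {
	// Badly indented input, as it might arrive from a code generator.
	fmt.Print(pretty("package main\nfunc  main( ) {println( 1 )\n}"))
}
```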
Simplify option
gofmt has a -s (Simplify) option; when run with this option, it considers the {...}

Note that for this example, if you think that variable is important for other collaborators, instead of just dropping it with _ I would recommend using _meaningfulName instead.
A range of the form:
for _ = range v {...}
will be simplified to:
for range v {...}
Note that it could be incompatible with earlier versions of Go.
Check unused imports
On some occasions, we can find ourselves trying different packages during implementation and then giving up on using them. By using the goimports package, we can identify which packages are imported but unreferenced in our code, and also add missing ones:
go install golang.org/x/tools/cmd/goimports@latest
Then use it by running with the -l option to specify a path; in our case, we're doing a recursive search in the project:

$ goimports -l ./...
../my-project/vendor/github.com/robfig/cron/doc.go
So it identified that cron/doc is unreferenced in our code and it's safe to remove it from our code.
Code Complexity
Linters can also be used to identify how complex your implementation is, using several methodologies. For example, let's start by exploring ABC metrics.
ABC Metrics
It's common nowadays to refer to how large a codebase is by the LoC (Lines of Code) it contains. To have an alternate metric to LoC, Jerry Fitzpatrick proposed a concept called the ABC metric, which is compounded by the following:
- (A) Assignment counts
- (B) Branch counts: when a function is called
- (C) Condition counts: booleans or logic tests (else and case)
Caution: this metric should not be used as a "score" to decrease; consider it just an indicator of your codebase or of the current file being analyzed.
To get this indicator in Golang, you can use the abcgo package:
$ go get -u github.com/droptheplot/abcgo
$ (cd $GOPATH/src/github.com/droptheplot/abcgo && go install)
Given the following Golang snippet:
package main

import (
	"fmt"
	"os"

	"my_app/persistence"
	service "my_app/services"

	flag "github.com/ogier/pflag"
)

// flags
var (
	filepath string
)

func main() {
	flag.Parse()

	if flag.NFlag() == 0 {
		printUsage()
	}

	persistence.Prepare()
	service.Compare(filepath)
}

func init() {
	flag.StringVarP(&filepath, "filepath", "f", "", "Load CSV to lookup for data")
}

func printUsage() {
	fmt.Printf("Usage: %s [options]\n", os.Args[0])
	fmt.Println("Options:")
	flag.PrintDefaults()
	os.Exit(1)
}
Then let's analyze this example using abcgo:
$ abcgo -path main.go
Source Func Score A B C
/tmp/main.go:18 main 5 0 5 1
/tmp/main.go:29 init 1 0 1 0
/tmp/main.go:33 printUsage 4 0 4 0
As you can see, it prints the score for each function found in the file. This metric can help new collaborators identify files for which a pair programming session would be required during the onboarding period.
Cyclomatic Complexity
Cyclomatic complexity, on the other hand, despite the complex name, has a simple explanation: it calculates how many independent paths your code has. It is useful for indicating that you may need to break your implementation into separate abstractions, and it can give you code smells and insights.
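To make the counting concrete, here is a hand-annotated sketch (the grade function and its thresholds are invented for this example): gocyclo-style counting starts at 1 for the function itself and adds 1 for each if, case, &&, ||, and loop.

```go
package main

import "fmt"

// grade has a cyclomatic complexity of 4:
// 1 for the function itself, plus one for each "if" and one for the "||".
func grade(score int) string {
	if score < 0 || score > 100 { // +1 (if) +1 (||)
		return "invalid"
	}
	if score >= 60 { // +1 (if)
		return "pass"
	}
	return "fail"
}

func main() {
	fmt.Println(grade(75), grade(42), grade(-1)) // pass fail invalid
}
```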
To analyze our Golang code, let's use the gocyclo package:
$ go install github.com/fzipp/gocyclo/cmd/gocyclo@latest
Then let’s check the same piece of code that we’ve analyzed in the ABC Metrics section:
$ gocyclo main.go
2 main main main.go:18:1
1 main printUsage main.go:33:1
1 main init main.go:29:1
It also breaks the output down by function name, so we can see that the main function has 2 paths, since we're using an if conditional there.
Style and Patterns Checking
To verify code style and patterns in your codebase, Golang already came with golint, a linter that offered no customization but performed the checks recommended by the Golang development team. It was archived in mid-2021, and Staticcheck is now recommended as a replacement.
Golint vs Staticcheck vs revive
Before Staticcheck was recommended, we had revive, which for me sounds more like a community alternative linter.
As revive states, this is how it differs from the archived golint:
- Allows us to enable or disable rules using a configuration file.
- Allows us to configure the linting rules with a TOML file.
- 2x faster running the same rules as golint.
- Provides functionality for disabling a specific rule or the entire linter for a file or a range of lines.
- golint allows this only for generated files.
- Optional type checking. Most rules in golint do not require type checking. If you disable them in the config file, revive will run over 6x faster than golint.
- Provides multiple formatters which let us customize the output.
- Allows us to customize the return code for the entire linter or based on the failure of only some rules.
- Everyone can extend it easily with custom rules or formatters.
- Revive provides more rules compared to golint.
Testing revive linter
I think the extra point goes to revive for the ability to create custom rules or formatters. Wanna try it?
$ go install github.com/mgechev/revive@latest
Then you can run it with the following command:
$ revive -exclude vendor/... -formatter friendly ./...
I often exclude my vendor directory, since my dependencies are there. If you want to customize the checks to be used, you can supply a configuration file:
# Ignores files with "GENERATED" header, similar to golint
ignoreGeneratedHeader = true

# Sets the default severity to "warning"
severity = "warning"

# Sets the default failure confidence. The semantics behind this property
# is that revive ignores all failures with a confidence level below 0.8.
confidence = 0.8
revive:
$ revive -exclude vendor/... -config revive.toml -formatter friendly ./...
What else?
As I’ve shown, you can use linters for many possibilities, you can also focus on:
- Performance
- Unused code
- Reports
- Outdated packages
- Code without tests (no coverage)
- Magic number detector
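As a taste of how such a check could be built, here is a naive magic-number detector sketched on top of go/ast (the magicNumbers helper, the 0/1 allowlist, and the sample source are my assumptions; real detectors are considerably more careful about context):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// magicNumbers returns every integer literal other than 0 and 1 found in src,
// which is the core of a naive magic-number detector.
func magicNumbers(src string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		panic(err)
	}

	var found []string
	ast.Inspect(f, func(n ast.Node) bool {
		lit, ok := n.(*ast.BasicLit)
		if ok && lit.Kind == token.INT && lit.Value != "0" && lit.Value != "1" {
			found = append(found, lit.Value)
		}
		return true
	})
	return found
}

func main() {
	src := "package main\n\nfunc timeout() int { return 30 * 60 }\n"
	fmt.Println(magicNumbers(src)) // [30 60]
}
```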
Feel free to try new linters that I didn’t mention here, I’d recommend the archived repository awesome-go-linters.
Where to start?
To start, consider running gofmt before each commit (or whenever you remember to), then try revive. Which linters are you using?
Originally published at on January 18, 2022.
https://medium.com/sourcelevel/state-of-golang-linters-and-the-differences-between-them-sourcelevel-3ae2c2072171
Say you've got a JAMStack app with authentication. Works great, loads fast, some pages need login.
That part's easy with something like useAuth, a manual integration with Auth0, or any number of 3rd party providers.
You have some code that asks "Is this user logged in?". Show them the page or some "Please Login" interface. There's a button somewhere that starts the login flow.
Something like this for example:
Now what if you want to add an area that only some users have access to? How would you do that?
🤔
Roles are the answer
User roles are the simplest approach to granular permissions. Everything else I've tried gets out of hand super fast.
Is this user a student or not? Access to course.
Does this user have module X? Access.
You can go as detailed as you want. Admin vs. not-admin is often the first and only role-based permission. Some apps eventually need more.
The more roles you add, the more complex it all gets. But trust me, roles are the only approach that scales at all.
A friend of mine lived through a horror story where it took an entire team and 3 years to build a robust permission system for a large app. 3 years 😳
Roles are way easier.
With useAuth 0.7.0
Hot off the presses, useAuth 0.7.0 adds a helper to check for user roles. Still just for Auth0, soon for others I promise.
Here's what you do:
And that's pretty much it, really.
The
isAuthorized method verifies your user is currently logged-in and that they have the
Student role.
You'll need to add a rule to your Auth0 config as well. They don't send this info by default. Don't know why, I tried everything. 😔
And you have to make sure that namespace matches a config in the
<AuthProvider> that wraps your whole component tree. That's how
useAuth hooks into the React context and keeps track of everything.
No, I don't know why the namespace needs to be a full URL. The Auth0 documentation isn't clear on that part and I was unable to hunt down details on the forums.
How do you get roles onto users in the first place?
Ah yes, adding roles to users. That part is a little tricky.
Here's an article I wrote on Connecting Gumroad to Auth0 for paywalled JAMStack apps ❤️
The TL;DR is that your checkout provider triggers your cloud function through a webhook and that adds a role to your user. You can also do it manually.
I use Gumroad, Stripe works too. I use AWS Lambda, a Netlify or Vercel cloud function should be fine.
Full details in the Connecting Gumroad to Auth0 for paywalled JAMStack apps article.
What if there's many roles to check?
This is where life gets tricky. The more roles you have, the trickier. 😅
Somewhere in your code there's going to be a component like this.
Like I said, messy. And there's nothing you can do about it.
Might look a little better, if you bake it into your router and use individual checks on individual pages. But the complexity remains. Somewhere something has to check this stuff. 🤷♀️
Using different components for different content pages makes it easier – check authorization in the component itself. But I'm using some MDX shenanigans and the
<Content> component doesn't know what it's showing.
So a nice big truth table is what I gotta do.
What about without useAuth?
Same principle my friend. You get the role for your user and you ask "Does this role have access to this page?"
✌️
Cheers,
~Swizec
PS: the coding-on-an-iPad workflow I suggested on Friday totally worked
https://swizec.com/blog/add-granular-rolebased-access-to-your-jamstack-app/
Before delving into the generic collection types, it's useful to get an idea of the legacy collection types out there in the System.Collections namespace. As you learned in the "Life Without Generics" section in Chapter 15, the only way you could create maximally applicable types in the past was to use System.Object, the mother type of all types, somewhere. The nongeneric collection types do so for their storage and hence bubble up System.Object to the type's surface on methods like Add, Remove, and so on. The main reason to learn about those types is for survival purposes (when, for example, facing code that was written before the introduction of generics). For fresh code, generic collection types are the right choice. ...
https://www.oreilly.com/library/view/c-50-unleashed/9780133391480/ch16lev1sec1.html
Activity
I agree, and wanted to mention we shouldn't limit ourselves based on packaging.
for example we can have analyzers-def and analyzers-impl, but actually shove the analyzers-def into the lucene-core jar for simplicity/packaging purposes if we want.
but this way you could still use the analyzers without the lucene core if you wanted.
Analysis could even be released independently. I've got a start to a patch that I hope to put up today as a POC..
doesn't fully compile yet (but core does) due to our recursive build system, but at least fleshes out the proposed directory layout. I may, however, change src/declarations to src/common and then we would have lucene-common.jar. I was surprised by how much I needed to move out of core (e.g. BytesRef)
Looks like it makes sense that we would have to pull out these classes to do it now... but here are a few thoughts maybe for discussion... this stuff certainly should not block this issue, its hard refactorings and a lot of work, but just ideas for the future.
As far as analyzers:
- does the lucene-core/common jar need to have all the tokenAttributes? Maybe it should only have the ones that the indexer etc actually consume, and things like TypeAttribute, FlagsAttribute, KeywordAttribute, Token, etc should simply be moved to the analysis module?
- does the lucene-core/common jar need to have Tokenizer/TokenFilter/CharFilter/CharReader/etc. Seems like it really only needs TokenStream and those could also be moved to the analysis module.
- currently I think its bad that the analyzers depend upon so many of lucene's util package (some internal)... long term we want to get rid of the cumbersome backwards compatibility methods like Version and ideally have a very minimal interface between core and analysis so that you could safely just use your old analyzers jar file, etc... maybe we should see how hard it is to remove some of these util dependencies?
So in a way, this issue is related to LUCENE-2309.
Architects remove dependencies
For external use, this locksteps the external user (Mahout for example) to changes in these data structures. It's a direct coupling. This is how you get conflicting dependencies, what the Linux people call "RPM Hell".
If you can make a minimal class for export, then have Lucene use a larger class, that might work. Here is a semi-coupled design:
public class ITerm
- A really minimal API that will never be changed, only added onto.
- Code that uses this API will always work- that is the contract.
- clone() is banned (via UnsupportedOperationException).
- If a class implements clone(), all subclasses must also implement it.
- I would also ban equals & hashCode- if you want these, make your own subclass that delegates to a real Term subclass.
public class Term extends ITerm
- This is what Lucene uses.
- It can be versioned.
- If you code to this, you lock your binaries to Lucene release jars.
Here is a fully-decoupled design:
- Separate suite of main Lucene objects, with minimal features as above.
- Separate Lucene library that xlates/wraps/etc. between this parallel suite and the Lucene versions. Lucene exports this jar and works very hard to avoid version changes.
It's a hard problem all around, and different solutions have failed in their own ways. Error-handling is a particularly big problem. Using these objects in parallel brings its own funkiness.
I'm resurrecting this. I'm now thinking that we just put some build targets into Lucene that make it easy to build this, instead of rearranging the packaging.
Hmm, this gets wonky with some of the dependencies. Ideally, I'd like to keep this isolated to just the analysis package in core and util (ideally not even that, but do need things like ArrayUtil, BytesRef, etc.), however not sure that can be done w/o some refactoring. For instance, Analyzer has a dependency on IndexableField, all so that it can check to see whether it is tokenized or not. Could it just take in a boolean for getOffsetGap indicating whether it is tokenized or not? It also has a dependency on AlreadyClosedException, which, I suppose could be moved to Util.
There are also a number of imports for Javadocs, which are probably useful but a bit odd in terms of packaging for this particular thing.
Grant: I agree.
I guess it would be good to figure out exactly what the desired goals are:
- is the goal to just use analyzers.jar without having lucene core.jar for aesthetic reasons (no lucene jar file)
- is instead the goal to be able to depend on an analyzers.jar without causing classpath hell for users that might want to use a different version of lucene in their app, when all you need is the analyzers?
If it's just the second, maybe we just need some fancy jar-jar packing. We would have to target a different package name or something like that so there are no conflicts; then again, this might be something the end user could just do themselves without us doing anything (Maven has plugins for this type of thing, etc.?). Then they could deal with what the renamed packages should be, etc.
My goal is #1 (I have the same goal for the FST package). I want to be able to use analyzers independently of Lucene and I don't want to have to bring in whatever dependencies other parts of Lucene might have (which is admittedly small at the moment). Doing this also achieves #2, I suppose. I've almost got a patch ready that just makes this build sugar, but I wonder if it is better to separate out the code if #1 is the goal.
I don't think it is, from my perspective. Really, this isn't a common use case of Lucene, and it will make things awkward and confusing (harder to use) to have lots of jar files.
Here's a first draft at this. The packaging looks more or less right, but I haven't fully tested it yet. The main downsides to this approach are:
- Minor loss of Javadoc due to references to things like IndexWriter, DoubleField, etc. I kept the references, just removed the @link, which allowed me to drop the import statement
- We need to somehow document that this jar is for standalone use only. It's probably a minor issue, but going forward, people could get into classloader hell with this if they are mixing versions. Of course, that's always the case in Java, so caveat emptor.
I should add: to run this, for now, do ant jar-analyzer-definition. Still need to make sure it fully hooks into the rest of the build correctly, too.
Updated version for trunk. Updated packaging to now include all of Util, as it was getting ridiculous trying to get the exact list. I tested this setup by taking the jar produced here, plus the common analyzers jar and put them in a standalone project and tested them and it seemed to work.
Thus, I think this is mostly done and ready to commit. I'd say the only issue left is to say how we want to document this so that people aren't confused. My suggestion would be to collocate a file name README-analyzers-def.txt alongside the jar that explains it. Otherwise, we could just put it in the README.
I tested this setup by taking the jar produced here, plus the common analyzers jar and put them in a standalone project and tested them and it seemed to work.
Personally, I don't feel comfortable with that as a testing strategy. There is nothing to prevent someone from breaking this jar in the future (e.g. if I import something from o.a.l.index into an analyzer for some reason).
If we cannot test this, can we just make it an optional target (e.g. not part of package)? Generally this is a pretty expert thing to do (it must be: it has no javadocs, etc.), so I think it's fair that the people who need this could just run the ant target from the source release.
For example, I just searched for org.apache.lucene.index in the analyzers-common source and there is code using IndexReader, TermsEnum, etc.
For example, I just searched for org.apache.lucene.index in the analyzers-common source and there is code using IndexReader, TermsEnum, etc.
Ugh. I wonder if that just makes all of this a moot point. I'll take a look. I was thinking about how to more reliably test it yesterday, but didn't implement it. I guess ideally we could exercise all the analyzers independently on some content, or just run the analyzers test suite somehow.
For:
- IndexReader – It's mostly just in tests, except the QueryStopWordAnalyzer
- TermsEnum – Same thing, a test and the QueryStopWordAnalyzer
- Synonym package has dependency on DataOutput and ByteArray* from store (which can be added to the base packaging)
So, basically, the issue would be with the QueryStopWordAnalyzer (the tests aren't an issue)
Is it intended to support jars from different Lucene versions? Would a "unit test" for this project include old versions of jars retained as binaries?
Why didn't the compiler catch these things?
How about using a tool to collect all the really needed dependencies from core and package them as lucene-core4analysis-min.jar? JARJAR can do this (as an ANT task, without renaming classes, just collecting the dependent ones from core). We would then also not need to remove the Javadocs (NRQ, ...) that Grant's patch removed.
Why didn't the compiler catch these things?
Not sure I follow. There really isn't compilation involved at this point and they are runtime dependencies that fail.
@Uwe: it's possible, but I suspect the IndexReader dep. is going to bring in a lot, which seems a little silly given it is all just used in the QueryStopWordAnalyzer, which could easily be collapsed into just using the StopFilter and some example code for people. I'm not that familiar w/ JARJAR, but if you want to try it and we can compare.
@Robert, @Uwe, any more thoughts on this one? I hate to see this derailed by one single little used Analyzer that has a workaround solution anyway. I'm going to try to get more tests in place this weekend, or at least soon.
For the long term, I like Uwe's idea better, I think, rather than restricting which javadocs in core can link to what and restricting which files the analyzers can use.
Separately we should fix that Analyzer
Bulk move 4.4 issues to 4.5 and 5.0
Move issue to Lucene 4.9.
+1
https://issues.apache.org/jira/browse/LUCENE-3151
Visual Studio .NET® and facilitates the creation of mixed-language solutions.
In addition, these languages leverage the functionality of the .NET Framework,
which provides access to key technologies that simplify the development of ASP.NET Web
applications and XML Web services. The Microsoft® .NET Framework transforms application
development with a fully managed, protected, and feature-rich application execution
environment; simplified development and deployment; and seamless integration with a
wide variety of languages.
The following sections discuss and demonstrate to what extent the .NET Framework and Visual Studio .NET support Arabic, including information about the features and limitations of Arabic support.
As you install Visual Studio .NET, you will have the opportunity to choose among several different installation and setup options. Visual Studio .NET setup provides two specialized installation modes, Administrator and Remote components. The following section explains the most popular and simple installation scenario and shows how Visual Studio .NET installation supports Arabic. The system and user locales must be set to an Arabic locale to be able to enter and display text correctly.
To install Visual Studio .NET:
Select step 1 to update system components. If a component update is not required, this option is not available.
Select step 2 to start installing Visual Studio .NET. The Start Page appears with the End-User License Agreement and prompts you to supply your name. You can type a name in Arabic.
The Options page appears, allowing you to select the languages and components that you want to install and to specify the installation path. You can enter Arabic text for folder names.
Press Install Now! to run the installation process to completion.
Note: If you are running an anti-virus program while setup runs, warnings may be displayed because setup runs scripts that access the file system object. It is safe to allow the setup to continue.
The following topics explain the main elements of the Visual Studio .NET integrated development environment (IDE) and how they support Arabic.
The IDE enables developers to create new projects with different languages and templates. A project template creates the initial files, references, code framework, property settings,
and tasks appropriate for the selected project.
Visual Studio .NET is based on Unicode, so you can use Arabic for your project name, including any Arabic characters such as Kashida and diacritics. In addition, the project path can contain Arabic folder names, as illustrated in Figure 1.
Figure 1
Solution Explorer provides you with an organized view of your projects and their files, as well as ready access to the commands that pertain to them. Solution Explorer can display the names of the solution, projects, and files in Arabic, as shown in Figure 2.
The code and text editor allows you to enter, display, and edit code or text.
It is referred to as either the text editor or the code editor, based on its content.
The Visual Studio .NET editor supports entering, displaying, and editing Arabic text, but it does not support right-to-left (RTL) reading order, because the editor was designed for English. As a result, manipulating Arabic or mixed (English and Arabic) text does not work smoothly. However, you are free to use Arabic text in your code and comments in these ways:
The Visual Studio .NET compilers, mainly VB .NET and C# .NET, accept Arabic as well as English in your code without generating errors. Figure 3 illustrates the use of Arabic in code.
Figure 3
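As a small illustration of the kind of code shown in Figure 3 (the class, method, and string below are hypothetical, not the article's original sample), C# identifiers may use Arabic letters because the language treats identifiers as Unicode:

```csharp
using System;

// Hypothetical example: an Arabic class name, method name, and comment.
public static class مثال   // "example"
{
    // ترجع هذه الدالة تحية باللغة العربية (this method returns an Arabic greeting)
    public static string تحية() => "مرحبا";
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(مثال.تحية());
    }
}
```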
One of the most important enhancements in the Visual Studio IDE for international support is the ability to save and display your code in a particular language and on a particular platform and to associate a particular character encoding with a file.
Opening Files: You can choose the editor you want to use to edit the file. The list of editors available when you are opening the file depends upon the type of file you are attempting to open or create, as shown in Figures 4 and 5.
Figure 4
Figure 5
Saving Files: You can save your code with a Unicode encoding or a different code page to support various languages, such as Arabic. You can associate a particular character encoding with a file to facilitate the display of your code in that language, as well as a line-ending type to support a particular operating system. Note that some characters cannot be used in file names unless they are saved with Unicode encoding. See Figure 6.
Figure 6
Use the Properties window to view and change the design-time properties and
events of selected objects in the designers. You can also use the Properties window to edit
and view file, project, and solution properties. The Properties window is available from the
View menu. You can enter Arabic text in the Properties window as a property value, for example the Text property of a Button control. To enter text in RTL reading order in the Properties window, press Ctrl + right Shift (this may vary according to your Text Services and Input Language key-sequence setting in Control Panel). Figure 7 shows how to enter Arabic text with RTL reading order in the Properties window.
Figure 7
Server Explorer is the server management console for Visual Studio .NET.
You can use Server Explorer to view and manipulate data links, database connections,
and system resources on any server to which you have network access.
The following figure demonstrates the capability of Server Explorer to explore a
database whose elements are named in Arabic.
The Clipboard Ring tab of the Toolbox stores the last twelve items added
to the system Clipboard using the Cut or Copy commands. These items can be dragged from the
Clipboard Ring tab and dropped onto the active editing or design surface.
The Clipboard Ring supports Arabic as well as English, as shown in Figure 8.
Figure 8
The .NET Framework has two main components: the common language runtime and the .NET Framework class library.
Common Language Runtime: The common language runtime is the foundation of the .NET Framework. It loads and runs code written in any language that targets the runtime. Code that targets the runtime is known as managed code, while code that does not is known as unmanaged code. This paper's main emphasis is managed code.
The common language runtime provides core services such as memory management, thread management, and remoting, while also enforcing strict safety and accuracy of the code. In addition, the runtime provides code access security that allows developers to specify the permissions required to run code. In fact, the concept of code management is a fundamental principle of the runtime.
.NET Framework Class Library: The .NET Framework class library is a
collection of classes that can be called from any .NET enabled programming language.
The class library is object oriented, providing types from which your own managed
code can derive functionality. This not only makes the .NET Framework types easy to use,
but also reduces the time associated with learning new code. In addition, third-party
components can integrate seamlessly with the classes in the .NET Framework.
You can use the .NET Framework to develop the following types of applications and services:
Console applications are typically designed without a graphical user interface and compile into a stand-alone executable file. A console application is run from the command line, with input and output exchanged between the command prompt and the running application.
Console applications do not support Arabic text for input and output, due to a limitation in the operating system's console support.
Windows applications created with the .NET Framework offer you many benefits. You can access operating system services and take advantage of other features of your user's computing environment. You can access data using ADO.NET, and GDI+ allows you to do advanced drawing and painting within your forms. Your Windows applications can also call methods exposed through XML Web services.
The following topics explain how Windows Forms applications support Arabic.
Right To Left: Windows Forms elements are controls because they inherit from the Control class. This class has a key property for Arabic support: the RightToLeft property, which specifies whether text is displayed from right to left, such as when using Arabic fonts. The form itself supports the RightToLeft property, as do all of the controls. The RightToLeft property takes one of the following values: Yes, No, or Inherit (the default, which takes the setting from the parent control).
The effect of this property can differ slightly from one control to another, as described later in the section Windows Forms Controls. In the Form object, when you set the RightToLeft property to Yes, text displayed in the form's title bar is right-aligned with RTL reading order. However, the icon and control box retain their left and right alignment respectively, as shown in Figure 9.
Figure 9
What about design time? The Windows Forms Designer reflects the effects of the RightToLeft property on controls. For example, when you set the RightToLeft property of the form to Yes, the designer automatically updates the display of the form, so that the form's caption appears right-aligned in the title bar, just as it does at run time.
Mirroring: You might ask, "What's new about that? Visual Basic 6.0 forms support the RightToLeft property. This is not full Arabic support; we need Windows Forms to support mirroring!" The answer is: "Yes, but…" Windows Forms do not support mirroring directly, as they do the RightToLeft property. But Windows Forms do give developers the ability to customize the appearance and functionality of controls through inheritance. For details, see the next section, Windows Forms Inheritance.
On some occasions, you may want to create a basic form with settings and properties, such as a watermark or a certain control layout, that you will then reuse within a project. Each new form may contain modifications to the original form template. The solution is to use Windows Forms inheritance, a strong feature of the .NET Framework class library. For example, using inheritance you can build your own mirrored form by inheriting from the System.Windows.Forms.Form class and changing its window style. For more details, see Developing Arabic Windows Forms Controls later in this paper.
The MessageBox class represents a message box: a dialog that can contain text, buttons, and symbols that inform and instruct the user. The MessageBox class fully supports Arabic features, including RTL reading order and mirroring, as shown in Figure 10.
The Show method of the MessageBox class displays the message box. It takes constants defined in the MessageBoxOptions enumeration as a parameter. These include RightAlign and RtlReading, both of which enable the message box to display Arabic. The following code illustrates how to display an Arabic message box:
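The original listing did not survive extraction. A minimal sketch of such a call, assuming the standard six-argument MessageBox.Show overload (the message and caption strings are illustrative):

```csharp
using System.Windows.Forms;

public static class ArabicMessage   // hypothetical helper class
{
    public static void Show()
    {
        // RtlReading gives the text right-to-left reading order;
        // RightAlign right-aligns the text within the dialog.
        MessageBox.Show(
            "هل تريد حفظ التغييرات؟",          // message text in Arabic
            "تأكيد",                            // caption in Arabic
            MessageBoxButtons.YesNo,
            MessageBoxIcon.Question,
            MessageBoxDefaultButton.Button1,
            MessageBoxOptions.RtlReading | MessageBoxOptions.RightAlign);
    }
}
```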
Note: You cannot translate (localize) the captions of the message box buttons, such as OK or Cancel. These depend on the Windows system locale and will therefore appear as "OK" on English Windows and as "موافق" on Arabic localized Windows, irrespective of your application.
The .NET Framework uses GDI+, an improved implementation of the Windows graphics device interface (GDI). Although you cannot use GDI+ directly on Web Forms, you can display graphical images through the Image Web server control. GDI+ is now the only way to use graphics in the .NET Framework.
Graphics Class: The System.Drawing namespace provides access to GDI+ basic graphics functionality. The Graphics class represents a GDI+ drawing surface, and is the object used to create graphical images. Before you can draw lines and shapes, render text, or display and manipulate images with GDI+, you need to create a Graphics object. There are several ways to do this. The most common is to receive a reference to a Graphics object as part of the PaintEventArgs in the Paint event of a form or control. This is usually how you obtain a reference to a Graphics object when writing painting code for a control; for more details, see GDI+ Graphics in MSDN. This technique is used in the code sample below.
How Does GDI+ Support Arabic?
GDI+ supports Arabic text manipulation, including drawing text with RTL reading order on both output devices, screen and printer. The Graphics.DrawString method draws the specified text string at a designated x, y location or within a rectangle (depending on the overload), with the specified Brush and Font objects, using the formatting attributes of the specified StringFormat object. The StringFormat object includes text layout information such as the text reading order.
Therefore, you can easily move the origin of the Graphics object to the top-right corner instead of the top-left, to draw Arabic text at the designated location on the screen smoothly, without having to calculate locations explicitly. The following steps explain how to do this:
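The steps themselves were lost in extraction. One way to achieve the effect described above (a sketch under the assumption that the Paint event is handled on a form; the font and string are illustrative, not the author's exact code):

```csharp
using System.Drawing;
using System.Windows.Forms;

public class ArabicForm : Form   // hypothetical form
{
    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);

        // DirectionRightToLeft gives the string RTL reading order.
        // Alignment is relative (near/far), so "near" now means the right edge.
        using (var format = new StringFormat(StringFormatFlags.DirectionRightToLeft))
        using (var font = new Font("Tahoma", 12f))
        {
            // Drawing into the client rectangle lets GDI+ place the text
            // starting from the right edge, without explicit coordinate math.
            e.Graphics.DrawString("نص عربي", font, Brushes.Black,
                                  this.ClientRectangle, format);
        }
    }
}
```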
You can use the same technique to print out text on the printer. The only things that will differ are the events you need to handle, for example, the PrintPage event.
Windows Forms controls are reusable components that encapsulate
user interface functionality and are used in client-side Windows applications.
Windows Forms provide many ready-to-use controls, in addition to the main
infrastructure for developing your own controls.
RightToLeft: The Control class includes the RightToLeft property, which specifies whether text appears from right to left. All Windows Forms controls that inherit from the Control class support the RightToLeft property, as illustrated in Figure 11. When you set the RightToLeft property, the TextAlign property behaves differently: if you enforce right alignment, the text is displayed to the left, and vice versa, because the TextAlign property is based on near and far coordinates rather than absolute left or right.
As discussed earlier, this property may have a different effect from one control to another. Table 1 below lists the controls and how the RightToLeft property affects them. When the RightToLeft property value is set to RightToLeft.Yes, the horizontal alignment of the control's elements is reversed.
Windows Forms component hierarchy
Figure 11
Mirroring: As you know, Arabic controls are laid out from right to left. Visual Studio .NET supports Arabic controls through the RightToLeft property. However, the RightToLeft property is not sufficient for some controls to be fully Arabic-aware. For example, suppose you have a TreeView control and want to use it in an Arabic application. When you set the RightToLeft property to Yes, only the reading order of the TreeView control is set to RTL, while the tree and its nodes stay left-aligned. The solution is to mirror the TreeView control, so that the tree and its nodes are right-aligned.
Windows Forms controls do not have a property named Mirrored. Instead, you create your own controls to be displayed as you need. For example, you can develop your own RTLTreeView control, which inherits from the TreeView control and changes its style to be mirrored. For more details, see Developing Arabic Windows Forms Controls later in this paper.
The following table explains how the RightToLeft property affects the Windows Forms controls and which of them need to be mirrored.
Table 1: How the RightToLeft property affects the Windows Forms controls, and which of them need mirroring
In this section, we explain how to create mirrored Windows Forms controls that support Arabic. In general, the following are common scenarios for developing Windows Forms controls:
In this scenario, derive from the base class System.Windows.Forms.Control.
You can add as well as override properties, methods, and events of the base class.
Using the second scenario, you can create a new class that inherits from the control that you want to mirror. While the control is being created, you can mirror it by setting its style using some Win32 window styles that are not exposed by the Windows Forms namespace. You override the CreateParams property of the Control class, which gives you the ability to set the desired styles while the control is being created.
The following two examples explain how to develop mirrored Windows Form and TreeView controls. The examples use two window styles:
Mirrored TreeView control
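The listing itself did not survive extraction. A sketch of the technique described, using the RTLTreeView name the text suggests (the Mirrored property and its behavior are an illustrative reconstruction, not the author's exact code):

```csharp
using System.Windows.Forms;

public class RTLTreeView : TreeView
{
    // Right-to-left layout style, as defined in winuser.h.
    private const int WS_EX_LAYOUTRTL = 0x00400000;

    private bool mirrored = true;

    // When true, the control is created with a mirrored (RTL) layout.
    public bool Mirrored
    {
        get { return mirrored; }
        set
        {
            if (mirrored != value)
            {
                mirrored = value;
                RecreateHandle();   // re-create the control so the new style takes effect
            }
        }
    }

    protected override CreateParams CreateParams
    {
        get
        {
            // Add the mirroring style while the control is being created.
            CreateParams cp = base.CreateParams;
            if (mirrored)
                cp.ExStyle |= WS_EX_LAYOUTRTL;
            return cp;
        }
    }
}
```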
Mirrored Windows Form: The following example differs slightly from the previous one in that the style also includes WS_EX_NOINHERITLAYOUT. By adding the "no inherit" layout flag, you ensure that the controls on the form will not get mirrored.
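The example itself is missing from the extracted text. A sketch of the described approach (the class name is hypothetical; the two flag values come from winuser.h, as the note below states):

```csharp
using System.Windows.Forms;

public class MirroredForm : Form   // hypothetical form name
{
    private const int WS_EX_LAYOUTRTL       = 0x00400000; // right-to-left layout
    private const int WS_EX_NOINHERITLAYOUT = 0x00100000; // children do not inherit the layout

    protected override CreateParams CreateParams
    {
        get
        {
            // Mirror the form itself, but keep its child controls un-mirrored.
            CreateParams cp = base.CreateParams;
            cp.ExStyle |= WS_EX_LAYOUTRTL | WS_EX_NOINHERITLAYOUT;
            return cp;
        }
    }
}
```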
Note: These window styles are defined in the header file winuser.h as:
#define WS_EX_LAYOUTRTL 0x00400000L // right-to-left layout
#define WS_EX_NOINHERITLAYOUT 0x00100000L // do not inherit the layout
However, this solution is not applicable to all controls that may require mirroring. Some classes are marked as not inheritable, so we cannot extend those controls. For example, the ProgressBar and ImageList controls are not inheritable and therefore cannot be customized to be mirrored using this technique.
The common dialog boxes are part of the operating system. Thus, you cannot change their user interface language, because it depends on the language of the operating system. For example, the interface of the Open dialog is RTL when displayed on Arabic-localized Windows, while it is LTR when displayed on English Windows.
Metadata is stored in Unicode fashion, which makes it possible to use Arabic. Metadata stores the following information:
Description of the assembly.
You can use Arabic in the metadata to describe your assemblies, types, and attributes, and they will be retrieved successfully. The following example uses Arabic text with the Description attribute to describe Class1, and then retrieves this description in the constructor.
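The original listing is missing from the extracted text. A sketch of the idea (the Arabic description string is illustrative), applying a Description attribute with Arabic text and retrieving it in the constructor via reflection:

```csharp
using System;
using System.ComponentModel;

[Description("فئة تجريبية لعرض البيانات الوصفية")]   // Arabic description stored in metadata
public class Class1
{
    public Class1()
    {
        // Retrieve the Description attribute from the type's metadata.
        var attr = (DescriptionAttribute)Attribute.GetCustomAttribute(
            typeof(Class1), typeof(DescriptionAttribute));
        Console.WriteLine(attr.Description);   // prints the Arabic description
    }
}
```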
The first example in the Developing Windows Forms Controls section showed how you can add a property called Mirrored. Using metadata, you can also add information to describe the Mirrored property:
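The snippet itself did not survive extraction. A sketch of what such an annotation might look like (the class name and the Arabic description text are hypothetical):

```csharp
using System.ComponentModel;
using System.Windows.Forms;

public class MirroredTreeView : TreeView   // hypothetical control
{
    private bool mirrored = true;

    // The Description attribute text appears in the Properties window
    // when the Mirrored property is selected.
    [Description("يحدد ما إذا كان عنصر التحكم معكوسا من اليمين إلى اليسار")]
    [Category("Appearance")]
    public bool Mirrored
    {
        get { return mirrored; }
        set { mirrored = value; }
    }
}
```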
Now, when you select the Mirrored property in the Properties window, the Arabic description appears as in the following figure.
In Web Forms pages, the user interface programming is divided into two distinct pieces: the visual component and the logic. The visual element is referred to as the Web Forms page. The logic resides in a code-behind file with an ".aspx.vb" or ".aspx.cs" extension, and can be written in Visual Basic or Visual C#.
In Visual Studio .NET, you can create ASP.NET applications using either Visual Basic .NET or Visual C# .NET. When you create a new ASP.NET Web application, you are free to use Arabic in the project name. You can also use Arabic to name the Web pages that you add to your Web project, as shown in the following figure. Beware, however: these Arabic Web page names will not display correctly on operating systems that do not support Arabic, both while viewing and while hosting the page. It is therefore not recommended to use Arabic page names.
The Visual Studio .NET HTML designer provides editors and tools
that assist you to create and change HTML documents. The two editing views are:
When you design Arabic Web Forms pages, the best way to make text flow from right to left is to use the DIR (direction) attribute. It is usually placed in the <HTML> tag or the <BODY> tag, and the controls and HTML elements on the page then inherit the specified direction. The Dir attribute can be used in the following ways:
You can set the Dir attribute to rtl visually in Design view, using the Properties window to set the Dir property of the DOCUMENT object; all the controls on the form then inherit that setting. However, the Dir attribute can also be applied individually to other tags, such as tables and Web Forms controls, to make them appear right to left, as in the following example:
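The example referenced here did not survive extraction; a typical fragment (the contents are illustrative) sets Dir directly on a table and on a Web server control:

```html
<!-- A table whose cells flow right to left -->
<table dir="rtl">
  <tr>
    <td>الاسم</td>
    <td>العنوان</td>
  </tr>
</table>

<!-- A Web Forms TextBox rendered with RTL reading order -->
<asp:TextBox id="TextBox1" dir="rtl" runat="server" />
```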
In Design view of the HTML Designer, there are two methods to
position elements on your HTML document, Flow Layout and Grid Layout. To establish
element positioning preferences, set the pageLayout property to FlowLayout or GridLayout.
You can set this property through either the Document Property Page or in the Properties
window.
Flow Layout: When the pageLayout property is set to FlowLayout, the client Web browser positions elements one after another, in the order in which they occur in your markup. Elements flow from left to right within a line (or from right to left, if the Dir attribute of the Web Form is set to rtl), and from top to bottom within the page.
Grid Layout: When the pageLayout property is set to GridLayout, elements are displayed at specified locations on the page using absolute positioning. An MS_POSITIONING="GridLayout" attribute is added to the opening BODY tag.
To design an Arabic Web Form, it is recommended that you use flow layout rather than grid layout. A Web Form set to grid layout with its Dir attribute set to RTL behaves as follows:
As noted, when you set the Dir property of the Web Form to RTL, the origin of the Web Forms page becomes the top right-hand corner of the page, and the vertical scrollbar appears on the left side of the page, as shown in Figure 12. In Design view, if you position controls on the Web Forms page (without using tables and cells) and design your interface to flow from right to left, and then display the page in the Web browser, the controls might appear in different positions than they do in Design view, as shown in Figure 13. Therefore, the preferred setting is flow layout.
Figure 12: The position of the Label control in Design view
Figure 13: The position of the Label control in the browser
LTR WebForm
After changing the Dir to RTL
In addition to the simple HTML controls, Visual Studio .NET introduces other types of controls that you can use with Web Forms pages. To overcome this problem, you can set the HTML style attribute of the Label control to include right padding, as in the following example:
The following table describes the validation controls and explains how they support Arabic.
Supports Arabic, but doesn't ignore diacritics.
The most important issue in building Arabic-enabled Web services is to send and receive Arabic text correctly, regardless of the configuration of the server or machine that the Web service is running on. Web services support UTF-8 and Unicode, so they support Arabic text and send and receive it correctly.
WIn Visual Studio .NET, there are two parts to creating a world-ready
application: globalization, the process of designing applications that can adapt to
different cultures, and localization, the process of translating resources for a specific
culture. The .NET Framework provides extensive support for developing world-ready
applications. The System.Globalization namespace of the .NET Framework contains classes
that define culture-related information, including the language, the country and region,
the calendars in use, the format patterns for dates, currency, and numbers, and the sort
order for strings. The UI culture setting determines which localized resources are
loaded for the application. The UI culture is set as CurrentUICulture in Visual Basic .NET
or Visual C# .NET code (and UICulture in Web.config files and page directives).
The culture setting determines formatting of values such as dates, numbers, currency,
and so on. The culture is set as CurrentCulture in Visual Basic or Visual C# code
(and Culture in Web.config files and page directives). Classes that automatically format
information according to the culture setting are called culture-specific.
Some culture-specific methods are IFormattable.ToString, Console.WriteLine,
and String.Format. Some culture-specific functions (in Visual Basic .NET) are MonthName and WeekdayName.
The DateTime structure provides methods, such as the DateTime.ToString method and the
DateTime.Parse method, that can format and parse dates according to a specific culture,
for example the "ar-EG" culture. An instance of DateTimeFormatInfo can be created for a specific culture,
or the invariant culture, but not for a neutral culture. See the following example.
Using DateTimeFormatInfo, you can get all information about
a specific culture like date patterns, time patterns, and AM/PM designators.
The following example retrieves the abbreviated day names of the Egyptian culture:
CultureInfo ci = new CultureInfo("ar-EG");
foreach (string day in ci.DateTimeFormat.AbbreviatedDayNames)
{
    Console.WriteLine(day);
}
Note: The previous code will return the correct abbreviated day names. It always orders the days of the week starting with Sunday and ending with Saturday.
Working with DateTime Objects
There are some rules that you must be aware of while working with DateTime members:
A globalized application should be able to display and use calendars
based on a specific culture. The .NET Framework provides the Calendar Class as well
as the following Calendar implementations: GregorianCalendar and HijriCalendar and
other calendars.
The CultureInfo Class has a CultureInfo.Calendar Property that retrieves a culture's
default calendar. Some cultures support more than one calendar.
The CultureInfo.OptionalCalendars Property retrieves the optional calendars
supported by a culture. For example, the Saudi Arabia culture has six calendars:
the Hijri calendar (the default) plus five different Gregorian calendar types,
as specified in the Regional Settings of the Control Panel of the system.
The GregorianCalendar has different language versions of the Gregorian calendar.
Using the GregorianCalendar.CalendarType Property you can get or set the
GregorianCalendarTypes value that denotes the version of the current GregorianCalendar.
The date and time patterns associated with the GregorianCalendar vary depending on the
language. For Arabic cultures, more language versions of the Gregorian calendar are
available. For example, you can use the French version of GregorianCalendar using the
MiddleEastFrench value. For more information see "GregorianCalendarTypes Enumeration"
in MSDN. The following example lists the calendar types of the Saudi Arabia culture:
The output is:
System.Globalization.HijriCalendar
USEnglish
MiddleEastFrench
Arabic
Localized
TransliteratedFrench
Using the Hijri Calendar
You can set a culture to one of its supported calendar types. For example, the Arabic
cultures support the Hijri calendar, so you can set the Arabic culture calendar to the
Hijri calendar. As mentioned before, CultureInfo.Calendar is a read-only property that
gets the current calendar but can't be changed. Instead, you can specify the calendar
through the DateTimeFormat.Calendar property of the CultureInfo, as in the following code:
You can perform any operation with Hijri dates in the same way,
such as displaying Hijri and converting Hijri from/to Gregorian or other calendars.
The following example converts a date from Gregorian to Hijri and displays it:
The output is:
26/11/2001 or 11/26/2001 (according to your current culture settings)
The NumberFormatInfo class defines how currency, decimal separators,
and other numeric symbols are formatted and displayed based on culture.
For example, the currency used in the "en-US" culture is formatted as $123,456,789.00
and 100.000 ج.م. for the "ar-EG" culture. The following code example displays an
integer using the NumberFormatInfo standard currency format ("c") for the specified
CurrentCulture setting.
The output is:
100.000 ج.م.
Note:With Arabic cultures, the currency symbol appears on
the left side of the number. To apply this currency format, you can use the Regional
Settings of the system from the Control Panel, or set CultureInfo.NumberFormat.CurrencyNegativePattern and
CultureInfo.NumberFormat.CurrencyPositivePattern in the code.
Sorting, searching, and comparing operations are culture-aware;
this means that they are affected by the system culture.
For example, if you are comparing two strings, you have to take into
consideration their language. In Arabic, if you are comparing two words,
one containing Kashida and the other without Kashida, the two words
are the same, so the comparison result should be equal.
Sorting
You may need to sort the elements of a collection or list.
For example, you might sort a list of employee names. This sort should be built on
the rules of the language of the employee names.
Some data structures, such as arrays, and some controls, such as the ComboBox, ListBox, and ListView controls, consist of elements that can be sorted. Because the .NET Framework is based on Unicode, it sorts Arabic strings correctly. The following example sorts the Arabic strings of a ComboBox named ComboBox1:
Output:
Actually, searching and comparing are the same operation; a search is a kind of comparison. When you are comparing two words, you can perform either a binary or a text compare. A binary compare (or search) compares without considering the local national language or culture, while a text compare considers the local national language or culture. For example, if you are comparing the two English words "pen" and "Pen," which differ in their case (upper and lower), the result of the binary comparison will not be equal, while the text comparison returns equal. In the Arabic culture, the text comparison should take into consideration the capability of ignoring some special characters, such as Kashida, Diacritics, and Alef/Alef-Hamza.
The .NET Framework supports both comparisons through its classes and methods. The CompareInfo class provides a set of methods you can use to perform culture-sensitive string comparisons. The CultureInfo class has a CultureInfo.CompareInfo property that is an instance of this class. This property defines how to compare and sort strings for a specific culture. The String.Compare method uses the information in the CultureInfo.CompareInfo property to compare strings.
For example, consider the String and RichTextBox classes. You can use the String class to make comparison between two strings. The String.CompareOrdinal method makes a binary comparison, as in the following:
String.CompareOrdinal("خالد", "خالــــد")
The function will not return 0, which means that the two words are not equal.
The String.Compare method makes a text comparison, where the method takes the culture as a parameter, as in this example:
String.Compare("خالــــد", "خالد", True, New System.Globalization.CultureInfo("ar-EG"))
The function will return 0, which means that the two words are equal.
Note: The String.Compare method doesn't ignore Diacritics or Alef-Hamza; it ignores only Kashida.
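The Kashida behavior above is easy to see outside .NET as well. Kashida is the Unicode Tatweel character (U+0640), inserted only to elongate the glyphs of an Arabic word; a Kashida-insensitive comparison behaves as if that character were stripped before an ordinal compare. A minimal Python sketch of that idea (an illustration only, not how .NET's CompareInfo is implemented):

```python
KASHIDA = "\u0640"  # Arabic Tatweel, used purely to elongate words

def strip_kashida(s):
    # Remove every Tatweel before comparing, mimicking a
    # Kashida-insensitive (text) comparison.
    return s.replace(KASHIDA, "")

w1 = "خالد"
w2 = "خال\u0640\u0640\u0640\u0640د"  # same name written with Kashida

assert w1 != w2                                 # ordinal (binary) compare: not equal
assert strip_kashida(w1) == strip_kashida(w2)   # Kashida-insensitive: equal
```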
The RichTextBox control can be used as a text editor. One of the most useful functions of text editors is to search for a specific string.
The Find method of the RichTextBox class searches for text within the
contents of the specified RichTextBox control. With this function,
you can indicate how a text search is carried out in a RichTextBox control,
by specifying the RichTextBoxFinds options as in the following example:
With Arabic:
The .NET Framework follows the same rules as the operating system
regarding digit substitution. The digit substitution of the operating system depends
on the user locale and how you have set the digit substitution of the regional settings
in the control panel.
The .NET Framework gives you the ability to control the digit substitution
programmatically. You can use StringFormat.SetDigitSubstitution method,
which specifies the method to be used for digit substitution. This function takes two
parameters, the language of the text and an element of the StringDigitSubstitute
enumeration that specifies how digits are displayed.
The following code uses the Arabic-Egypt culture to set the digit substitution to Hindi (Arabic) digits by using the StringDigitSubstitute.National element, where the user locale of the system is assumed to be Arabic.
The output is:
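The character mapping behind digit substitution is simple: the Arabic-Indic ("Hindi") digits occupy the Unicode range U+0660 through U+0669, parallel to ASCII '0' through '9'. A small Python sketch of that mapping (an illustration only, not the .NET StringFormat API):

```python
# Map each ASCII digit to the Arabic-Indic digit at U+0660 + offset.
ARABIC_INDIC = {ord("0") + i: chr(0x0660 + i) for i in range(10)}

def substitute_digits(text):
    return text.translate(ARABIC_INDIC)

assert substitute_digits("2024") == "\u0662\u0660\u0662\u0664"
```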
Globalization is the process of designing and developing a Web application
that functions for multiple cultures and languages, while localization is the process of
customizing your Web application for a given culture or locale.
(Localization consists primarily of translating the user interface.) The following sections
explain how to select the encoding and how to configure the Web application to
support globalization.
Encoding support
Internally, the code behind Web Forms pages handles all
string data as Unicode. You can set the ResponseEncoding
attribute to set the encoding that the server uses to send data to the client,
for example UTF-8. The Web Forms page also sets the CharSet attribute on the Content-Type
of the HTTP header according to the value of ResponseEncoding. This enables browsers to
determine the encoding without a meta tag or having to guess the correct encoding from the
content. The RequestEncoding attribute indicates the encoding the server will use to
interpret data entered on the client and sent to the server.
The FileEncoding attribute specifies the encoding that is used to interpret the data
included in the aspx file. If the file encoding is specified in the Web.config file,
the actual file must be saved in the same encoding. This may happen if you are
developing a Web Forms page that includes Arabic text, for example, while the system
locale of your machine is English. In that case, you have to save the Web Form with
encoding and select the appropriate Arabic code page. To select an encoding different
from the system default, use the Advanced Save Options dialog box (available on the File
menu). For details, see "Managing Files with Encoding".
To specify encoding and specific culture
You can use the Page directive to specify any attribute except for
the fileEncoding attribute, because it applies to the file itself.
To change the fileEncoding attribute you can set it in the globalization section
of the Web.config file.
Web Configuration Settings
Web Forms configuration files are established in an XML file named Web.config. It
provides settings for every Web Forms page residing in the same directory as the
configuration file. The settings are usually also inherited by subdirectories. Each file
can contain a globalization section in which you can specify default encodings and cultures.
The following code sets the globalization section in the Web.config file:
The attributes of the globalization section in the Web.config file
can also be specified in the @ Page directive (with the exception of fileEncoding,
which can only be specified in Web.config because it applies to the file itself).
Settings in the Page directive are only valid for a specific page and override the
settings of the Web.config file.
Localization Support
Properties of a locale are accessible through the CultureInfo class. Additionally,
ASP.NET tracks two properties of a default culture per thread and request:
CurrentCulture for the default of locale-dependent functions and CurrentUICulture for
locale-specific lookup of resource data.
The following code sets the culture to Arabic-Saudi and displays the culture values
on the Web server.
The result is as follows:
(العربية (المملكة العربية السعودية
(العربية (المملكة العربية السعودية
For locale-dependent data like date/time formats or currency, ASP.NET
leverages the support of the .NET Framework class library in the common language runtime.
Code on ASP.NET pages can use locale-dependent formatting routines like DateTime.Format.
For example, the following code displays the current date in a long format.
The first line according to the system locale, the second one according to the Egypt ("EG") locale:
The result is as follows:
Monday, December 03, 2001 10:53 PM
الثلاثاء, ديسمبر 04, 2001
Resources in Applications
Nearly every production-quality application needs to use resources. A resource is
any non-executable data that is logically deployed with an application. Storing your
data in a resource file allows you to change application data without recompiling your
entire application. The .NET Framework provides comprehensive support for the creation and
localization of resources. In addition, the .NET Framework supports a simple model for
packaging and deploying these localized resources, and for retrieving them at run time based on the locale for the current user on the local computer.
Crystal Reports
For example, you can create a Web application that enables users to drill down in a chart and filter its information according to their needs. The chart is actually a Crystal report interacting with other controls in the application.
When you create a report, you specify the data source of the report, design the report layout, and decide how you want users to access the report data. This section provides an overview of these reporting fundamentals.
In Visual Studio .NET, you can create a new Crystal report, or add an existing Crystal report to a project. You can keep the report on a local machine, or publish it as a Web Service on a Web server. Depending on whether you are developing a Windows or Web application, you first bind the report with either the Windows Forms Viewer or the Web Forms Viewer; then you build the application. Users can run the Windows application on a Windows platform, or deploy the Web application on a client browser to view your Crystal report.
The Web Forms Viewer and the Windows Forms Viewer each provides a set of properties, methods, and events. You may initialize either viewer's properties at design time or set them at runtime.
The following sections focus on the report designing and viewing with respect to Arabic data.
Use the Crystal Report Designer to define the report's source of data, to select and group the data records you want to use, and to format the report's objects and layout. The Crystal Report Designer enables you to design and modify reports inside the Visual Studio .NET IDE. The Designer can be directly programmed from within Visual Studio .NET. You do not need to distribute the Report Designer with your report. The Report Designer is divided into labeled report sections. You can place report objects, such as database, formula, parameter, and running total fields, in the section you would like them to appear, as in Figure 14.
To access the Crystal Report Designer, you have to add a new report object to your project (.rpt file), and then the Crystal Report Designer launches automatically.
Designing an Arabic Report:
To design an Arabic report, you have to let the interface flow from right to left.
The Crystal Report Designer supports Arabic via the Horizontal Alignment and Reading
Order properties. So, there are two main steps to designing an Arabic report:
There are different ways to view Crystal Reports, depending on whether
you are working with a Windows application or a Web application.
Windows Application
If you are developing a Windows application, you can host a report on a Windows Form with the Crystal Reports Windows Forms Viewer, which is available as a control in the Visual Studio Toolbox. As well as providing the convenience of report viewing in a Windows application, the Windows Forms Viewer can interact with other controls in the same application and can dynamically update the report it is hosting. The Windows Forms Viewer contains properties that allow you to customize and control the look, feel, and behavior of your report. For more information see "Reports in Windows Applications" in MSDN. Before you can display a report in the Windows Forms Viewer, you must bind a report object to the viewer. You can do this by assigning the ReportSource property of the Windows Forms Viewer.
The Crystal Report Viewer class supports Arabic through its RightToLeft property. By setting the RightToLeft property of the report, or of the Windows Form that hosts the report, the report will run in an Arabic context, but the Report Viewer still flows from left to right, as in the following figure.
Obviously, the CrystalReportViewer control of the Windows Forms has some
shortcomings with respect to the right-to-left Windows Forms and Arabic support. These are:
Web Forms
If you are developing a Web application in Visual Basic or C#, you can host a Crystal
Report on a Web Form with the Crystal Reports Web Forms Viewer, which is available
as a control in the Visual Studio Toolbox. In order to display a report in your application,
you must add a Web Forms Viewer to your Web Form. The Web Forms Viewer is labeled
CrystalReportViewer in the page in Design view. You may initialize properties of the Web
Forms Viewer at design time. You can also set or change the Web Forms Viewer's properties
in response to user-driven events at run time.
Before you can display a report in the Web Forms Viewer, you must bind the report
object to the viewer. You can do this by assigning the DataBindings or the
ReportSource property of the Web Forms Viewer.
As with other Web controls, to have the CrystalReportViewer control flow from right to left
you can set the Dir attribute of the control, HTML or BODY tags to rtl.
The CrystalReportViewer control will appear as in the following figure:
As you can see, by setting the Dir attribute with rtl, the Crystal
Report Viewer flows from right to left, but it still doesn't match the Arabic
interface completely. You can see these problems:
Hi,
def CheckValueExists(gefülltesFeldBis1): import arcpy if gefülltesFeldBis1 != '': return "true" else: return "false"
the expression is:
CheckValueExists("%gefülltesFeldBis1%")
I get the error message
ERROR 000989
Python syntax error: Parsing error SyntaxError: invalid syntax (line 1)
Can anyone please explain what I do wrong?
Many thanks,
Tegir
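As rendered here, the function body is collapsed onto one line (possibly a copy-paste artifact of the forum), and Python does not allow an if/else suite to follow a `def ...:` on the same line; each statement needs its own indented line. (Additionally, ArcGIS 10.x ships Python 2, where a non-ASCII identifier such as gefülltesFeldBis1 is itself a SyntaxError; in Python 3 such identifiers are legal.) A minimal sketch reproducing the one-line failure, using a hypothetical stand-in function name:

```python
# compile() only parses the source, so the "import arcpy" inside the
# string is never actually executed.
src = "def CheckValueExists(x): import arcpy if x != '': return 'true'"
try:
    compile(src, "<calculate-value>", "exec")
    raised = False
except SyntaxError as exc:
    raised = True
    message = exc.msg  # e.g. "invalid syntax"

assert raised
```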
This seems to work.
R_
changing "ü" into "ue" plus right intendation solved the problem. Great, again something learned! Now it looks like this and works without giving me error messages.
Only problem is that I don't get the right result. The variable "gefülltesFeldBis1" refers to a field of type float in a table. Sometimes it is empty (no '0', just empty), so I expect the result to be 'false' but I still get 'true'. I also tried out the following options, but none of them gives me the correct result.
Any ideas on how to solve this problem?
Maybe setting the value to 'None' will pick it up? (untested)
But when going through it all again I noticed that my variable 'gefuelltesFeldBis1' actually doesn't refer to a field but to the table itself. The field is called 'Bis'.
So somehow I have to access that particular field first. Unfortunately it's not as easy as something like this:
I have to play a bit with that new problem, still I appreciate every help :)
When testing for Nulls (None) with python, don't put the quotes around the None or it is actually comparing if it is the text string "None".
R_
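The quoting pitfall can be shown in a few lines of plain Python:

```python
value = None

assert value != "None"       # the *string* "None" is not the None object
assert value is None         # identity test is the right way to check for NULL
assert str(value) == "None"  # but str() of None does produce the text "None"
```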
If it is a table and you are looking for a value in a field, you will not only have to get access to the table, but to the particular row that you are testing for a value.
You haven't mentioned what your workflow is, but most likely you will want to use a searchCursor for this.
Something like that should be what you are after.
R_
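The SearchCursor idea can be sketched without ArcGIS by standing in for the cursor rows with plain dictionaries (the field names 'Neu' and 'Bis' come from the thread; the arcpy calls themselves are omitted, since only the row-testing logic is shown):

```python
# Stand-in rows; with arcpy these would come from a SearchCursor over the table.
rows = [
    {"Neu": 3, "Bis": None},  # empty field -> no usable value
    {"Neu": 4, "Bis": 2.5},
]

def has_value(row, field):
    # Treat NULL, empty string and 0 as "no value".
    return row[field] not in (None, "", 0)

assert has_value(rows[1], "Bis")
assert not has_value(rows[0], "Bis")
```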
You have wrapped your variable in quotes, that means that everything passed to your function is converted to a string. That is fine, it just means that you only need to test strings (i.e. 'NULL' not None...), see below.
You do not need to import Arcpy for pieces of Python in calculate value or field calculators.
ArcGIS doesn't care so much (it will convert a returned value of 0, "false" or False to boolean False, and returned values of 1, "true" and True to boolean True), but I personally prefer to use proper True/False values rather than text, see below.
You can also use a list to check a bunch of values all at once (empty string, string with just a space, 0 and NULL):
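The code sample that originally followed this sentence was lost from the page; a plausible sketch of such a multi-value check (my reconstruction, not the original poster's code):

```python
EMPTYISH = ["", " ", 0, None]  # empty string, space-only string, 0 and NULL

def check_value_exists(v):
    return v not in EMPTYISH

assert check_value_exists(2.5)
assert not check_value_exists("")
assert not check_value_exists(None)
```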
There is still some strange behaviour, depending on what data type gefuelltesFeldBis1 is...
Now, I am not sure about your workflow. You say you are trying to test if a field contains a value? That is not how calculate value works...
I try to explain my problem better:
I have a single point with 8 different water height attributes (from low water to flooding) associated. From these 8 values I want to create a remap table for the reclassification of a raster. I have a big model that includes this submodel for making the remap table.
It all works fine as long as all values are there and are different to each other. But sometimes one value is missing (empty field) or 2 (neighboring) values are the same. Therefore I have to check that each time.
Originally my submodel looks like this:[ATTACH=CONFIG]26141[/ATTACH]
Sorry, it's in german but basically it copies the one attribute line of the point shapefile 8 times, adds to each new table 3 fields "NewClass", "From" and "To" ("Neu", "Von" und "Bis" in german). Then the correct water heights are copied into the respective fields and the unneccessary ones are deleted. In the end all 8 tables are merged together into a single table, which looks like this: [ATTACH=CONFIG]26142[/ATTACH]
The last 3 columns are the important ones with the "NewClass", "From" and "To" values.
Now I thought about doing an if-then logic inside of model builder but maybe this gets too complicated and I need to use a script.
For example I have in class 3 no "To" value, therefore also no "From" value in class 4. Then I would like to delete the line of class 3 and reset the "NewClass" value of "4" to "34".
I hope that's clearer now, for sure writing it already helped me to get it clearer in my head :)
I'll let you know as soon as I found a workaround. Help is very much appreciated!
Attachments | https://community.esri.com/thread/77082-syntax-problem-if-then | CC-MAIN-2018-22 | refinedweb | 801 | 69.62 |
When you are first starting out learning how to program, one of the first things you will want to learn is what an error message means. In Python, error messages are usually called tracebacks. Here are some common traceback errors:
- SyntaxError
- ImportError or ModuleNotFoundError
- AttributeError
- NameError
When you get an error, it is usually recommended that you trace through it backwards (i.e. traceback). So start at the bottom of the traceback and read it backwards.
Let’s take a look at a few simple examples of tracebacks in Python.
Syntax Error
A very common error (or exception) is the SyntaxError. A syntax error happens when the programmer makes a mistake when writing the code out. They might forget to close an open parentheses, or use a mix of quotes around a string on accident, for instance. Let’s take a look at an example I ran in IDLE:
>>> print('This is a test)
SyntaxError: EOL while scanning string literal
Here we attempt to print out a string and we receive a SyntaxError. It tells us that the error has something to do with it not finding the End of Line (EOL). In this case, we didn’t finish the string by ending the string with a single quote.
Let’s look at another example that will raise a SyntaxError:
def func
    return 1
When you run this code from the command line, you will receive the following message:
File "syn.py", line 1
    def func
           ^
SyntaxError: invalid syntax
Here the SyntaxError says that we used “invalid syntax”. Then Python helpfully uses an arrow (^) to point out exactly where we messed up the syntax. Finally we learn that the line of code we need to look at is on “line 1”. Using all of these facts, we can quickly see that we forgot to add a pair of parentheses followed by a colon to the end of our function definition.
Import Errors
Another common error that I see even with experienced developers is the ImportError. You will see this error whenever Python cannot find the module that you are trying to import. Here is an example:
>>> import some
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named some
Here we learn that Python could not find the “some” module. Note that in Python 3, you might get a ModuleNotFoundError error instead of ImportError. ModuleNotFoundError is just a subclass of ImportError and means virtually the same thing. Regardless which exception you end up seeing, the reason you see this error is because Python couldn’t find the module or package. What this means in practice is that the module is either incorrectly installed or not installed at all. Most of the time, you just need to figure out what package that module is a part of and install it using pip or conda.
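One common pattern built on this error is to try an optional third-party module and fall back to the standard library when it is missing, for example:

```python
try:
    import simplejson as json  # optional third-party module (may be absent)
except ImportError:
    import json                # fall back to the standard library

# Either way, a working json module is now bound to the same name.
assert callable(json.loads)
```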
AttributeError
The AttributeError is really easy to accidentally hit, especially if you don’t have code completion in your IDE. You will get this error when you try to call an attribute that does not exist:
>>> my_string = 'Python'
>>> my_string.up()
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    my_string.up()
AttributeError: 'str' object has no attribute 'up'
Here I tried to use a non-existent string method called “up” when I should have called “upper”. Basically the solution to this problem is to read the manual or check the data type and make sure you are calling the correct attributes on the object at hand.
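One way to check before calling is hasattr/getattr, which let you probe an object for an attribute without triggering the exception (a small illustrative sketch):

```python
my_string = "Python"

assert not hasattr(my_string, "up")  # calling it would raise AttributeError
assert hasattr(my_string, "upper")
assert my_string.upper() == "PYTHON"

# getattr with a default avoids the exception entirely:
method = getattr(my_string, "up", None)
assert method is None
```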
NameError
The NameError occurs when the local or global name is not found. If you are new to programming that explanation seems vague. What does it mean? Well in this case it means that you are trying to interact with a variable or object that hasn’t been defined. Let’s pretend that you open up a Python interpreter and type the following:
>>> print(var)
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    print(var)
NameError: name 'var' is not defined
Here you find out that ‘var’ is not defined. This is easy to fix in that all we need to do is set “var” to something. Let’s take a look:
>>> var = 'Python'
>>> print(var)
Python
See how easy that was?
Wrapping Up
There are lots of errors that you will see in Python and knowing how to diagnose the cause of those errors is really useful when it comes to debugging. Soon it will become second nature to you and you will be able to just glance at the traceback and know exactly what happened. There are many other built-in exceptions in Python that are documented on their website and I encourage you to become familiar with them so you know what they mean. Most of the time, it should be really obvious though.
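The traceback module listed under Related Reading lets you capture the same traceback text programmatically, which is handy for logging instead of letting an error kill the program. A small sketch:

```python
import traceback

try:
    print(var)  # 'var' is never defined, so this raises NameError
except NameError:
    tb_text = traceback.format_exc()

# tb_text now holds the same text the interpreter would have printed.
assert "NameError" in tb_text
assert "var" in tb_text
```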
Related Reading
- Python documentation on Errors and Exceptions
- Built-in Exceptions in Python
- The traceback module
- Handling Exceptions in Python
Up to [DragonFly] / src / sys / vfs / isofs / cd9660.
Simplify vn_lock(), VOP_LOCK(), and VOP_UNLOCK() by removing the thread_t argument. These calls now always use the current thread as the lockholder. Passing a thread_t to these functions has always been questionable at best.
Remove the VREF() macro and uses of it. Remove uses of 0x20 before ^I inside vnode.h
Style(9) cleanup to src/sys/vfs, stage 7/21: isofs. - Convert K&R-style function definitions to ANSI style. Submitted-by: Andre Nathan <andre@digirati.com.br> Additional-reformatting-by: cpressey
Per-CPU VFS Namecache Effectiveness Statistics:
* Convert nchstats into a CPU-indexed array.
* Export the per-CPU nchstats as a sysctl vfs.cache.nchstats and let user-land aggregate them.
* Add a function called kvm_nch_cpuagg() to libkvm; it is shared by systat(1) and vmstat(1) and the ncache-stats test program. As the function name suggests, it aggregates the per-CPU nchstats.
* Move struct nchstats into a separate header to avoid header file namespace pollution; sys/nchstats.h.
* Keep a cached copy of the globaldata pointer in the VFS-specific LOOKUP op, and use that to increment the namecache effectiveness counters (nchstats).
* Modify systat(1) and vmstat(1) to accommodate the new behavior of accessing nchstats. Remove a (now) redundant sysctl to get the cpu count (hw.ncpu); instead we just divide the total length of the nchstats array returned by sysctl by sizeof(struct nchstats) to get the CPU count.
* Garbage-collect unused variables and fix nearby warnings in systat(1) and vmstat(1).
* Add a very cool test program that prints the nchstats per-CPU statistics to show CPU distribution. Here is the output it generates on a 2-processor SMP machine:
gray# ncache-stats
VFS Name Cache Effectiveness Statistics
4207370 total name lookups
COUNTER     CPU-1     CPU-2     TOTAL
goodhits    2477657   1060677   (3538334)
neghits     107531    47294     (154825)
badhits     28968     7720      (36688)
falsehits   0         0         (0)
misses      339671    137852    (477523)
longnames   0         0         (0)
passes 2    13104     6813      (19917)
2-passes    25134     15257     (40391)
The SMP machine used for testing this commit was proudly presented by David Rhodus <drhodus@dragonflybsd.org>. Reviewed-by: Matthew Dillon <dillon@backplane..23.2.2
Latest TDD Tool in the NFoo series: NCover
Date Published: 22 February 2004
Everybody’s talking about it, it seems (Jonathan Cogley, Jeff Key, hey, that’s everybody, right?).
NCover (GDN, SF) is a new tool that analyzes source code and unit tests to provide information about test coverage — that is, how much of your code is actually being tested by your tests. One nice feature it supports already (the SF version is 0.7 — I haven’t tried the GDN version which I just now noticed is v1.2.2) is HTML formatted reports, showing a breakdown of test coverage by namespace, with line numbers for where additional test coverage is needed. Definitely a cool tool — I’ll write more once I’ve played with the GDN version.
Update:
Ok, I’ve installed the GDN NCover and found that it is completely different from the SourceForge NCover. The GDN version is a command line tool that uses .NET profiling to do its thing, and simply dumps out the results in XML. It also requires some environment variables to work, it seems, and uses COM for some reason so it won’t be xcopy-deployable. The SF version is basically just a NAnt task, no command line or GUI, but it requires that code to be analyzed be modified at the source level and re-compiled with the instrumentation, which is a bit of a disadvantage. It produces both XML data and HTML reports, though, which is quite nice. And it’s easier to integrate with NAnt since that is how it was designed.
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
import numpy as np
import tables as tb
We create a new HDF5 file.
f = tb.open_file('myfile.h5', 'w')
We will create a HDF5 table with two columns: the name of a city (a string with 64 characters at most), and its population (a 32 bit integer).
dtype = np.dtype([('city', 'S64'), ('population', 'i4')])
Now, we create the table in '/table1'.
table = f.create_table('/', 'table1', dtype)
Let's add a few rows.
table.append([('Brussels', 1138854), ('London', 8308369), ('Paris', 2243833)])
After adding rows, we need to flush the table to commit the changes on disk.
table.flush()
Data can be retrieved from the table in many different ways with PyTables. The easiest, but least efficient, way is to load the entire table into memory, which returns a NumPy array.
table[:]
It is also possible to load a particular column (and all rows).
table.col('city')
When dealing with a large number of rows, we can make a SQL-like query in the table to load all rows that satisfy particular conditions.
[row['city'] for row in table.where('population>2e6')]
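To make the semantics of that query concrete, here is a pure-Python rendering of the same selection over the three rows inserted above (no PyTables needed; plain tuples stand in for table rows):

```python
# The rows appended to the table earlier.
rows = [("Brussels", 1138854), ("London", 8308369), ("Paris", 2243833)]

# Equivalent of: [row['city'] for row in table.where('population>2e6')]
big_cities = [city for city, population in rows if population > 2e6]
# big_cities == ['London', 'Paris']
```

The real `where()` call evaluates the condition in-kernel, without materializing every row in Python, which is what makes it suitable for large tables.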
Finally, we can access particular rows knowing their indices.
table[1]
Clean-up.
f.close()
import os
os.remove('myfile.h5')
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
I've heard that pointers can be a pain in the butt to implement and now I know.. Basically, I'm supposed to input a last, first and middle name. Then compute the size needed for a dynamic array to hold the full name. After that, dynamically allocate the array, check DMA success and do all the copying. Once the copying and concatenating are done, pass it to my function and display the results. To me, passing to a function to display is silly for such a small program, but that's what my instructor wants.. lol.. I wrote the code to do all the above minus passing to a function, instead just using cout to display, but when I tried to implement it in a function I got thrown a curve ball. I keep running into an error..
ERROR: cannot convert parameter 1 from 'char *' to 'char'
Here's the source code.
HEADER CODE:
#ifndef Lab5_h
#define Lab5_h

void DisplayName (char pFullName);

#endif
CPP CODE:
#include <iostream>
using std::cin;
using std::cout;
using std::endl;
#include <cstring>
using std::strcpy;
using std::strlen;
using std::strcat;
#include "Lab5.h"

int main ()
{
    char FirstName[20];
    char MiddleName[20];
    char LastName[20];
    char *pFullName;
    int Choice = 0;
    int Sum = 2;

    while (Choice != 9)
    {
        cout << "Please input your last name.. (i.e. Doe) ";
        cin >> LastName;
        cout << "Now, input your first name.. (i.e. John) ";
        cin >> FirstName;
        cout << "Finally, input your middle name.. (i.e. Edward) ";
        cin >> MiddleName;

        Sum = strlen (LastName) + strlen (FirstName) + strlen (MiddleName);
        cout << "Sum is " << Sum << endl;

        pFullName = new char[Sum];
        if (pFullName == 0)
        {
            cout << "Memory allocation failed -- exiting!\n";
            exit (1);
        }

        strcpy (pFullName, FirstName);
        strcat (pFullName, " ");
        strcat (pFullName, MiddleName);
        strcat (pFullName, " ");
        strcat (pFullName, LastName);

        DisplayName(pFullName);
        delete [] pFullName;

        cout << "Input 9 to QUIT..\n";
        cout << "Or 1 to CONTINUE with a new name.. ";
        cin >> Choice;
    }
}

void DisplayName (char pFullName)
{
    cout << pFullName << endl;
}
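For what it's worth, the compile error comes from Lab5.h declaring the parameter as a single `char` instead of a `char *`. A minimal corrected sketch follows; `BuildFullName` is a hypothetical helper name used here for illustration, and note the `+ 3` in the allocation, which reserves room for the two spaces and the terminating null that the original `Sum` calculation leaves out:

```cpp
#include <cstring>
#include <iostream>

// Corrected declaration for Lab5.h: the parameter is a pointer, not a single char.
void DisplayName(const char *pFullName)
{
    std::cout << pFullName << std::endl;
}

// Hypothetical helper mirroring the question's copy/concat logic.
// The "+ 3" makes room for the two spaces and the terminating null.
char *BuildFullName(const char *first, const char *middle, const char *last)
{
    std::size_t sum = std::strlen(first) + std::strlen(middle) + std::strlen(last) + 3;
    char *pFullName = new char[sum];
    std::strcpy(pFullName, first);
    std::strcat(pFullName, " ");
    std::strcat(pFullName, middle);
    std::strcat(pFullName, " ");
    std::strcat(pFullName, last);
    return pFullName; // caller is responsible for delete[]
}
```

With this signature, `DisplayName(pFullName)` passes the pointer through unchanged and the conversion error goes away.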
Here is what I have. I am not sure what is wrong. After playing with several combinations of spaces and quotation marks, I still don't know what is wrong. My error messages tell me that the problem lies within the last line, that the syntax is wrong. Please help if you know :( #TeamGandalf
def tree():
    tree = eval(input("Enter your favorite tree: "))
tree()

def food():
    food = eval(input("Enter your favorite food: "))
food()

print("Do you want to eat magical", food, "in an enchanted", tree, "with me?")
I dont think you need to have the eval() in front of the input.
Try it without it
If you wanna enter a string of text, you will use this input function:
variable = input('words describing what to enter')
Ex: tree = input('Enter your favorite tree: ')
If you wanna enter an integer number, you will use this input function:
variable = int(input('words describing what to enter'))
or
variable = eval(input(‘words describing what to enter‘))
Ex: numberoftrees = int(input('What is the total number of trees in the world: '))
Be careful between those above input functions and determine what type of input value (text or number) you wanna enter.
Hope this helps you avoid this error next time!
eval() evaluates whatever input is given as a Python expression, rather than leaving it as text.
so if the user enters '5', it is converted from just text to an actual number for the code to use. you only need it (or better, int()) if the variable you define (food/tree in this case) is a number.
also, it looks like you are defining tree() and food() as functions, which is unnecessary—you generally want to define a single function (commonly called main()) which contains the lines for defining tree and food, and then prints the text you want.
Do you need the eval before input? Maybe try without and see if it helps. | http://lovelace.augustana.edu/q2a/index.php/4179/how-do-you-use-input-information-into-a-print-statement | CC-MAIN-2022-40 | refinedweb | 314 | 66.98 |
A .NET Runtime for Cobalt Strike's Beacon Object Files
BOF.NET is a small native BOF object combined with the BOF.NET managed runtime that enables the development of Cobalt Strike BOFs directly in .NET. BOF.NET removes the complexity of native compilation along with the headaches of manually importing native API. Testing BOF.NET assemblies is also generally much easier, since the .NET assemblies can be debugged using a traditional managed debugger.
Implementing your first BOF.NET class is simple. Add a reference to the BOF.NET runtime DLL from the BOFNET NuGet package and create a class that inherits from `BeaconObject`. A mandatory constructor with a `BeaconApi` object as the only parameter is needed. This should be passed along to the `BeaconObject` base constructor.

Finally, override the `Go` function. Arguments will be pre-processed for you exactly as they would be for a `Main` function inside a normal .NET assembly.
```csharp
namespace BOFNET.Bofs {
    public class HelloWorld : BeaconObject {
        public HelloWorld(BeaconApi api) : base(api) { }

        public override void Go(string[] args) {
            BeaconConsole.WriteLine($"[+] Welcome to BOF.NET { (args.Length > 0 ? args[0] : "anonymous" )}");
        }
    }
}
```
Once you have compiled your BOF.NET assembly, download the .nupkg from the releases page or nuget.org. Open the package in your favorite zip application and extract the contents of the lib folder. Move the BOFNET.dll from your preferred target framework folder into the same folder as the .cna and BOF obj files. The final structure should look like this:
```
.
+-- BOFNET.dll
+-- bofnet.cna
+-- bofnet_execute.cpp.x86.obj
+-- bofnet_execute.cpp.x64.obj
```
Load the bofnet.cna aggressor script into Cobalt Strike and begin using your BOF.NET class.
Before any BOF.NET class can be used, the BOF.NET runtime needs to be initialised within the beacon instance.
bofnet_init
Once the runtime has loaded, you can proceed to load further .NET assemblies, including other BOF.NET classes. BOF.NET now chunks the loading of assemblies, so large assemblies (1MB+) can also be loaded.
bofnet_load /path/to/bofnet/HelloWorld.dll
You can confirm the library has been loaded using the `bofnet_listassemblies` alias. A complete list of classes that implement `BeaconObject` can be shown by executing the `bofnet_list` alias.
Finally, once you have confirmed your assembly is loaded and the BOF.NET class is available, you can execute it:
bofnet_execute BOFNET.Bofs.HelloWorld @_EthicalChaos_
You can also use the shorthand method of just the class name, but this will only work if there is only one BOF.NET class present with that name.
bofnet_execute HelloWorld @_EthicalChaos_
The BeaconObject class implements functionality to allow custom implementations of screen capture, file downloads (from memory 😊), keylogging and hash dumps. If, for example, the built-in keylogger or screen capture implementation is causing Windows Defender or other AV engines to kill your beacon, you can implement your own. The relevant functions are documented below.
void SendScreenShot(byte[] jpgData, int session, string userName, string title)
- `jpgData`: Raw JPEG image data.
- `session`: User session id the screen capture was taken from.
- `userName`: The user name running under the session.
- `title`: The title of the window, used to name the screen shot.
SendKeystrokes(string keys, int session, string userName, string title)
- `keys`: The sequence of keys captured.
- `session`: User session id the keys were captured from.
- `userName`: The user name running under the session.
- `title`: The title of the window of the application the keys were captured from.
DownloadFile(string fileName, Stream fileData)
- `fileName`: The file name to use for the metadata within beacon.
- `fileData`: A readable stream that will be used for the file content.

Note: `DownloadFile` will lock beacon, and it will remain unresponsive until the download completes!
SendHashes(UserHash[] userHashes)
- `userHashes`: A collection of usernames that have been captured.
| Command | Description |
|---------|-------------|
| bofnet_init | Initialises the BOF.NET runtime inside the beacon process |
| bofnet_list | Lists all executable BOF.NET classes |
| bofnet_listassemblies | Lists assemblies currently loaded into the BOF.NET runtime |
| bofnet_execute bof_name [args] | Executes a BOF.NET class, supplying optional arguments |
| bofnet_load assembly_path | Loads an additional .NET assembly from memory into the BOF.NET runtime |
| bofnet_shutdown | Shuts down the BOF.NET runtime |
| bofnet_job bof_name [args] | Executes a BOF.NET class as a background job (thread) |
| bofnet_jobs | Lists all currently active BOF.NET jobs |
| bofnet_jobstatus job_id | Dumps any pending console buffer from the background job |
| bofnet_jobkill job_id | Dumps any pending console buffer from the background job, then kills it. Warning: can cause deadlocks when terminating a thread that has transitioned into native code |
| bofnet_boo booscript.boo | Compiles and executes a Boo script in a separate temporary AppDomain |
Which distribution should be used (net35/net40) depends on the target operating system. The runtime will attempt to create a .NET v4 CLR using the `CLRCreateInstance` function that was made available as part of .NET v4. If the function cannot be found, the older mechanism is used to initialise .NET v2. Currently the native component cannot determine which managed runtime to load dynamically, so make sure you use the correct distribution folder. A fully up-to-date Windows 7 will generally have .NET 4 installed, so on most occasions you will need the net461 folder from inside the dist folder. Older operating systems like XP will depend on what is installed.
BOF.NET follows the same restrictions as its native BOF counterpart. Execution of a BOF.NET class internally uses the `inline_execute` functionality; therefore, any BOF.NET invocation will block beacon until it finishes.

BOF.NET does have the added benefit that loaded assemblies remain persistent. This facilitates the use of threads within your BOF.NET class without the worry of the assembly being unloaded after the `Go` function finishes. But you cannot write to the beacon console or use any other beacon BOF APIs, since these are long gone and released by Cobalt Strike after the BOF returns.
If you want to execute your BOF.NET class as a background job using a thread, use the `bofnet_job` command. This wraps the invocation in a separate thread and handles `BeaconConsole` writes transparently for you. Be careful with long-running jobs and lots of console output, since the console buffer will be cached until a call to `bofnet_jobstatus` is invoked.
BOF.NET contains a small native BOF that acts as a bridge into the managed world. When `bofnet_init` is called, this will start the managed CLR runtime within the process that beacon is running from. Once the CLR is started, a separate .NET AppDomain is created to host all assemblies loaded by BOF.NET. Following on from this, the BOF.NET runtime assembly is loaded into the AppDomain from memory to facilitate the remaining features of BOF.NET. No .NET assemblies are loaded from disk.
All future BOF.NET calls from here on out are typically handled by the `InvokeBof` method from the `BOFNET.Runtime` class. This keeps the native BOF code small and concise and pushes all runtime logic into the managed BOF.NET runtime.
BOF.NET uses the CMake build system along with MinGW GCC compiler for generating BOF files and uses the .NET core msbuild project type for building the managed runtime. So prior to building, all these prerequisites need to be satisfied and available on the PATH.
From the root of the checkout directory, issue the following commands:
```
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=MinSizeRel -G "MinGW Makefiles" ..
cmake --build .
cmake --install .
```
On Linux we utilise a CMake toolchain file to cross compile the native BOF object using the mingw compiler. For the managed component, please make sure the dotnet command line tool is also installed from .NET core
```
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=$PWD/install -DCMAKE_BUILD_TYPE=MinSizeRel -DCMAKE_TOOLCHAIN_FILE=../toolchain/Linux-mingw64.cmake ..
cmake --build .
cmake --install .
```
If you'd rather build using a Docker image on Linux with all the build dependencies pre-installed, you can use the `ccob/windows_cross` image.
docker run --rm -it -v $(pwd):/root/bofnet ccob/windows_cross:latest /bin/bash -c "cd /root/bofnet; mkdir build; cd build; cmake -DCMAKE_INSTALL_PREFIX=$PWD/install -DCMAKE_BUILD_TYPE=MinSizeRel -DCMAKE_TOOLCHAIN_FILE=../toolchain/Linux-mingw64.cmake ..; cmake --build .; cmake --install ."
Once the steps are complete, the `build\dist` folder should contain the artifacts of the build, ready to use within Cobalt Strike.
Question 6 :
What will be the output of the program?
public class Switch2
{
    final static short x = 2;
    public static int y = 0;

    public static void main(String [] args)
    {
        for (int z=0; z < 3; z++)
        {
            switch (z)
            {
                case y:   System.out.print("0 "); /* Line 11 */
                case x-1: System.out.print("1 "); /* Line 12 */
                case x:   System.out.print("2 "); /* Line 13 */
            }
        }
    }
}
Case expressions must be constant expressions. Since x is marked final, lines 12 and 13 are legal; however, y is not final, so the compiler will fail at line 11.
Question 7 :
What will be the output of the program?
public class If1
{
    static boolean b;

    public static void main(String [] args)
    {
        short hand = 42;
        if ( hand < 50 && !b )  /* Line 7 */
            hand++;
        if ( hand > 50 );       /* Line 9 */
        else if ( hand > 40 )
        {
            hand += 7;
            hand++;
        }
        else
            --hand;
        System.out.println(hand);
    }
}
In Java, boolean instance variables are initialized to false, so the if test on line 7 is true and hand is incremented. Line 9 is legal syntax, a do-nothing statement. The else-if is true, so hand has 7 added to it and is then incremented.
Question 8 :
What will be the output of the program?
public class Test
{
    public static void main(String [] args)
    {
        int I = 1;
        do
            while ( I < 1 )
                System.out.print("I is " + I);
        while ( I > 1 ) ;
    }
}
There are two different looping constructs in this problem. The first is a do-while loop and the second is a while loop, nested inside the do-while. The body of the do-while is only a single statement, so brackets are not needed. You are assured that the while expression will be evaluated at least once, followed by an evaluation of the do-while expression. Both expressions are false and no output is produced.
Question 9 :
What will be the output of the program?
int x = 1, y = 6;
while (y--)
{
    x++;
}
System.out.println("x = " + x + " y = " + y);
Compilation fails because the while loop demands a boolean argument for its looping condition, but in the code it is given an int argument.
while(true)
{
    //insert code here
}
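For contrast, here is a minimal fragment that does compile, using an explicit boolean condition (the class and method names are ours, not part of the quiz):

```java
public class WhileFix {
    // Same loop as question 9, but with a boolean condition so it compiles.
    static int[] run() {
        int x = 1, y = 6;
        while (y-- > 0) {   // boolean test: true for y = 6, 5, 4, 3, 2, 1
            x++;
        }
        return new int[] { x, y };  // six iterations: x = 7, y = -1
    }

    public static void main(String[] args) {
        int[] r = run();
        System.out.println("x = " + r[0] + " y = " + r[1]);
    }
}
```

Note that `y--` still decrements once more on the failing test, which is why y ends at -1.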
Question 10 :
What will be the output of the program?
int I = 0;
outer:
while (true)
{
    I++;
    inner:
    for (int j = 0; j < 10; j++)
    {
        I += j;
        if (j == 3)
            continue inner;
        break outer;
    }
    continue outer;
}
System.out.println(I);
The program flows as follows: I will be incremented after the while loop is entered, then I will be incremented (by zero) when the for loop is entered. The if statement evaluates to false, and the continue statement is never reached. The break statement tells the JVM to break out of the outer loop, at which point I is printed and the fragment is done. | http://www.indiaparinam.com/java-question-answer-java-flow-control/finding-the-output/page1 | CC-MAIN-2019-18 | refinedweb | 460 | 72.46 |
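The flow described above can be checked by isolating the fragment in a method that returns the printed value (the class and method names are ours, not part of the quiz):

```java
public class LabeledLoops {
    // Reproduces the quiz fragment: the break carries the label "outer",
    // so both loops exit on the very first inner iteration.
    static int run() {
        int i = 0;
        outer:
        while (true) {
            i++;                       // i becomes 1
            for (int j = 0; j < 10; j++) {
                i += j;                // adds 0 on the first pass
                if (j == 3) continue;  // never taken before the break
                break outer;           // leaves both loops immediately
            }
        }
        return i;
    }

    public static void main(String[] args) {
        System.out.println(run());     // prints 1
    }
}
```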
In the original Linq CTP and the first Orcas Beta, we included a DataSet specific Linq operator called CopyToDataTable<T> (It was called ToDataTable at one point also). For Beta 2 of Orcas, we ended up restricting this method to only work with DataRows (or some derived type) via a generic constraint on the method.
The reason for this was simply resource constraints. When we started to design how the real version of CopyToDataTable<T> should work, we realized that there are a number of potentially interesting mappings between objects and DataRows and didn't have the resources to come up with a complete solution. Hence, we decided to cut the feature and release the source as a sample.
Surprising to us, a lot of folks noticed this and were wondering where the feature had gone. It does make a nice solution for dealing with projections in Linq in that one can load instances of anonymous types into DataRows.
So as promised, below is sample code of how to implement CopyToDataTable<T> when the generic type T is not a DataRow.
A few notes about this code:
1. The initial schema of the DataTable is based on schema of the type T. All public property and fields are turned into DataColumns.
2. If the source sequence contains a sub-type of T, the table is automatically expanded for any addition public properties or fields.
3. If you want to provide a existing table, that is fine as long as the schema is consistent with the schema of the type T.
4. Obviously this sample probably needs some perf work. Feel free to suggest improvements.
5. I only included two overloads - there is no technical reason for this, just Friday afternoon laziness.
UPDATE 9/14 - Based on some feedback from akula, I have fixed a couple of issues with the code:
1) The code now supports loading sequences of scalar values.
2) Cases where the developer provides a datatable which needs to be completely extended based on the type T is now supported.
UPDATE 12/17 - In the comments, Nick Lucas has provided a solution for handling Nullable types in the input sequence. I have not tried it yet, but it looks like it works.
class Sample
{
    static void Main(string[] args)
    {
        // create sequence
        Item[] items = new Item[] {
            new Book{Id = 1, Price = 13.50, Genre = "Comedy", Author = "Jim Bob"},
            new Book{Id = 2, Price = 8.50, Genre = "Drama", Author = "John Fox"},
            new Movie{Id = 1, Price = 22.99, Genre = "Comedy", Director = "Phil Funk"},
            new Movie{Id = 1, Price = 13.40, Genre = "Action", Director = "Eddie Jones"}};

        var query1 = from i in items
                     where i.Price > 9.99
                     orderby i.Price
                     select i;

        // load into new DataTable
        DataTable table1 = query1.CopyToDataTable();

        // load into existing DataTable - schemas match
        DataTable table2 = new DataTable();
        table2.Columns.Add("Price", typeof(int));
        table2.Columns.Add("Genre", typeof(string));

        var query2 = from i in items
                     where i.Price > 9.99
                     orderby i.Price
                     select new { i.Price, i.Genre };

        query2.CopyToDataTable(table2, LoadOption.PreserveChanges);

        // load into existing DataTable - expand schema + autogenerate new Id.
        DataTable table3 = new DataTable();
        DataColumn dc = table3.Columns.Add("NewId", typeof(int));
        dc.AutoIncrement = true;
        table3.Columns.Add("ExtraColumn", typeof(string));

        var query3 = from i in items
                     where i.Price > 9.99
                     orderby i.Price
                     select new { i.Price, i.Genre };

        query3.CopyToDataTable(table3, LoadOption.PreserveChanges);

        // load sequence of scalars.
        var query4 = from i in items
                     where i.Price > 9.99
                     orderby i.Price
                     select i.Price;

        var DataTable4 = query4.CopyToDataTable();
    }

    public class Item
    {
        public int Id { get; set; }
        public double Price { get; set; }
        public string Genre { get; set; }
    }

    public class Book : Item
    {
        public string Author { get; set; }
    }

    public class Movie : Item
    {
        public string Director { get; set; }
    }
}

public static class DataSetLinqOperators
{
    public static DataTable CopyToDataTable<T>(this IEnumerable<T> source)
    {
        return new ObjectShredder<T>().Shred(source, null, null);
    }

    public static DataTable CopyToDataTable<T>(this IEnumerable<T> source,
                                               DataTable table, LoadOption? options)
    {
        return new ObjectShredder<T>().Shred(source, table, options);
    }
}

public class ObjectShredder<T>
{
    private FieldInfo[] _fi;
    private PropertyInfo[] _pi;
    private Dictionary<string, int> _ordinalMap;
    private Type _type;

    public ObjectShredder()
    {
        _type = typeof(T);
        _fi = _type.GetFields();
        _pi = _type.GetProperties();
        _ordinalMap = new Dictionary<string, int>();
    }

    public DataTable Shred(IEnumerable<T> source, DataTable table, LoadOption? options)
    {
        if (typeof(T).IsPrimitive)
        {
            return ShredPrimitive(source, table, options);
        }

        if (table == null)
        {
            table = new DataTable(typeof(T).Name);
        }

        // now see if need to extend datatable based on the type T + build ordinal map
        table = ExtendTable(table, typeof(T));

        table.BeginLoadData();
        using (IEnumerator<T> e = source.GetEnumerator())
        {
            while (e.MoveNext())
            {
                if (options != null)
                {
                    table.LoadDataRow(ShredObject(table, e.Current), (LoadOption)options);
                }
                else
                {
                    table.LoadDataRow(ShredObject(table, e.Current), true);
                }
            }
        }
        table.EndLoadData();
        return table;
    }

    public DataTable ShredPrimitive(IEnumerable<T> source, DataTable table, LoadOption? options)
    {
        if (table == null)
        {
            table = new DataTable(typeof(T).Name);
        }

        if (!table.Columns.Contains("Value"))
        {
            table.Columns.Add("Value", typeof(T));
        }

        table.BeginLoadData();
        using (IEnumerator<T> e = source.GetEnumerator())
        {
            Object[] values = new object[table.Columns.Count];
            while (e.MoveNext())
            {
                values[table.Columns["Value"].Ordinal] = e.Current;

                if (options != null)
                {
                    table.LoadDataRow(values, (LoadOption)options);
                }
                else
                {
                    table.LoadDataRow(values, true);
                }
            }
        }
        table.EndLoadData();
        return table;
    }

    public DataTable ExtendTable(DataTable table, Type type)
    {
        // value is type derived from T, may need to extend table.
        foreach (FieldInfo f in type.GetFields())
        {
            if (!_ordinalMap.ContainsKey(f.Name))
            {
                DataColumn dc = table.Columns.Contains(f.Name) ? table.Columns[f.Name]
                    : table.Columns.Add(f.Name, f.FieldType);
                _ordinalMap.Add(f.Name, dc.Ordinal);
            }
        }
        foreach (PropertyInfo p in type.GetProperties())
        {
            if (!_ordinalMap.ContainsKey(p.Name))
            {
                DataColumn dc = table.Columns.Contains(p.Name) ? table.Columns[p.Name]
                    : table.Columns.Add(p.Name, p.PropertyType);
                _ordinalMap.Add(p.Name, dc.Ordinal);
            }
        }
        return table;
    }

    public object[] ShredObject(DataTable table, T instance)
    {
        FieldInfo[] fi = _fi;
        PropertyInfo[] pi = _pi;

        if (instance.GetType() != typeof(T))
        {
            ExtendTable(table, instance.GetType());
            fi = instance.GetType().GetFields();
            pi = instance.GetType().GetProperties();
        }

        Object[] values = new object[table.Columns.Count];
        foreach (FieldInfo f in fi)
        {
            values[_ordinalMap[f.Name]] = f.GetValue(instance);
        }
        foreach (PropertyInfo p in pi)
        {
            values[_ordinalMap[p.Name]] = p.GetValue(instance, null);
        }
        return values;
    }
}
Great thanks!!! This is good way!
Sorry, entry moved to
–rj
This is great and was exactly what I needed. I do get a parameter mismatch error in
foreach (PropertyInfo p in pi)
{
values[_ordinalMap[p.Name]] = p.GetValue(instance,null);
}
if I use:
var a = (from m_var in dc.Ptabs
select m_var.CAR ).Distinct();
DataSet ds = new DataSet();
ds.Tables.Add(a.CopyToDataTable());
but not if I do this
var a = (from m_var in dc.Ptabs
select new { Car = m_var.CAR }).Distinct();
DataSet ds = new DataSet();
ds.Tables.Add(a.CopyToDataTable());
thanks – there are a couple of problems here:
1) The code is not catching the error case when the type T of the source sequence does not match the schema of the provided datatable. I suppose I could extend the table automatically in this case.
2) The result of your query is just a sequence of scalar values. The code wasn't really designed for this and I am not sure I see much value, but I suppose I could just make a table with a single column.
I will update the sample code to fix these issues.
Thanks
This seems very helpful, is there a straightforward way to implement in vb.net?
Hi, this post was helpful.
However it seems to have problem when used with nullable types.
I am using LINQ to SQL data context classes to store database tables. Then I query these tables, get a result of type "var", and convert it to a DataTable using this code.
Some of the columns are of a nullable type, so during conversion I receive an error saying "DataSet does not support System.Nullable<>"
inside the ExtendTable method, when the code tries to add columns to the "table"
Pls let me know if you have any suggestions/workaround to this problem.
Thanks in advance
Regards,
Neeta
ah – I will try to get the code working with nullable types over the holidays.
Change the code to be this in order to handle nullable types:
foreach (PropertyInfo p in type.GetProperties())
{
if (!_ordinalMap.ContainsKey(p.Name))
{
Type colType = p.PropertyType;
if ((colType.IsGenericType) && (colType.GetGenericTypeDefinition() == typeof(Nullable<>)))
{
colType = colType.GetGenericArguments()[0];
}
DataColumn dc = table.Columns.Contains(p.Name) ? table.Columns[p.Name]
: table.Columns.Add(p.Name, colType);
_ordinalMap.Add(p.Name, dc.Ordinal);
}
}
Wow thank you so much! This allows me to use a LINQ query in my DAL and then bind an Object Data Source control to it. Then I can bind the GridView to the Object Data Source control and turn on sorting and paging and it all works!
Thank you all.Is there a VB version of the complete code anywhere else?Any help appreciated much.
I am trying to use the above idea to convert entities into datatables. However, the linq query does not expose the CopyToDataTable() method. Am I missing something here?
Make sure you use the namespace the you defined the DataSetLinqOperators in.
A number of people have asked me for a VB version of the CopyToDataTable<T> sample I wrote a few
I cannot find any CopyToDataTable() method……
btw, i think if we just want a datatable , using the code above is
sooooooooooooo complex.
Awesome.
I added in the change for Nullable types by Nick and the whole thing is working beautifully. Only thing of note is that I had to change the method names due to a conflict. I am going to check that out.
Cheers.
There was a CopytoDataTable method in early betas of LINQ but then it disappeared. C#: Andy Conrad on
any code out there that can help load a dlinq object FROM a datatable? Im workin with webservices and still passing datasets, so would like to load up a bunch of dlinq entities from the datatables and commit them to the db… probably easier to just use the datatables, i guess…
LINQ to DataSet中实现CopyToDataTable
A utilização do LinQ em projetos dentro do TJMT forçou uma estrutura de projeto, mas ainda não estamos
I was looking the same thing.
I tried the above solution but for various reasons I was not satisfied.
One of the biggest reasons was that I like using Typed Datasets.
So I tried to create my own convertion method.It stated as a proof o concept and later became something that could be done.
Here is the solution I propose
Hi,
There seems to be a problem with the ExtendTable routine and I’m not sure how to solve it. In the routine ExtendTable there is the following code:
For Each p As PropertyInfo In type.GetProperties()
If Not _ordinalMap.ContainsKey(p.Name) Then
Dim colType As Type = p.PropertyType
If (colType.IsGenericType) AndAlso (colType.GetGenericTypeDefinition() Is GetType(Nullable(Of ))) Then
colType = colType.GetGenericArguments()(0)
End If
Dim dc As DataColumn = IIf(table.Columns.Contains(p.Name), table.Columns(p.Name), table.Columns.Add(p.Name, colType))
_ordinalMap.Add(p.Name, dc.Ordinal)
End If
The issue seems to be that type.GetProperties() returns the columns in alphabetical order instead of the order returned from the query. Can anyone offer some ideas on how to get them back in the right order, or at least how to construct the DataTable with the columns in the right order?
Thanks … Ed
I’m having the same issue as Ed. Has anyone figured how to return columns in the same order as the query?
Thanks,
For me it throws exactly the same compilation error
error CS0311: The type ‘AnonymousType#1’ cannot be used as type parameter ‘T’ in the generic type or method ‘System.Data.DataTableExtensions.CopyToDataTable<T>(System.Collections.Generic.IEnumerable<T>, System.Data.DataTable, System.Data.LoadOption)’. There is no implicit reference conversion from ‘AnonymousType#1’ to ‘System.Data.DataRow’.
My code is :
DataTable tableSimCnfCopy = tableSimCnf.Clone ();
var varLst2 =
from car in tableSimCnf.AsEnumerable()
select new
{
ModelID = car.ModelID,
VehiclePrice = car.VehiclePrice,
APR24PercDown = car.APR24PercDown,
APR36PercDown = car.APR36PercDown,
APR48PercDown = car.APR48PercDown,
APR60PercDown = car.APR60PercDown,
APR72PercDown = car.APR72PercDown
};
varLst2.CopyToDataTable(tableSimCnfCopy, LoadOption.PreserveChanges);
The last line is where I'm getting this compile error.
Welcome to the third installment of "Twisted Web in 60 seconds". The goal of this installment is to show you how to serve different content at different URLs using APIs from Twisted Web (the first and second installments covered ways in which you might want to generate this content).
Key to understanding how different URLs are handled with the resource APIs in Twisted Web is understanding that any URL can be used to address a node in a tree. Resources in Twisted Web exist in such a tree, and a request for a URL will be responded to by the resource which that URL addresses. The addressing scheme considers only the path segments of the URL. Starting with the root resource (the one used to construct the Site) and the first path segment, a child resource is looked up. As long as there are more path segments, this process is repeated using the result of the previous lookup and the next path segment. For example, to handle a request for /foo/bar, first the root's "foo" child is retrieved, then that resource's "bar" child is retrieved, and then that resource is used to create the response.
With that out of the way, let's consider an example that can serve a few different resources at a few different URLs.
First things first: we need to import Site, the factory for HTTP servers, Resource, a convenient base class for custom pages, and reactor, the object which implements the Twisted main loop. We'll also import File to use as the resource at one of the example URLs.
from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.internet import reactor
from twisted.web.static import File
Now we create a resource which will correspond to the root of the URL hierarchy: all URLs are children of this resource.
root = Resource()
Here comes the interesting part of this example. I'm now going to create three more resources and attach them to the three URLs /foo, /bar, and /baz:
root.putChild("foo", File("/tmp"))
root.putChild("bar", File("/lost+found"))
root.putChild("baz", File("/opt"))
Last, all that's required is to create a Site with the root resource, associate it with a listening server port, and start the reactor:
factory = Site(root)
reactor.listenTCP(8880, factory)
reactor.run()
With this server running, http://localhost:8880/foo will serve a listing of files from /tmp, http://localhost:8880/bar will serve a listing of files from /lost+found, and http://localhost:8880/baz will serve a listing of files from /opt.
Here's the whole example uninterrupted:
from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.internet import reactor
from twisted.web.static import File
root = Resource()
root.putChild("foo", File("/tmp"))
root.putChild("bar", File("/lost+found"))
root.putChild("baz", File("/opt"))
factory = Site(root)
reactor.listenTCP(8880, factory)
reactor.run()
Next time I'll show you how to handle URLs dynamically. Also, hey! I want your feedback. Do you find these posts useful? Am I presenting the information clearly? Tell me about it.
These posts are useful. Keep them coming. I'm just getting into Twisted.
Hi JP. Yes, they're great! Are you on Twitter? I guess I could do a search...
And, thanks.
Hi Terry,
I can't imagine what I would ever use Twitter for, so I've never signed up.
Thanks for the feedback. :)
Thanks for the post.
I'm looking at the Twisted source. It looks like file operations are blocking... it's just using the standard Python "open" call and isn't setting the os.O_NONBLOCK flag.
If file IO is blocking, and everything runs in a single OS thread under Twisted, how performant is this? Should we still delegate serving of static content to a separate web server (whether it's nginx, lightty, etc)?
Thanks again.
Yes and yes. How deep do you intend to take this series?
I'm enjoying the posts, too. I like that each one is very focused
but that in sequence they tell a bigger story.
Yes, twisted.web.static.File is implemented in terms of the normal, blocking file I/O calls. If reading data from the filesystem is slow, then this can be a problem. Generally, "slow" could reasonably mean that the filesystem is actually mounted from the network (eg NFS). Since this typically isn't the case, it's not very common to need to worry about it. For a normal, local filesystem, file I/O only blocks for a very short period. Overall, it's something to worry about later, not now.
There are a variety of options for dealing with the issue when you do need to consider it. Serving static content with another web server is certainly one. At some point, you don't even really care what the web server is, you just want to dump your static data onto a CDN and move on. :) However, it's also possible to implement something like File that makes use of some platform's asynchronous I/O APIs. O_NONBLOCK doesn't really help here - POSIX more or less lets systems pretend that file-based I/O is always non-blocking. O_NONBLOCK is around mostly for FIFOs and for its interaction with fcntl(2). However, Windows does have real asynchronous file I/O (via IOCP) and it's possible that someday Linux and other POSIX platforms will too. Any of these could be used to create a more event-driven-friendly version of File.
Yes they are useful. I hope after this series you start another one "Twisted in 5 minutes" to include some more advanced concepts ;)
This is exactly the types of articles and examples I'd have loved to see when I was first delving into twisted.web.
Thanks for the feedback. :)
I don't think Twisted Web itself is very deep. I'll go to the end, or so, I think. Suggestions for topics to cover are welcome, of course.
Just starting with twisted. It's a great framework and I'm learning a lot from it. Your posts make it really enjoyable!
I'm waiting for the next one...
Thanks for that!
I have a server with an ad hoc, informally-specified, bug-ridden, slow implementation of half of Twisted, which I want to supplement with an internal webserver. (Preferably on the same port as its other protocol, evil as that may sound...) After looking at Twisted's documentation, I gave up and started looking at other asynchronous frameworks; you've given me hope that I can use it after all.
Isn't there AIO for such purposes? Though I don't know if it's stable enough.
Another solution would be a separate worker thread which would take requests out of a queue, serve them and report the results in a loop; the Twisted reactor would then fill the queue and poll for interesting events from that worker.
Don't know if it smells too much like duct tape solution, but at least we would have more or less complete asynchronous support for all I/O bound operations.
The AIO APIs available for POSIX and on Linux are among the APIs which may someday be good enough to use to solve this problem, yes. :) There seems to be little interest from Linux kernel developers to actually solve this problem, though, so it may take a while.
You're absolutely right that a userspace threadpool could also be used to do this, though. Great point.
Roomate and I are excited. This is illuminating. Thanks a lot I feel indebted :D | http://as.ynchrono.us/2009/09/twisted-web-in-60-seconds-static-url_19.html | CC-MAIN-2021-04 | refinedweb | 1,265 | 67.04 |
Since the release of SQL Server 2012, users can choose between two different versions of PowerPivot for Excel: SQL Server 2008 R2 and SQL Server 2012. Both PowerPivot versions are free of charge, but the 2012 add-in includes more features than its predecessor and offers several improvements to the user interface, so it’s reasonable to assume that users would want to upgrade. However, not all users are rushing at once to the Microsoft Download Center. Some are perfectly happy with the 2008 R2 add-in, others might not even be aware that a 2012 version exists, and yet others might want to upgrade but can’t because they don’t have permissions to install software on their workstations. Whatever the reason, if your users are working with different PowerPivot add-ins, they might encounter interoperability issues on the desktop and in SharePoint environments.
On the desktop, the 2008 R2 and 2012 PowerPivot add-ins are not 100% compatible. The 2012 version can open a 2008 R2 PowerPivot workbook. You can interact with the data, such as by clicking on slicers, but when you try to display the underlying data tables in the PowerPivot window, you are prompted to upgrade the model, as in the following screenshot. The 2012 PowerPivot add-in cannot simply modify a 2008 R2 model because this would introduce incompatible database features. So, the 2012 add-in must first upgrade the model, which can take some time depending on the size of the workbook.
Having to wait for an upgrade to finish is inconvenient, but more importantly, if you decide to save your upgrade changes, you can no longer share your workbook with 2008 R2 users. These users can still open this workbook, but they cannot interact with the data anymore. Clicking on a slicer produces an error that the initialization of a data source failed, and trying to open the PowerPivot window results in another error, shown in the following screenshot. Because the 2008 R2 add-in is not forward compatible, it cannot open the upgraded model. You must now use the 2012 PowerPivot add-in. Downgrading a model is not an option.
In SharePoint, interoperability is likewise limited. SQL Server 2008 R2 PowerPivot for SharePoint does not support workbooks created with the 2012 version of PowerPivot for Excel because the 2008 R2 Analysis Services instance running on the PowerPivot application server cannot load a 2012 model. If you upload a 2012 workbook to SharePoint and try to interact with it in the browser, you receive an error that the system cannot refresh the data for a data connection, and the ULS log on the server might show a ConnectionInfoException, as in the following screenshot. So, make sure you upgrade your farms to SQL Server 2012 PowerPivot for SharePoint if your users want to upload and share 2012 PowerPivot workbooks.
Unfortunately, SQL Server 2012 PowerPivot for SharePoint isn’t free of challenges either when it comes to different workbook versions. For starters, make sure you install the Analysis Services OLE DB Provider (MSOLAP) from the SQL Server 2008 R2 SP1 Feature Pack on your application servers running Excel Calculation Services (ECS). Otherwise, you might find that ECS has trouble loading 2008 R2 workbooks. You might also need the MSOLAP provider from the SQL Server 2012 Feature Pack to load 2012 PowerPivot workbooks. SQL Server 2012 Setup installs this MSOLAP version automatically with PowerPivot for SharePoint, but if you run PowerPivot on separate application servers you must install the 2012 MSOLAP provider on your ECS servers explicitly. Note that you also need to install SQL Server 2012 Management Tools on your application servers because of an XMLA assembly dependency introduced with the 2012 version of MSOLAP. For details, check out the MSDN article “Install the Analysis Services OLE DB Provider on SharePoint Servers” at
Why do you need multiple MSOLAP versions? The reason is that Excel 2010 registers version-specific MSOLAP references in the data connections of your PowerPivot workbooks. As the following screenshot reveals, data connections in 2008 R2 PowerPivot workbooks point to MSOLAP.4 and data connections in 2012 workbooks point to MSOLAP.5. MSOLAP.4 is the 2008 R2 data provider. MSOLAP.5 is the 2012 data provider. ECS recognizes the references and expects to find the corresponding provider versions on the application server. If the specified provider is not installed, ECS cannot establish the data connection and cannot load the model.
With the 2008 R2 and 2012 MSOLAP providers installed on your ECS application servers in a 2012 PowerPivot farm, you can load and interact with 2008 R2 and 2012 PowerPivot workbooks in the browser. SQL Server 2012 PowerPivot for SharePoint also supports the Workbooks as a Data Source feature for both workbook versions. However, the Scheduled Data Refresh feature is only available for 2012 PowerPivot workbooks. For the same reasons that the 2012 PowerPivot add-in for Excel cannot modify a 2008 R2 model on the client, the 2012 version of PowerPivot for SharePoint cannot refresh a 2008 R2 model on the server. If you haven’t enabled AutoUpgrade, you must first upgrade the model in the Excel client to the 2012 version before Scheduled Data Refresh can succeed.
By default, 2012 PowerPivot for SharePoint does not automatically upgrade 2008 R2 workbooks, but it’s possible to change this behavior by enabling AutoUpgrade. You can select the option to Upgrade workbooks to enable data refresh in the PowerPivot Configuration Tool. You can find this option on the Create PowerPivot Service Application page for new installations or on the Upgrade PowerPivot System Service page for upgrades from 2008 R2 PowerPivot for SharePoint, as in the following screenshot. You can also set this option by using the Set-PowerPivotSystemService cmdlet. The following command enables the AutoUpgrade feature:
Set-PowerPivotSystemService –WorkbookUpgradeOnDataRefresh:$true
One word of caution, though. Do not enable AutoUpgrade without first preparing your users. Here at Microsoft, AutoUpgrade was enabled in one of the main farms without further notification and then helpdesk received numerous calls from users complaining about corrupted workbooks. Of course, the workbooks were perfectly fine, they were merely auto-upgraded to the 2012 version when Scheduled Data Refresh ran, so the 2008 R2 add-in could no longer load the models, as explained earlier. Upgrading the clients to the 2012 version of PowerPivot for Excel solved the issue, but it still was an unpleasant experience for users and helpdesk. It doesn’t have to be this way. Make sure your users upgrade their clients before you enable AutoUpgrade in PowerPivot for SharePoint. On the other hand, if you cannot ensure that your clients are ready, don’t enable this feature. Scheduled Data Refresh will fail for 2008 R2 workbooks in this case, but users can still download, upgrade, and re-upload their workbooks individually to work around this issue. In a way, this approach helps to accelerate 2012 add-in adoption because upgrading a workbook requires the user to have the 2012 PowerPivot add-in for Excel installed.
Putting it all together, perhaps the best approach to move to SQL Server 2012 PowerPivot across an organization is to focus on upgrading SharePoint farms first without enabling AutoUpgrade. Don’t forget to inform your users that Scheduled Data Refresh no longer succeeds for existing workbooks after the upgrade unless they install the 2012 PowerPivot add-in on their desktops and upgrade their workbooks manually. Upgrading your farms to 2012 PowerPivot ensures that 2008 R2 and 2012 users can share their workbooks in SharePoint. Now, you can focus on upgrading the client base, and then you can optionally enable AutoUpgrade to unblock Scheduled Data Refresh for all remaining old workbooks in your farms. Keep in mind, however, that workbooks are only upgraded as part of the data refresh process, so workbooks that don’t participate in Scheduled Data Refresh are not affect by AutoUprade and remain 2008 R2 workbooks.
In any case, the move from 2008 R2 to 2012 PowerPivot isn’t frictionless. At times, you might encounter an issue and then it is useful to verify the workbook version. This is tricky because 2008 R2 and 2012 PowerPivot workbooks look alike and error messages are sometimes vague, such as “Unable to refresh the data for a data connection.” Fortunately, you can determine the PowerPivot version of a workbook with relatively little effort, as the following PowerShell script function demonstrates, which I created with the help of developer Leonid Lyakhovitskiy. Thanks Leonid for clarifying where exactly the PowerPivot add-ins store their version information.
# The GetPowerPivotVersion function returns the
# the build number of the Microsoft.Office.PowerPivot.ExcelAddIn.dll
# that was last used to save the PowerPivot model.
# For example:
# 10.50.1600.1 corresponds to SQL Server 2008 R2 RTM PowerPivot for Excel 2010.
# 11.0.2100.60 corresponds to SQL Server 2012 RTM PowerPivot for Excel 2010.
# 00.0.0000.00 indicates that the specified workbook is not a PowerPivot workbook.
Function GetPowerPivotVersion($fileName)
{
# Initially, assume this isn’t a PowerPivot workbook.
# i.e. there is no PowerPivotVersion.
$ppVersion = “00.0.0000.00”
try
{
# Start Excel and open the workbook.
$xlApp = New-Object -comobject Excel.Application
$wbk = $xlApp.Workbooks.Open($fileName)
try
{
# Retrieve the version info from the PowerPivotVersion custom XML part.
$xlPart = $wbk.CustomXMLParts.SelectByNamespace(“”)
$version = $xlPart.Item(1).SelectSingleNode(“//ns0:CustomContent”)
$ppVersion = $version.Text
}
catch
{
try
{
# The PowerPivotVersion custom XML part was not found.
# Check the SandboxNonEmpty value to determine if this is a 2008 R2 RTM workbook.
$xlPart = $wbk.CustomXMLParts.SelectByNamespace(“”)
$nonEmpty = $xlPart.Item(1).SelectSingleNode(“//ns0:CustomContent”)
if($nonEmpty.Text -eq “1”)
{
# SandboxNonEmpty value is 1, so this is a 2008 R2 RTM workbook.
$ppVersion = “10.50.1600.1”
}
}
catch
{ # SandboxNonEmpty not found = plain workbook.
# Just suppress the exception…
}
}
# Close the workbook and quit Excel
$wbk.Close($false)
$xlApp.Quit()
}
catch
{
Write-Error $_
}
#return the results
return $ppVersion
}
The GetPowerPivotVersion function starts Excel, loads the specified workbook, and then reads the version information from a PowerPivotVersion custom XML part. If it is found, its value is returned—but note that there is one special case: Original 2008 R2 PowerPivot workbooks don’t contain this PowerPivotVersion part. So, if the version info cannot be found, GetPowerPivotVersion checks another custom XML part called SandboxNonEmpty to see if a PowerPivot model exists. If it does, the workbook is a 2008 R2 PowerPivot workbook and the function returns the version of the 2008 R2 RTM add-in. Otherwise, it’s a plain Excel workbook without a PowerPivot model and the function returns 00.0.0000.00.
And that’s it. GetPowerPivotVersion works with file paths as well as workbook URLs if you have the Desktop Experience installed on your workstation (). Of course, this script function is not super-fast or optimized for bulk-operations because it launches Excel for every file, but you could use it to iterate through a collection of workbooks in a document library. Here’s another script function that does just that and a screenshot of a test run in the PowerShell Integrated Scripting Environment (ISE).
# Iterates through the specified document library in the referenced site
# and returns the PowerPivot version for all workbooks in this library
# in form of a hashtable.
Function GetWorkbookVersions($siteUrl, $docLib)
{
# Create an empty hashtable
$wbkInfo = @{}
try
{
# Instantiate the SharePoint Lists Web Service
$uri = $siteUrl + “/_vti_bin/Lists.asmx?wsdl”
$listsSvc = New-WebServiceProxy -Uri $uri -UseDefaultCredential
# Create the request to retrieve all the default fields
$xmlReq = New-Object -TypeName System.Xml.XmlDocument
$viewFields = $xmlReq.CreateElement(“ViewFields”)
# Get the list of all the items in the document library
$nodeListItems = $listsSvc.GetListItems($docLib, $null, $null,
$viewFields, $null, $null, $null)
# Iterate through the items in the list
foreach($item in $nodeListItems.data.row)
{
# Make sure we are dealing with a workbook
if($item.ows_DocIcon -eq “xlsx”)
{
# Get the PowerPivot version and add it
# together with the workbook URL to the hashtable
$wbkInfo[$item.ows_EncodedAbsUrl] = GetPowerPivotVersion $item.ows_EncodedAbsUrl
}
}
}
catch
{
Write-Error $_
}
# Return the results
return $wbkInfo
}
As you can see, the version numbers in my document library are quite all over the place, which is not astonishing because I kept updating my PowerPivot add-in during the course of the development work. I hope you’ll find these script functions helpful. As always, keep in mind that the code has not been sufficiently tested, so don’t use it in a production environment. Feel free to modify the code according to your needs and use it at your own risk.
Join the conversationAdd Comment
quite nice post..thanks
quite nice post..thanks
hi exact pproblem i have,
how can i convert powerpivot workbook developed in 2012 to 2010 sql version? already i developed many charts and published in gallery, again i cant do rework.. any tricks? and tips? my workbook contains only one connection to sharepoint list
Hi,
Thanks for the above information, it was quite helpful
While I was inquisitive about the structure of CustomXML's found in Excel 2013 PowerPivot package.
I was a little confused about how does the Custom XML's stores structure about the tables itself.
it contains a Custom Xml for each table that is present in the PowerPivot window, however fields like ColumnFormat do not get updated.
So what are those fields for anyways?
And another Custom Xml with namespace :
What does this Xml refer to?
It would be really helpful if you could answer me on this.
Thanks a lot in advance | https://blogs.msdn.microsoft.com/analysisservices/2012/07/18/supporting-multiple-powerpivot-versions/ | CC-MAIN-2017-51 | refinedweb | 2,238 | 53 |
Hello -
I'm just starting into the world of C++. While I'm sure that these answers will come eventually, I have a couple of simple questions that I would like to know now please.
using namespace std;
Is this like a library named 'std'?
How do I know when/if to use that line of code?
Where does this information physically reside?
How will I know if my comuputer has it or not?
cout << "Hello World";
To use cout, do I need a namespace reference or an include reference?
How will I know which one to use?
If I need an include reference, how will I know which include to use?
For example, if I need #include <iostream> before I use cout, how do I know iostream is 'related' to cout? How would I find out what else is 'related' to #include <iostream>?
How do I know if my computer has #include <iostream>?
Do the contents of <iostream> ever change?
When I compile my program to run on another machine, would the other machine need the namespace std and the <iostream>?
Thanks | http://cboard.cprogramming.com/cplusplus-programming/85566-newbie-questions.html | CC-MAIN-2015-48 | refinedweb | 183 | 83.76 |
Hi! First off, thank you for this amazing work, I really love it! I just started trying this out and I am getting some funky twitchy behavior on diagonal movement between divs in Internet Explorer 7... All other browsers are fine... Wondering if anyone had any thoughts?
testpage:
using this version and localscroll 1.2.7
and the function (I am no Java freak, just getting started):
jQuery(function( $ ){
    $.localScroll.defaults.axis = 'xy';
    $('#navigation').localScroll({
        target: '#container',
        duration: 1000,
        onBefore: function( e, anchor, $target ){
        },
        onAfter: function( anchor, settings ){
        }
    });
});
Hi Aiwazz
That looks odd! Browsers suck at redrawing. You'll need to try playing with the CSS; it'll probably work faster without the absolute positioning (just guessing).
If nothing works, you could opt for using the 'queue' setting.
Thanks Ariel! I did so last night (queue:true) and got everything smooth in all browsers... I'll play around with dif scroll methods/css though to find the culprit. What a great script, thanks again!
You need to return false to avoid the default behavior.
Just wanted to say thanks for helping me with that Ariel. I know bearing with java dorks like me for people at your level must be a bore but I appreciate it immensely!
Hi Ariel.. Got the mousewheel implemented. Pretty awesome. Thanks so much..
@aiwazz
You're welcome :)
@Arthur
Glad it worked!
Is there any method for using this plugin with newly created DOM elements (after page load)?
Hi Ariel,
This is really awesome stuff, very good job.
I am trying to build a news ticker (vertical scroll) in my application, with a top-to-bottom scrolling animation. I can't get this working based on the news ticker sample app you have. The animation works bottom-to-top, but not the other way around?
Your comments would really be appreciated. Thanks again.
@Synd
The plugin works for new elements (scrollTo). If you mean localScroll or serialScroll, then there's a setting called "lazy" for that.
@Kapil
It's hard to tell without a demo.
ScrollTo in action:
So-so design, great plugin. Tx :)
Pretty cool Emlyn :)
Ariel,
Any way to know when the end of scrollable region has been reached? For an example, check out either Blog, Awards, or Mission at - I'd like to display a message like "Bottom of page" when the user clicks Next towards the bottom of the screen and nothing happens (because there's no more scrolling to be done).
The psudo-code I'm using is basically :
$(".scrollNext").bind("click", function(){
    $.scrollTo("+=" + $aCertainAmountToGetToTheNextItem);
});
Thanks.
Maybe something like this:
var $elem = $(...);
var current = $elem.scrollTop();
var max = $.scrollTo.max( $elem[0], 'y' );
if (current == max)
....
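Putting Ariel's sketch to work — a hedged example of the "Next" handler with the end-of-content check (the ids "#scrollNext"/"#container"/"#message" and the 300px step are invented for illustration; $.scrollTo.max is the plugin helper shown above):

```javascript
// Pure boundary check, kept separate so it can be tested without a DOM.
function atBottom(current, max) {
  return current >= max;
}

// jQuery wiring -- only runs when jQuery and ScrollTo are loaded.
if (typeof jQuery !== "undefined" && jQuery.scrollTo) {
  jQuery("#scrollNext").bind("click", function () {
    var $box = jQuery("#container");
    var max = jQuery.scrollTo.max($box[0], "y");
    if (atBottom($box.scrollTop(), max)) {
      jQuery("#message").text("Bottom of page"); // nothing left to scroll
    } else {
      $box.scrollTo("+=300px", 400, { axis: "y" }); // relative scroll
    }
    return false; // suppress the link's default behavior
  });
}
```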
I've been trying for the better part of two days to get jQuery.ScrollTo to work for me. But, I don't completely understand javascript, so I'm not exactly sure what I'm messing up with--except that I absolutely cannot get the slow scroll going--though I CAN get basic navigation working. But the effect I want is to slowly move over a large page.
Basically, I'm trying to achieve the navigation effect of with 4 quadrants, rather than 6.
I've loaded my test page on my sandbox:
This is exactly what I needed today. Tku tons for developing this. Your demos are really clear and a great resource as well.
This is such a brilliant script. Some time back I saw a site that used this script to great effect.
How do you create the min-versions? I want to create some for my plugin
@Lars
I used to do hand-made minification using Dean's Packer, it was a bit tedious.
I now use the YUI compressor and run it using the command line (the Makefile's in there).
Now that it is automated, it's so much nicer.
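For anyone wondering what that looks like in practice, here is a minimal rule in the spirit of the Makefile Ariel mentions (the jar name/version and file names are assumptions — use whatever you downloaded):

```make
# Minify the plugin with the YUI Compressor (requires Java).
YUI = yuicompressor-2.4.2.jar

jquery.scrollTo.min.js: jquery.scrollTo.js
	java -jar $(YUI) --type js $< -o $@
```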
Hey Ariel, Thanks for addding the scrolling limit, that came in handy.
Hi Ariel,
First of all, I'd like to thank you for this, it's wonderful!
Secondly, I want to reiterate what someone above said: it took me quite a while to figure out how to make things work. The problem was that I was unaware of the necessity of an init.js file; I don't think it was referred to anywhere in the documentation (or I just missed it). For those of us who are relatively new to JavaScript/jQuery, it might be helpful to mention this somewhere. This is not a complaint by any means, just an observation.
Lastly, I've been experimenting with the plugin a little bit and I'm having some issues. Essentially what I am trying to do is create a horizontally-oriented website that scrolls (horizontally) to the appropriate content. I have that working just fine. Additionally, I want there to be a "More Info" section that scrolls (vertically) to some content immediately below. This is where I am running into issues. What is happening it seems is that the page reorients itself to its origin and then begins to scroll vertically down from that location rather than that of the link. Sorry if that's a bit confusing, I have an example here:
If you click the first link, "About Anna's Hope" it will scroll you to the next section. There click the last link, "More Info" and it will instantly (as opposed to scrolling) return to the origin and THEN scroll down instead of scrolling down from where the "More Info" link is.
Any ideas?
Sorry if this is an obvious question, I'm still learning :D
Thanks!
Sweet tool.
I created 2 DIV tags and I wanted to slide the DIV on the right under the DIV tag on the left.
Works well in IE 7 but on FireFox 3.51 I get a ripping effect when text from the right div slides under the left div.
Does anyone know how I can avoid that?
Thank You
@Ben
I'll see what I can do about it once I'm a little more free.
@eyeCoder
Got a demo online? If you kept the scrollbars, they're known to bust the animation in Firefox.
Please, can you show me (us) a simple example? That would help me understand faster and more easily.
The demo has a huge source!
I'd like to see how to scroll a div that is 3000px wide to its middle horizontally.
Hi
ScrollTo's demo is meant to show all the settings you can use.
To scroll a div to the middle do this:
$('#the_div').scrollTo('50%',900, {axis:'x'});
That's the basics.
I've been using scrollTo for quite some time, but am now looking to change things up a bit.
I am using the Coda slider, serialScroll, scrollTo, and localScroll.
What I want to do is load a specific panel, the middle one or #center on loading the page. So it will be basically coming in to that panel with two above it and two below. By the way I have vertical scrolling activated.
My sandbox is here:
I've played with various settings, checked related posts and jquery documentation and cannot for the life of me figure this out.
I'd appreciate a punt in the right direction, but understand if you have better things to do.
Thanks for all your hard work.
Damon
Hi Ariel, thanks for the great plugin... I've used it on a website for both horizontal and vertical scrolling and it is working perfectly fine in all browsers except Opera. The problem is with the Ajax calls. Whenever a portion of the page is updated using a .NET UpdatePanel, the page scrolls back to the first panel. Can you help me with this? Thanks
I've used the plugin to create a simple scrolling text pane. At the bottom of my text pane I have an 'up' arrow and a 'down' arrow, which the user clicks to make the text 'slide' up or down. This was very easy to set up and works perfectly.
However, is there any way to hide the 'up' arrow if the text is at the top or hide the 'down' arrow if the text is at the bottom?
Thanks :)
@Nomados
Ok, the links below are working, the one that says "Blog" throws an error when clicked (check with Firebug).
I'm not sure I understood what is that you need and what's the problem.
@Test
Only in opera ? odd. You could call scrollTo after refreshing to reset it ? do you have anything online for me to see ?
@Matt
If the text is enclosed in certain elements, like divs, you could use serialScroll, which has that option as a snippet. If you don't, then you'll need to check the scrollTop and see if it's at the limit.
If you get me a link I'll check.
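A sketch of that scrollTop check for Matt's case, with the boundary logic pulled into a plain function (the ids "#pane", "#up" and "#down" are invented; $.scrollTo.max is the same helper Ariel shows elsewhere in this thread):

```javascript
// Decide which arrows should be visible for a given scroll position.
function arrowVisibility(scrollTop, max) {
  return {
    up: scrollTop > 0,    // hide "up" when already at the top
    down: scrollTop < max // hide "down" when at the bottom
  };
}

// jQuery wiring -- only runs when jQuery and ScrollTo are loaded.
if (typeof jQuery !== "undefined" && jQuery.scrollTo) {
  var $pane = jQuery("#pane");
  var refreshArrows = function () {
    var v = arrowVisibility(
      $pane.scrollTop(),
      jQuery.scrollTo.max($pane[0], "y")
    );
    jQuery("#up").toggle(v.up);
    jQuery("#down").toggle(v.down);
  };
  $pane.scroll(refreshArrows); // re-check after every scroll
  refreshArrows();             // and once on load
}
```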
Ariel,
I'm sorry for not being clearer.
My page has 5 panels (top, centertop, center, centerbot, bot) using the Coda slider and your associated JS goodies. When the page loads, the "top" panel is visible, which is the default (top-down structure). I was hoping there was a way I could use ScrollTo to change the default behavior and have the "center" panel become visible after the page loads. This would put the visitor to the site into the middle of the panels, so that they could choose to go either up the panel tree or down. (I do not have the graphical links for the in-panel navigation created yet.) I can accomplish what I want by pointing my A record to but would rather use ScrollTo to make the change if that is possible.
I tried first changing in SerialScroll start: from 0 to 2 (center panel) but there was no effect. So I thought that the ScrollTo held the ability to make the changes I desired.
Thanks again for your time
Hope that is clearer.
@nomados:
just use this:
$( document ).ready( function() {
    $( '#yourContainer' ).scrollTo( '#center', 500 );
} );
I have a different question though. Can you scroll to different DIVs in such a way that they would be centered on the page?
Example:
I'd like the grey boxes to show in the center of the screen.
Thanks for the help.
@Bostjan - Sorry for the green behind my ears, but where do I put this code?
In the HTML file? the scrollTo.js?
Again I apologize for my lack of knowledge. I've tested it out in a couple of different places. I believe my #container is called scrollContainer so maybe I am calling it out wrong. But I'm not getting any of the desired effect.
Thanks for your time,
Damon
I got it, was calling wrong container.
Thanks for your help.
d
hi there.
is it possible to scroll the body 100px to the right each time on a click on a certain id?
Know what I mean? Not scrolling to a fixed coordinate or anchor...
thanks and kind regards,
m.
@nomados: glad I could help
anyone have a suggestion towards my problem?
Hi! Could someone please help me...? I've seen the examples, read lots of comments, searched the web, and still haven't managed to do the following:
I have a div with fixed height and width (not published online.. so I can't show it :( ), and I want to scrollTo a span id inside the div.
So far so good! I can do it!
My problem is... since the div is bigger than the available screen, if the span I'm scrolling to is in the last row of the div, it's going to be hidden, since the browser doesn't scroll down to follow it.
The div is too big to center it in the middle of the screen...
I'm only using the following code:
$(document).ready( function(){
    $('#Div_Id').scrollTo('#span_Id');
});
(hope it's not something simple I just haven't noticed.. ehehe)
@Bostjan
You seem to have figured it out?
@Flip
So you need to scroll the window as well if necessary? That's not too hard, but not trivial either.
You'll have to do some fun calculations using jQuery's width, height, scrollTop and scrollLeft to achieve this, good luck :)
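One way those calculations could look — scroll the div to the span first, then nudge the window only if the span still falls outside the viewport (the ids come from Flip's snippet above; the arithmetic helper is an assumption, not plugin API):

```javascript
// Given where the target sits on the page (pageY), the window's current
// scroll offset and the viewport height, return the scrollTop that brings
// the target into view -- or the current one if it is already visible.
function windowScrollFor(pageY, winScrollTop, viewportHeight) {
  if (pageY < winScrollTop) {
    return pageY;                  // target is above the viewport
  }
  if (pageY > winScrollTop + viewportHeight) {
    return pageY - viewportHeight; // target is below the viewport
  }
  return winScrollTop;             // already visible, leave it alone
}

// jQuery wiring -- only runs when jQuery and ScrollTo are loaded.
if (typeof jQuery !== "undefined" && jQuery.scrollTo) {
  var $div = jQuery("#Div_Id"), $span = jQuery("#span_Id");
  $div.scrollTo($span); // scroll the inner pane first
  var target = windowScrollFor(
    $span.offset().top,
    jQuery(window).scrollTop(),
    jQuery(window).height()
  );
  jQuery(window).scrollTop(target); // then adjust the window if needed
}
```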
I'm sorry, I jumped the gun and emailed you and didn't realize you had a very nicely active support area in the comments.
My issue is the same as BEN's
I have a scrolling horizontal website using the Tiny Scroller ()
I wanted to use YOUR code to create a neat tabbed type effect for the content within the individual divs. The issue is that both of these plugins use the anchor tag to fire the effect.
So what happens is that I click on an anchor link on the main navigation, and it scrolls horizontally to the div it anchors to using this
Nav. Text
When I land on the page with your code on it (by clicking the link on my home page for "services") you will see the secondary navigation and beneath that the content that I would like for that page.
So if you click on those links for your code, you will see that the effect works just fine, it scrolls nicely and the content expands and contracts. I LOVE IT!!
But I think the Tiny Scrolling might be affecting it, because it also moves to the right as if it's looking for a div that is not fixed in the center.
I tried absolute positioning and stuff.. no such luck.
HERE IS THE LINK TO MY WEBSITE
I REALLY hope you can help me with this.. I think this addon is awesome.
also, do you get annoyed with the people yelling at you for not telling them step by step on how to work it into their own individual website? The tone in some of these comments is a little well, asshole-ish!!!
ok.. sorry.
I think I got a LITTLE bit of an idea what is going on thanks to FireBug.. but I am completely and utterly confused now.
When I view my site using FireBug, I see that your div "panel" has an element.style width of 3500px.
This is NOT in the css, and I did a find of ALL the documents and NOTHING showed up that gives it a 3500px width. So my guess is some wacky javascript code that I don't know about.
Changing this 3500px width to about 650px in Firebug ALMOST fixes my problem. Instead of scrolling all the way over, it only moves the content a tiny bit. I have a feeling that once we get this settled, a little more css (maybe with your help ::WINK WINK::) can fix this issue completely!
Sorry if I flood this area with posts, but I want to try and give you as much info as possible so that when/if you do look at it, it is a little bit easier.
again here is the link:
bsc design
Hi
It's a little confusing to have both scripts. The content shows messed up on my browser (FF2).
There's a huge white margin on the right. You seem to be including jQuery twice (no need) and thw.js is throwing an error.
Based on what you've done, I think localScroll would fit perfectly. You should check it out (and maybe remove thw?).
ok.. I would be fine with working with a different script that does the same thing if it's that easy.. I haven't gotten to the point of troubleshooting in different browsers yet; I assumed Tiny Scrolling worked in your browser because the plugin creator SAID IT DID!! (but don't they always?)
So with the local scrolling, I can use BOTH scripts successfully on the same page with no issues?
Oh yea, and I called jquery twice cause i quickly copied and pasted.. OOPS!
I am going to recreate the site using your localScroller and come back and see if there are any issues..
in the meantime, I am going to head over to browser shots and see what my site looks like in Firefox 2. It looks ok in FF3.
this is weird. I uploaded localScroll (not minified) to my local computer and linked it to my index.html using Dreamweaver, and it's saying that there is a syntax error on line one of localScroll.js
Where can I see this ? (let's move on to emails)
Just wanted to post a big giant THANK YOU for the help you gave last night. My website will now be functioning the way i envisioned and all SHOULD be well.. thanks So much..
now to head over to the donations area...
I was able to stack the DIVs on top of each other by giving the position: block to the proper divs code and removing the float:left from the encompassing #panel div...
But it seems to be reacting strange and I'm not sure if this is how it is supposed to react.
When you click on the link, you will see that the div FIRST adjusts the height to fit the next div and then scrolls to it. When it does it, it shows some of the other div that is beneath it before scrolling..
Is this because I removed the float? Or is this what it's supposed to do? I would prefer the height to change once it lands on the div that you clicked on.
You can see here:
Here is a hack to get scrollTo out of the way of Drupal sticky headers:
When you're scrolling up, use the relative syntax. Ugly, but workable:
if (up == 1) {
$.scrollTo('-=100px', {
axis: "y",
onAfter: function(){
}
});
}
});
Hello Ariel I am attempting to use scrollTo() I am simply doing this...
jQuery('.navto').bind('click', function() {
$.scrollTo(jQuery(this).attr('rel'));
return false;
});
This does not work. All the page does is jump and scroll bars appear then nothing.
@Karega
Are you doing this within a document.ready ? if you can't solve it, try to get me a demo so I can see.
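For reference, a minimal version of that wiring inside `document.ready`, sketched with a small normalizing helper; the helper name and the existence check are my additions, not part of the plugin:

```javascript
// Pure helper: make sure a rel/href value is a "#id" selector.
function normalizeTarget(rel) {
  return rel && rel.charAt(0) === '#' ? rel : '#' + rel;
}

// Browser wiring (assumes jQuery and the scrollTo plugin are loaded):
function initNavTo($) {
  $(function () {                               // document.ready
    $('.navto').bind('click', function () {
      var target = normalizeTarget($(this).attr('rel')); // e.g. rel="section2"
      if ($(target).length) {                   // only scroll if it exists
        $.scrollTo(target, 800);
      }
      return false;                             // stop the default jump
    });
  });
}
```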
Good plugin, thank you. I'm so pleased it works on so many browsers.
Showing off my implementation:
Thanks for the great plugin.
Quick question. I.
I think the demo below will illustrate it well.
@Pocky
For some reason, your demo seems to be 404 for me :(
@Ariel Flesler
Thanks for reading my question. I'll post my relevant javascript below. But my link works if you copy and paste it. Here's the link again:
test page
Relevant javascript:
$(document).ready(function() {
var $paneTarget = $('#screen');
$('#logo-home').click(function(){
$paneTarget.stop().scrollTo( 0 , 0, {duration: 1200, easing: 'swing'} );
});
$('#mainnav li#button-home').click(function(){
$paneTarget.stop().scrollTo( 0 , 0, {duration: 1200, easing: 'swing'} );
});
$('#mainnav li#button-films').click(function(){
$paneTarget.stop().scrollTo( 0 , 1600, {duration: 1200, easing: 'swing'} );
});
$('#mainnav li#button-links').click(function(){
$paneTarget.stop().scrollTo( 0 , 3200, {duration: 1200, easing: 'swing'} );
});
$('#mainnav li#button-press').click(function(){
$paneTarget.stop().scrollTo( 3200 , 0, {duration: 1200, easing: 'swing', queue: 'true', axis: 'yx'} );
});
$('#mainnav li#button-blog').click(function(){
$paneTarget.stop().scrollTo( 1200 , 1600, {duration: 1200, easing: 'swing'} );
});
});
Original question:.
Thanks again!
@Pocky
You're trying to use jQuery and fancybox within javascript.js and it is included before the other 2.
Note that you don't need to include both versions of fancybox.
Most of the calls to scrollTo that you make are scrolling to (0,0) and the second 0 is the duration (check the main post to see the right way to use it).
I'd advise you to try localScroll (also on this blog) for this.
What happened to my question?
I delete support requests when I take too long to reply, assuming most people just solved it in some other way. In order to keep this comment list readable.
Feel free to send me an email with your problem and I'll check that asap.
is it possible to nest a scroll box inside another scroll box?
how would I do this?
thanks
Ariel - How can I bind mouse wheel scrolling to a div that has height set and overflow hidden? Your examples don't really show how to do this, they all focus on logging the events that happen. I tried putting
$('#scroll-auth').mousewheel(function(event, delta) {
if (delta > 0)
log('#test2: up ('+delta+')');
else if (delta < 0)
log('#test2: down ('+delta+')');
return false; // prevent default
});
but it doesn't scroll.
What an idiot I am - I meant to post that to the mouse wheel page! sorry
Ariel - Fantastic plug in! Hello single page design with easy maintenance.
I'm having troubles understanding how to specify which #anchor gets shown when the page loads (Section 3b for example, instead of 1a).
I added Bostjan's code to the init.js file but I'm not getting anywhere...
$( document ).ready( function() {
$( '#content' ).scrollTo("#section3b", 500 );
} );
(#content is my viewer div)
Thanks for any help
I can't seem to get this to work in Firefox. Works great in IE. Is there something I'm missing? I'm new to javascript so...
hi, i created a site using this plugin..... All is going well except for IE8. When I click on a button it always starts on the homepage. I need help, anyone. please
Hi Ariel,
Many thanks for your effort.
I can't get it to work right tho.
I want the window to scroll horizontal to the middle of div, so the div is always in the middle of the view.
regardless of the resolution.
Here is my setup:
Here is what i want to achieve:
It also doesn't work in IE, even with height & width set on the containers.
I wanted to have scrollTo center the element (as did Bostjan?) I think I figured out a reasonable way to do it.
I added a 'center' option to scrollTo with the below code. I put it in the this._scrollable().each() loop in $.fn.scrollTo()
The line # is 155 in scrollTo() 1.4.2
--CODE--
if (settings.center) {
// Center in the scroll target
var dimKey = axis== 'x' ? 'width' : 'height';
attr[key] -= ($elem[dimKey]() - targ[dimKey]()) / 2;
}
-- END CODE--
This plugin is awesome. I have one problem, for some reason if I scroll to an element in a pane, and then refresh the window, the previous position persists. There is no option to turn this off. How do I reset the pane so it always loads the first element? Thanks,
Thanks for the script!
I want to make use of the UL/LI menu to link to another HTML file (doesn't use the scrollto script), but it won't let me, what do I need to do to override this?
Great script! So many ways to give it information, and then it "just goes". Excellent.
But there is one way to enter the destination which I miss.
Instead of giving it a string, such as '+=400' is there a way to feed it a variable for that string?
That is:
var shift = 400;
var dest = "+="+shift;
$.scrollTo(dest, ...)
?
Thanks!
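Since the destination is just a string, building it from a variable should work; a sketch (note the capital T in scrollTo — the lowercase call above would fail — and the 'px' suffix follows the relative syntax shown earlier in this thread):

```javascript
// Pure helper: build a relative destination string from a variable.
function relativeDest(shift) {
  return '+=' + shift + 'px';
}

// Browser usage (assumes jQuery and the scrollTo plugin are loaded):
function scrollByShift($, shift) {
  $.scrollTo(relativeDest(shift), 600);  // e.g. '+=400px' over 600ms
}
```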
hello, i don't understand why it doesn't scroll back under IE; it works fine only on Firefox, other browsers have some problem with scrolling the list
you can see at
please help
Thanks for the post! This is my scrollTo creation that came of me reading your article:
Hope you like it
Thanks again!
Pieter
Hi
I'm new to javascript, and I am trying to get ScrollTo to work, but I cannot get it to work.
Here is the code i have so far, can anyone tell me where i am going wrong.
Very cool - thx. I'm seeing a bit of inconsistency on a simple, static test page I've put up when I use IE7. Refresh after refresh it doesn't scroll to the same spot. Just wondering if my usage of "$(document.getElementById('anchor2_17').parentNode).scrollTo( document.getElementById('anchor2_17'));" is ok?
Ah, forgot my test page is at
Hey Ariel,
the ScrollTo Effekt really looks great on your Demo Page. I would like to implement it in my current project:
I linked all the .js files and modified the first link like shown on your demo page. I can't get it work though.
Do I need the init.js file?
It would be great if you could have a look on it.
Cheers!
Matthias
has anyone else experienced problems with a "jagged" diagonal animation in FF or IE?
i use scrollTo to navigate between a grid of divs (each 800x600px), and instead of smoothly scrolling diagonally, the separate vertical & horizontal steps in the animation are blatantly visible. is this a problem with my implementation, or just with the way these browsers execute scrollTo?
I found the same thing with my implementation.
works fine in Firefox, Chrome, Safari, but not in IE. I'm not sure if there is a solution out there, I didn't search very hard.
Hi
First thanks for a nice plugin
It works nicely clicking on a link as described in demo but
I am trying to use it as below:
user clicks on a radio button called input#oui
jQuery('input#oui').click(function(){
jQuery('div#reponse1').show('slow');
jQuery('div.postpop').show('slow');
jQuery('div#reponse2').hide('fast');
jQuery('div#reponse3').hide('fast');
});
As it hides quite a bit of text I would like to scroll smoothly back to top of page where I have div#site-top
Unfortunately I have just started learning jQuery and have not succeeded so far!
Thanks for help
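One way to wire the scroll into that click handler, sketched with the selectors from the snippet above (the 800ms duration and the small helper are my assumptions):

```javascript
// Pure helper: which elements to reveal and which to hide on "oui".
function toggleSets() {
  return {
    show: ['div#reponse1', 'div.postpop'],
    hide: ['div#reponse2', 'div#reponse3']
  };
}

// Browser wiring (assumes jQuery and the scrollTo plugin are loaded):
function initOuiScroll($) {
  $('input#oui').click(function () {
    var sets = toggleSets();
    $.each(sets.show, function (i, sel) { $(sel).show('slow'); });
    $.each(sets.hide, function (i, sel) { $(sel).hide('fast'); });
    $.scrollTo('#site-top', 800);  // smooth-scroll the window back to the top
  });
}
```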
Nice post, keep up the good work, I have subscribed to your blog
Celebrities in Avatar getup
Great plugin! Got it to work with prettyphoto plugin as a lightbox feature.
However, I'm having a difficult time trying to get it to scroll horizontal only. I set the $.localScroll.default.axis value to 'x'; but it will never scroll past the fifth picture.
Here' a sample of the code:
<div id="content">
  <div class="section">
    <ul class="gallery clearfix">
      <li class="sub" id="section1">
        <a href="images1/lrg1.jpg" rel="prettyPhoto[gallery1]" title="For more info about the products on this page Call us at 800.555.1212">
          <img src="images1/metro1.jpg" alt="Metro 1" />
        </a>
        <a>Next</a>
        <h2>Metro 1 ( 1 of 24 )</h2>
      </li>
    </ul>
  </div>
Thanks for a great plugin. Is there anything that can be done to make this work with Opera? A Simple scroll with jquery.scrollTo-1.4.2-min.js
jQuery(window).load(function() {
jQuery.scrollTo("a[name='something']", 100, {axis: "y", offset: -20});
});
does not do anything in Opera, but Chrome, Safari, IE and FF work well.
Hi Ariel and thank you for the good article.
I have some problems scrolling after .click. My div is hidden until click (tabs), but I would like the whole window to slide to the bottom when I click on #work (Arbeid picture). Is there any simple way to do this? I'm a newbie at jquery ;)
You can view my page here:
Thank you in advance.
Regards
Kristian
I have this working when I scroll the whole window but I cannot figure out why when I try to scroll a particular div I get nothing - have a look at my demo at - - the top 2 large buttons on the left, the first one is set to scroll a div (not working) and the 2nd to scroll the window (which works).
Can anyone please help me, I'm really struggling with it.
Thanks!
Hi there,
I am really getting very confused with this plugin. I can't seem to find the call in anybody's source code, so I can't really get any clear examples.
When I look at my own source code it shows the stuff that makes it tick.
Anyway, I want the scroll to stop before it reaches the end of the scrollbar... How do I do this? I know I should use the scrollTo script, but when I replace it in the code it just stops working... I am having a hard time understanding this script because I can't see anybody's code (why is that, anyway?)
Thanks in advance for the information :-)
Hi there, is there any way I can tell the scrollTo function to scroll to multiple div classes? I have a static navigation that's on every page, and then I have dynamic content, so every post that's generated has a different id... So I'm trying to just assign multiple div classes in the link so that it goes to whichever div is present on the page.
a1</a
Ariel, I'm loving the smooth scrolling that "scrollTo" allows, however, I'm having an issue with the scrollTo plugin on the iPhone:
The problem I'm having is that once an initial scrollTo() is called, the "page" is "locked" to that scrolled position (meaning, no matter how much I try to reposition the page, the release of a finger results in the page "snapping" back to the scrolled location).
I can't seem to duplicate this on another browser (desktop), so I'm wondering if it's a webkit problem on iPhone, but I really think it's a matter of the scrollTo not releasing the position that was scrolled-to.
Any ideas?
dear ariel,
very nice tool thanks. the demo page throws js warnings e.g.:
Warning: reference to undefined property b.onAfter
Source file:
Line: 11
Kept getting an error stating 'Cannot read property 'slice' of undefined'.
After tracking that down, seems it will throw the error if you request it to scroll to a selector that doesn't exist.
Problem resolved by replacing
if(b=="max")b=9E9;
with this
if(b=="max")b=9E9;else if(jQuery(b).length==0)return;
I am working on a project that is working great across browsers here (styles unfinished):
But when I add more links to the nav, it doesn't work in Safari:
Instead, it just 'jumps' to the div id, instead of scrolling nicely (works in FF though).
I updated the CSS and JS file.
Follow up: Actually, this () is working fine across browsers but it's not working locally in Safari - very strange.
Instead of the page scrolling nicely up and down, it jumps to each anchor.
Any idea why this would be happening?
An earlier version () works fine locally...
Why the heck doesn't this work in Webkit (Chrome, Safari) browsers:
$.localScroll();
or
$('#nav a').click(function() {$('body').scrollTo($(this).attr('href'), {duration : 750});
return false;})
but somehow this does work in safari/chrome:
$('body').scrollTo('#link1', {duration : 750});
Works fine in Firefox.
scrollTo and localscroll plugins are included of course.
Markup:
[ul id="nav"][li][a href="#link1"]link 1[/a] .. etc ...[/ul]
[div id="link1"]...[/div] ... etc ...
Please help!
Hi Ariel,
I'm trying to modify your great scrollTo scripts - I want to have text/html along with the image and so far it seems that doesn't work?
Any suggestions would be greatly appreciated.
regards,
Mark
mark.e.gould@gmail.com
I'm really confused...where do you specify the $(...).scrollTo( target, duration, settings ); parameters??
Hi Ariel,
1st THANK YOU for this totallytotallyamazing plug-in!!!!!!
Derek
Link:
3 JS functions in html HEAD:
function scrollToAbout(){
$(window).scrollTo( {top:'2073px', left:'0px'}, 800 );
}
function scrollToPortfolio(){
$(window).scrollTo( {top:'0px', left:'0px'}, 800 );
}
function scrollToContact(){
$(window).scrollTo( {top:'2779px', left:'0px'}, 800 );
}
Accessing these jquery libraries:
jquery.js
jquery.scrollTo-min.js
The AS3 code from the Flash AS3 buttons:
import flash.external.ExternalInterface;
about.buttonMode = true;
about.mouseChildren = false;
about.addEventListener(MouseEvent.MOUSE_OVER, aboutOver);
about.addEventListener(MouseEvent.MOUSE_OUT, aboutOut);
about.addEventListener(MouseEvent.CLICK, aboutClickHandler);
function aboutOver(event:MouseEvent)
{
about.gotoAndPlay("overAbout");
};
function aboutOut(event:MouseEvent)
{
about.gotoAndPlay("outAbout");
};
function aboutClickHandler(event:MouseEvent):void
{
ExternalInterface.call("scrollToAbout");
};
portfolio.buttonMode = true;
portfolio.mouseChildren = false;
portfolio.addEventListener(MouseEvent.MOUSE_OVER, portfolioOver);
portfolio.addEventListener(MouseEvent.MOUSE_OUT, portfolioOut);
portfolio.addEventListener(MouseEvent.CLICK, portfolioClickHandler);
function portfolioOver(event:MouseEvent)
{
portfolio.gotoAndPlay("overPortfolio");
};
function portfolioOut(event:MouseEvent)
{
portfolio.gotoAndPlay("outPortfolio");
};
function portfolioClickHandler(event:MouseEvent):void
{
ExternalInterface.call("scrollToPortfolio");
};
contact.buttonMode = true;
contact.mouseChildren = false;
contact.addEventListener(MouseEvent.MOUSE_OVER, contactOver);
contact.addEventListener(MouseEvent.MOUSE_OUT, contactOut);
contact.addEventListener(MouseEvent.CLICK, contactClickHandler);
function contactOver(event:MouseEvent)
{
contact.gotoAndPlay("overContact");
};
function contactOut(event:MouseEvent)
{
contact.gotoAndPlay("outContact");
};
function contactClickHandler(event:MouseEvent):void
{
ExternalInterface.call("scrollToContact");
};
Here are some changes I've made to the source code to make scrollTo work on my page.
144 ... if( toff ){// jQuery / DOMElement
145 --- attr[key] = toff[pos] + ( win ? 0 : old - $elem.offset()[pos] );
145 +++ attr[key] = (toff[pos] - toff[key]) + ( win ? 0 : old - $elem.offset()[pos] );
147 ... // If it's a dom element, reduce the margin
199 ... if( !$(elem).is('html,body') )
200 --- return elem[scroll] - $(elem)[Dim.toLowerCase()]();
200 +++ return elem[scroll] - parseInt($(elem)[Dim.toLowerCase()]());
202 ... var size = 'client' + Dim,
... line with no changes
--- old line
+++ new line
The plugin wouldn't work without these changes
I am using scrollto/localscroll with jquery tabs.
Actually I have a div where I add jQuery UI tabs, and when there are many tabs, I show arrows so the user can navigate through the different tabs.
I managed to do so, but the tabs lose their events, i.e. when you apply this plugin to the tabs, clicking each tab does nothing.
is there any link where i can find a demo or something?
I want an effect like in jQuery. Please help.
I am having trouble getting this to work properly. Please see the script on this page:
I realize that the script has scrollTo rather than $.scrollTo. Without the $ I can get the screen to jump to the proper position. It just does not tween properly.
Any advice?
Thanks.
Just finished working on jQuery scrollable tabs using your scrollTo plugin :)
I would love to use this... but I've been trying for a couple hours to get it to work and cannot. I'm a 10-year javascript veteran, but only a little experience with jQuery. Can you please post a simple & complete example? The "Demo" is completely bewildering. I just want to make two arrow buttons make a div scroll. Thanks!
RE-POSTED From the CodeProject:
Question:
Hi,!!!Confused
---------
Hi Richard,
This worked Thank You Again!!!
To sum it up: replacing the embedded JS code AC_RunActiveContent.js with swfobject.js eliminated the ActiveX error "Error on page." I was encountering with IE.
Of course this change opened another can of worms. The CSS that I had applied to the flash div container no longer worked e.g. position:"fixed" etc. To get around this problem I found a post by Vincent Polite on Google groups and followed his advice. Here it is and I quote:
"When the SWFObject is actually embedded it does replace the div named in
your code, and the css associated with that DOM node disappears.
The recommended approach to deal with this is to create a container div
around your SWFObject div and apply the styles there. Presuming you have
control over the naming convention and the css file, this shouldn't be hard
to resolve."
totallytotallyamazing
Posted 27 Apr '10 3:57 PM
Is it possible to set the initial position of the scroll area on load of the page? I want my scroll area to begin in the middle, basically, not at the top left.
I did it on our site. Check the code to see how.
Pieter
Hi Pieter
Thanks for the sample. One more question: I notice how yours moves after the page loads. Can it be set to move a certain number of seconds after the load?
<script type="text/javascript">
jQuery(window).load(function() {
jQuery('#loading').hide();
});
</script>
This shows the loading image (it's already visible) until the page loads, then hides it.
<script type="text/javascript">
window.onload = function() { $('div.scroller').scrollTo('div.Home', 3000, { easing: 'elasout' }); return false; }
</script>
This scrolls to the middle div after the page loads. The load takes time because my page is pretty heavy, but if you want to have a delay you can use the setTimeout() javascript method
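A sketch of the setTimeout() approach Pieter mentions; the 2000ms delay is an arbitrary assumption, and the selectors are taken from his snippet:

```javascript
// Assumed delay before the initial scroll fires, in milliseconds.
var SCROLL_DELAY_MS = 2000;

// Pure helper, kept separate so the constant can be checked.
function delayMs() { return SCROLL_DELAY_MS; }

// Browser wiring (assumes jQuery and the scrollTo plugin are loaded):
function initDelayedScroll($) {
  $(window).load(function () {
    setTimeout(function () {
      $('div.scroller').scrollTo('div.Home', 3000, { easing: 'elasout' });
    }, delayMs());  // wait a fixed time after load instead of relying on page weight
  });
}
```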
To stop the scroll when some other link is clicked, edit the function "animate" into this:
function animate( callback ){
$elem.stop().animate( attr, duration, settings.easing, callback && function(){
callback.call(this, target, settings);
});
};
(sorry for my bad english :) )
I'm trying to use ScrollTo to scroll to the open pane contents in an accordion.
My divs are class "pane" but I'm not sure how to invoke scrollto to scroll the window down to an open pane..?
Any help appreciated!
BTW, the div.pane has a style switch to make it active done thru jQuery tools, so I tried this:
$('div.pane{style="display: block;"}').scrollTo('max');
But I'm pretty new to all this and not at all sure of the syntax for selecting a div of class pane with style="display: block" .. which is what shows in Firebug when inspecting the currently open paragraph in the currently active div.
In jQuery tools accordions, a header precedes the div of class pane, which includes the paragraph of content.
The header switches from class="" to class="" when clicked and the paragraph within the div switches to the style setting above from style="display:none".
Hope that's clear. I'm lost in the selector language to get down to that active paragraph and scroll the window to it.
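The selector being reached for here is jQuery's `:visible` filter rather than a `{style=...}` block; a sketch (the class name comes from the comment above, the duration is assumed):

```javascript
// Pure helper: the selector for the currently open accordion pane.
function visiblePaneSelector() {
  return 'div.pane:visible';  // jQuery filter for elements not display:none
}

// Browser usage (assumes jQuery and the scrollTo plugin are loaded):
function scrollToOpenPane($) {
  var $pane = $(visiblePaneSelector()).first();
  if ($pane.length) {
    $.scrollTo($pane, 600);  // scroll the window to the open pane
  }
}
```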
Hi,
Thanks for sharing your script for free.
Can I ask you if ScrollTo is able to achieve a scroll effect found on Apple's site ()
The effect on Apple's site is hard to explain, but basically it doesn't scroll through the whole page, but it skips part of the page. There is less animation and it is less disorienting (especially on very long pages).
What do you think?
Hi,
Thanks so much for the great work. It works fine. However i need to scroll two things at a time.
1. Banners (total 5)
2. Numbers (i.e. 1/5 , 2/5 .. 5/5)
As banners scroll i also need to scroll or increase the respective numbers.
Is it possible? Also i am not able to control the "wait time / scroll time" between two scrolls.
Thanks,
Kush
There is a JS error if the target of scrollTo is a jQuery object with zero elements. Please fix. Example:
$('#my_element').scrollTo('#some_inexisting_element');
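Until the plugin guards against this itself, a defensive wrapper is one workaround; a sketch (the wrapper name and return values are mine):

```javascript
// Only call scrollTo when the target actually matches something,
// avoiding the error on empty jQuery objects described above.
function safeScrollTo($container, targetSelector, settings) {
  var $target = $container.find(targetSelector);
  if ($target.length === 0) return false;   // nothing to scroll to
  $container.scrollTo($target, settings);
  return true;
}
```

Usage would be `safeScrollTo($('#my_element'), '#some_inexisting_element')`, which simply returns false instead of throwing.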
Hi. I'm very new to jQuery and js.
I have the scrollTo plugin working on a large block of text (all text is visible, no overflow settings). I noticed that if the text size is increased/decreased (in any browser) the anchors are no longer precisely located. Is this a known issue or is there a fix?
Thanks,
Todd
I'm using this script as part of another project currently using jquery 1.3.2. When I replace that with jquery 1.4.2 (current release), scrollto 1.4.2 stops working, without any errors.
Ariel - Many thanks for your great work on the 'scrollTo' to plug-in. Using it as an alternative to Microsoft's 'scrollIntoView', was able to scroll GridView rows smoothly and limit screen changes to the GridView itself!!
Here's some tips for using jQuery and your 'scrollTo' plug-in in a Visual Studio 2008 (SP1)/ASP.NET 3.5 AJAX-enabled project which uses 'master pages':
1.) In the 'master' page file (*.master) place the *.js file references in element: head before the first 'ContentPlaceHolder'
(Sorry, your comments parser does not allow me to attach a code sample here..)
2.) When referencing 'document' elements in Javascript, use the Microsoft 'ClientIDs'. For example, given a GridView (grdviewSearchResults) within a
scrollable div (scrollableDiv), here's one means to 'scrollTo' to a specific row in the GridView:
- Iterate through the GridView rows, maintaining an index: ii to the current row
- Upon finding the 'to be scrolled to' row do the following
var row = jQuery("#<%=grdviewSearchResults.ClientID%> tr").eq(ii); //Select the GridView row
jQuery("#<%=scrollableDiv.ClientID%>").scrollTo(row); //Scroll the div to the selected row
- Please note: to make things easier to read, I prefer to use 'jQuery' rather than the rather cryptic '$'. Also, ASP.NET appears happier with double quotes in the parameters rather than the single quotes found in many published jQuery examples.
Works like a charm!! Thanks again.
Hi Ariel, thanks for this plugin, I really love it; I was wondering if it's possible to pass a variable instead of a number; something like this:
$('.triangolo_top') .click(function () {
var numero= +$(this).attr('title')+1;
$(window).scrollTo ('li:eq(numero)', { duration:500, axis:'y'})
})
In this specific way it doesn't work; I've tried so many times to get this script to work without success; you can have a look at a reference page at:
thank you!!
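The likely culprit is that `'li:eq(numero)'` is a literal string, so jQuery looks for the text "numero" instead of the variable's value; concatenating the variable into the selector fixes it. A sketch based on the snippet above:

```javascript
// Pure helper: build an :eq() selector from a numeric index.
function nthItemSelector(n) {
  return 'li:eq(' + n + ')';
}

// Browser wiring (assumes jQuery and the scrollTo plugin are loaded):
function initTriangolo($) {
  $('.triangolo_top').click(function () {
    var numero = +$(this).attr('title') + 1;  // attr() returns a string, so coerce first
    $(window).scrollTo(nthItemSelector(numero), { duration: 500, axis: 'y' });
  });
}
```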
This looks pretty good. I have a question. I want to use LocalScroll on a page that also uses a jQuery-UI Accordion. If the user clicks a local anchor, two things need to be done:
#1: The target may reside in another section of the accordion so that must be activated.
#2. The page must scroll to the anchors target.
Could I use the onBefore function to open the correct section of the accordion, or is there some other way to achieve this?
Thanks for your input,
Fred!
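onBefore does look like the right hook for Fred's case; a sketch under several assumptions (the '#accordion' id, the section-id list, and the jQuery UI 1.x 'activate' method call are all mine, not from the plugin docs):

```javascript
// Pure helper: map a target's id to an accordion section index.
function sectionIndexOf(targetId, sectionIds) {
  for (var i = 0; i < sectionIds.length; i++) {
    if (sectionIds[i] === targetId) return i;
  }
  return -1;  // target not in any known section
}

// Browser wiring (assumes jQuery, jQuery UI accordion, and localScroll):
function initAccordionScroll($) {
  $.localScroll({
    onBefore: function (e, anchor, $target) {
      // Open the section holding the target, then let localScroll scroll.
      var idx = sectionIndexOf($target.attr('id'), ['intro', 'details', 'faq']);
      if (idx >= 0) $('#accordion').accordion('activate', idx);
    }
  });
}
```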
Having a blast with the scroll.to and local.scroll-- thanks for the great scripts! I am learning a lot from both.
I have four large screen sized divs, 2 on top, 2 on the bottom, in a wrap creating a single page website with horiz/vert/diag. scroll. Each div is a linked 'page'.
My issue is when a horizontal scroll is triggered from the floating nav menu, originating from a bottom div, I get a quick flash/flicker of the top div contents just above before it scrolls to the correct div. It doesn't happen when I scroll from a top div.
ex.
12
34
When scrolling from 3 or 4 to anywhere else, 3 will quickly flash the contents of 1 before scrolling, 4 shows the contents of 2, but 1 and 2 are fine.
Horrid little representation, I'm sorry-- your help is greatly/desperately appreciated!
Great script and tutorial!
Quick question. I am using this tool for some horizontal scrolling.
Works great, but when jumping to a particular anchor, it always aligns to the very left border of the browser.
Can't seem to get things to stop at a spot more towards the center of the container.
Please let me know if you have any advice.
Thanks much!
Great plugin, it's very intuitive.
One problem I'm having though. I'm trying to scroll.to a hash and it works great when the browser is larger than 1000px wide (which is the width of the site). However when I'm down to a smaller width the scroll.to moves the content to the left instead of scrolling in a straight line. As if the x coordinate of the div changes.
Thoughts?
Hi and thanks for this very handy bit of code. It's great. I have used it successfully so far, but I have just added a jQuery accordion and can't get them to work together on the same page. I can get one or the other to work on its own, but not both. Does anyone know how to get around this and use both scrollTo and accordion on the same page?
hey flesler, I've been having browser problems.. I'm very new to this stuff and I'm no programmer by any means. Everything works perfect in Firefox, however in Safari, every link that is called by link isn't working at all. What do you think could be the problem?
Here's the link concerning my question above. I forgot to attach it.
Hi, I've been looking for something like this for ages now and am thrilled to have found this place, but I can't seem to get scrollTo to work.
If I'm not wrong, it works as a click-and-scroll thing. The code I tried was this:
$('#link_div').click(function(){
$.scrollTo('#target_div');
});
This doesn't seem to work though. I'd really appreciate any help you can give me.
And as for LocalScroll, it seems to work fine in Firefox but does nothing in Safari. Is there a way around this? Thanks.
This plugin is awesome, but it needs some updates. Currently it doesn't work with jQuery 1.4.2. As soon as I upgraded jQuery, it stopped working with the error message "val is undefined" around line 158. Adding the statement
if (typeof val !== "undefined") {
}
like the post on suggests doesn't work; it just bypasses the error and the scroll still doesn't happen.
Using Firefox 3.6.8 on mac os 10.6.3
Nice code! I've been able to scroll to my selected divs. However, I would like the page to scroll but offset the div vertically 200 pixels from the top of the window. I've been going over the examples, but I'm a little confused about how I should change the above code to make the offset work.
Can you help me?
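scrollTo's settings include an offset option, which looks like the right tool for leaving space above the target; a sketch (the selector and duration are assumptions):

```javascript
// Pure helper: an offset object leaving `margin` px of space above the target.
function topOffsetFor(margin) {
  return { top: -margin, left: 0 };  // negative top stops short of the target
}

// Browser wiring (assumes jQuery and the scrollTo plugin are loaded):
function initOffsetScroll($) {
  $('a.nav').click(function () {
    $.scrollTo($(this).attr('href'), 600, { offset: topOffsetFor(200) });
    return false;
  });
}
```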
Can anyone post the basic codes for a simple scroll consisting of a link and a local same-page target? Just so I get how this plugin actually functions. I'd really appreciate it.
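A bare-bones same-page example, sketched here; the ids, the 800ms duration, and the helper are arbitrary choices of mine (assumes jQuery and jquery.scrollTo are loaded):

```javascript
// Markup assumed:
//   <a id="go" href="#target">scroll</a>
//   ... lots of content ...
//   <div id="target">destination</div>

// Pure helper: pull the "#target" part out of a link's href.
function destFromHash(href) {
  var i = href.indexOf('#');
  return i >= 0 ? href.substring(i) : null;
}

// Browser wiring:
function initSimpleScroll($) {
  $('#go').click(function () {
    $.scrollTo(destFromHash(this.href), 800);  // smooth 800ms scroll
    return false;                              // suppress the default jump
  });
}
```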
Sorry I'm posting so much, but I just wanted to say that I've gotten LocalScroll to work in Safari after updating jQ to 1.4.2. ScrollTo on the other hand, still isn't working for me :[
Shouldn't I be able to scroll an image around?
DEMO
AND.. rlic
could you post your sample?
I seem to be having a small problem with ie. (works in opera)
the call is:
$("#resultscontent").scrollTo(myanchor, 800);
which is part of a load function
$(output).load(thishrefcontent, function () {$("#resultscontent").scrollTo(myanchor, 800);});
when myanchor is a number it works. But if myanchor is an anchor, like
myanchor = 'div#longphilosophy';
(where div#longphilosophy is a place in thishrefcontent..)
in ie it goes to the top, not to the anchor.
I know you say something in your troubleshooting section about making divs fixed, but it wasn't quite clear what you meant. Could you explain, if it pertains...
Hi, thanks for the short example. After searching on so many jquery plugins, this one is the real help for me.
Hi,
I am using the following script for a vertically/horizontally sliding website which is based on scrollTo. My problem is that I just don't know how and where to add the hash:true feature. I tried to add it exactly where the queue:true is, but it is not working. Any idea what I can do to see the hash link in the browser's address field? Thanks for any help.
Hardy
$(document).ready(function() {
//get all link with class panel
$('a.menu-frame-button').click(function () {
//reset and highlight the clicked link
$('a.menu-frame-button').removeClass('selected');
$(this).addClass('selected');
//grab the current item, to be used in resize function
current = $(this);
//scroll it to the destination
$('#wrapper').scrollTo($(this).attr('href'),{queue:true,duration:1500});
//cancel the link default behavior
return false;
});
//resize all the items according to the new browser size
$(window).resize(function () {
//call the resizePanel function
resizePanel();
});
});
function resizePanel() {
//get the browser width and height
width = $(window).width();
height = $(window).height();
//get the mask width: width * total of items
mask_width = width * $('.item').length;
//set the dimension
$('#wrapper, .item').css({width: width, height: height});
$('#mask').css({width: mask_width, height: height});
//if the item is displayed incorrectly, set it to the correct pos
$('#wrapper').scrollTo($('a.selected').attr('href'), 0);
}
Hi Ariel,
First off, great work on scrollTo. It's been a pleasure to use.
I making some minor fixes to my current scrollTo implementation and I came across a bit of a bug. I was trying to get the $.scrollTo.max() of my container element and it was only returning "NaN". I looked into the code and found that on line 200, it was trying to return elem[scrollHeight(or Width)], which wasn't supported by the element I was sending to the function.
I fixed this by changing "elem[scroll]" to "$(elem).attr(scroll)". This works perfectly for me in Firefox (haven't tested in IE), but I would assume that using jQuery to target the scrollHeight would work out better all-around.
Again, Thanks for your quality work.
~Hyszczak ]dot[ net
Dear Ariel,
I have used your local scroll script (simply took the example at) on a webpage of mine.
All works perfect and in all browsers (well, did not try older than IE8 or Firefox 3, nor Opera), on both Win and Mac.
I can tweak speed, x and y, etc. However, I am stuck with two problems: after one or two scrolls, clicking the browser 'back' button does change which #foo is displayed in the url bar, but it does not actually scroll back to that #foo; basically my back (and forward) buttons are useless. It would be desirable to have them work as well. Help..
Second problem: I want to implement some scrollTo functions, mainly some easing at the end of a scroll, but wherever and however I try to modify my init.js, no go, either the whole scroll effect disappears or the links stop functioning altogether.
Obviously, this is due to my inexperience with this script..
First of all, many-many thanks for taking the time to put this together and share! I greatly appreciate it!
As some of the others, I'm also facing difficulties in making this work with Opera 10.62. Simply nothing happens when I click on the links that would scroll the content. It works fine in other browsers, even with IE6, though. Is there any solution for this?
Thanks a lot!
Hi Ariel,
I am having trouble getting scrollTo to work properly when the scrolling object's margin is changed dynamically.
Initially I thought the offset parameter could be used to handle this use case, but it doesn't seem to do what I expected. Any thoughts?
Hi Ariel, i have a problem with IE7.
It gives the following error:
'f' is null or not an object Line: 11 Char: 1343 Code: 0 URI:
As you can see, it's an error within the ScrollTo.js. It makes IE7 go crazy. How can I fix this?
Thanks!!
Hi, it seems ScrollTo or LocalScroll has problems with jQuery 1.4.3 in IE (tested with version 8). The DIV container doesn't scroll any longer - instead it jumps to its position.
Great plugin. I've modified your plugin to support dynamic offsets (needed for a mobile app I'm building -- offsets need to change when orientation changes):
Hey there, I wanted to know why the plugin is currently making me click the link twice in order for it to scroll. It only happens when you first access the site. It loads a hash; then once you click it again, it scrolls.
This is really weird. I've used the same script on a regular vertical scrolling website and it works fine with one click. Your help would be appreciated.
If I have this:
$('.nav-next').click(function() {
$('.thumb-list-container').scrollTo( $('ul.thumb-list li').next(), '800', { axis:'x'});
});
It will only scroll to the next li on the first click. Shouldn't it do it on each click?
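One likely cause (an assumption, not confirmed from the plugin): $('ul.thumb-list li').next() is re-evaluated against the same static list on every click, so it always resolves to the same element. A hedged sketch that tracks an index instead:

```javascript
// Hedged sketch: keep a counter so each click targets the following <li>.
// Selectors mirror the commenter's markup; the wiring to .scrollTo() is
// commented out because it needs jQuery + the plugin in a browser.
function nextIndex(current, total) {
  // Stop at the last item instead of wrapping (a design choice, not
  // something the original code specified).
  return Math.min(current + 1, total - 1);
}

// In the click handler one would do something like:
//   var current = 0;
//   $('.nav-next').click(function () {
//     current = nextIndex(current, $('ul.thumb-list li').length);
//     $('.thumb-list-container').scrollTo(
//       $('ul.thumb-list li').eq(current), 800, { axis: 'x' });
//   });

console.log(nextIndex(0, 5)); // 1
console.log(nextIndex(4, 5)); // 4 (clamped at the end)
```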
Hello Ariel Flesler! I am glad to contact you. I am trying to fix a website which uses your plugin, and I would appreciate any help. I have made a demo to explain the "problem".
The demo page is this:
as you can see, when the scroll is on the last part (li) it returns violently to the first part. The corresponding code for this is in function auto_slide(); the function is the following:
function auto_slide() {
var current = $('.banner-nav li.active');
var next = current.next();
if (!next || !next.length)
next = $('.banner-nav li:first');
slide(next.find('a').get(0));
if (auto_slide_toggle)
setTimeout(auto_slide, slide_interval);
}
the "next = $('.banner-nav li:first')" returns on the first part. But how can I make it to return without showing the intermediate parts?
maybe redundant - (sloppy) replacement for animate() to leverage jQuery's queue key to avoid its effects queue:
function animate( callback ){
var options = {
duration: duration,
easing: settings.easing,
complete: callback && function(){
callback.call(this, target, settings);
},
queue: false
};
$elem.animate( attr, options );
};
Exactly what I was looking for, thank you very much for the article. A memorable example of text. Thanks for taking the time to post such a detailed and informative article.
Hi, great plugin!
I want to use
$('#the_div').scrollTo('50%',900, {axis:'x'});
but where do I put that bit of code??
Oliver B
Ariel - This is indeed some nice work and I'd really like to use it for a project that I'm working on, but I fear that you have fallen victim to a disease that seems to plague more and more developers these days: Releasing Code With No Working Examples.
You've obviously taken great care to provide extended demos showcasing the power of this plug in, but without some barebones, practical examples, developers like me (who are admittedly more than a bit obtuse when learning new concepts upon occasion) have a hard time using it to do even the simplest implementation. Now obviously there are plenty of people who have been able to use it right out of the box without any included examples, but I can guarantee you that there have been plenty of others who have checked it out and tried to use it themselves but eventually gave up out of frustration. A lot of them may not have voiced it via comments for various reasons, but they're out there, probably looking for an alternative plug in that's easier to get up and running.
Please don't take this personally, because you're not alone. This problem is actually fairly wide-spread and it's time that someone made an issue out of it so that developers will stop the madness. Please, people, for the love of God... start putting example files in your code releases!
Thanks for listening. Now I need to take my meds before the orderlies show up and force feed them to me again...
Hey,
I have a question: is it possible to have a scrollTo link on one page linking to a specific position on another page?
For example, on page A I want to have a link that when clicked takes the user to page B and scrolls down to X amount of pixels.
Kindof like but instead of going to the #about we go to a specific pixel "depth" on that page.
Any help appreciated, cheers!
I had the same problem "Carl M. Gregory" had. I don't know how the hell he unpacked the minified version.
In IE8 I was getting "Line: 161
Error: 'slice' is null or not an object".
The problem is that when I called scrollTo, the target (first parameter) had no items (zero length). I had to adjust my original script; scrollTo is fine in this instance. The error is unclear. Maybe Ariel would be kind enough to throw an error when the target is 'empty' rather than continuing normally.
Hi Ariel,
your plugin has inspired us to build this experimental web site: - it works great and powers all of the scrolling behaviour :)
I also get the jittery scrolling when scrolling diagonally in IE7 (and Safari on Windows), but use the 'queue' option in these browser versions. I assume that it is rather an implementation issue in this browser/OS combination and not a problem with the plugin or CSS styles.
Thank you very much for your work!
Markus
Is there a way to use a selector + a number of pixels? I need it to scroll a little farther.
For example:
$.scrollTo('#id +300px', 800);
Hi Ariel,
I can't get the ease (or any other animation) to work with ScrollTo. This is my code:
$('a[href=#about]').click(function(){
$('html, body').animate({scrollTop:1000}, 'fast', 'easeout');
return false;
});
I am using the jquery ease 1.3 lib. thanks so much!
Elaine
Hi Ariel
Great script works a treat...but... It's causing page load flicker for me in IE. I use <meta http-equiv="Site-Exit" content="BlendTrans(Duration=.1)" to normally solve this issue, but adding your plugin stops it working. Any ideas?
Kind Regards
Ian
hi, it works but my page flickers at the top left hand corner of the window before scrolling? any ideas?
I am having an issue installing scrollTo properly. I've spent over 4 hours on it and can't figure it out...
Here is the page I am working on
It scrolls to the proper div, but I couldn't get the localScroll url hash to work properly and most importantly, when you click the links too fast the animation messes up and waits to finish before starting the next click animation. I tried implementing a stop() on the click function but it didn't work. Please help. Much appreciated.
Hi,
Great plugin, thanks!
I don't see any way to make it stop at the end of the first iteration. Is there any way to do that? I would prefer it did not start again automatically.
Thanks
Jon
How can I highlight the link, but remove the highlight if a user scrolls away from the target?
I've read Hardy's comment from September 27. He applies a style when the link is clicked, but if a user then scrolls away, the link would still be highlighted.
Here is a (non js) demo of what I'm trying to do:
The links are absolutely positioned and so stay visible.
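A hedged sketch of one way to do this: on every scroll event, find the section nearest the top of the viewport and move the highlight class there. The pure selection logic is separated from the jQuery wiring; the class and selector names are placeholders, not from the original site:

```javascript
// Given each section's top offset and the current scroll position, pick the
// index of the section currently in view (the last one whose top is at or
// above the scroll position).
function activeSection(sectionTops, scrollTop) {
  var active = 0;
  for (var i = 0; i < sectionTops.length; i++) {
    if (sectionTops[i] <= scrollTop) active = i;
  }
  return active;
}

// Browser wiring (assumes jQuery; 'selected' matches the class used in the
// September 27 comment):
//   $(window).scroll(function () {
//     var tops = $('.section').map(function () {
//       return $(this).offset().top;
//     }).get();
//     var i = activeSection(tops, $(window).scrollTop() + 1);
//     $('a.menu-frame-button').removeClass('selected').eq(i).addClass('selected');
//   });

console.log(activeSection([0, 600, 1200], 700)); // 1
```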
I am using the script. One problem: I have links 1-8. It shows 1 first. What if I want it to be centered and show, let's say, 4 first? So when you click 1 it will slide left; when you click 7 it will slide right.
can you help?
1 2 3 [4] 5 6 7 8
Anyone know how to use a mask to hide the overflow/horizontal scroll bar? The tutorial will work great for the site I'm working on, but I only want one box showing at any given time. I've seen tutorials that use a mask div that is smaller than the container so the extra boxes are hidden, but it doesn't seem to work with the horizontal scroll tutorial.
Hi Ariel - your plugin is fantastic, most notably for me because of the xy scrolling.
I am using the xy scrolling for the photography section of my company's website () and want to decrease the load time of that page by using Ajax. I am only days away from formally launching the site to thousands via email, and still need to load more photo galleries. The http requests are already high on that page and so I found this page:
Before I work to implement that, I want to ask whether something is possible. Your demo above uses ajax, but loading a new section simply fades it in (with the section then having the scroll in the x direction). As you can see on my site, I'm scrolling category sub-panels in the x-direction, with whole categories themselves scrolling in the y-direction. Is it possible to load the categories via ajax, and retain the scroll? As an example, if you visited my site above, you would get the entire 'fine-art' section loaded into the page. Clicking 'commercial' would then load via ajax that category's content, and once loaded the panel would scroll down to the content (which could then be scrolled left/right to see the three commercial sub-panels).
I hope this makes sense. Ultimately I will be adding a fourth category, and I'd love the overall categories to be loaded via ajax while retaining the xy scroll rather than a tabbed/fade effect.
(As a sidenote, regardless of whether the above can work, I'll be building a credits page under my About section, listing developers/links that help the Focus97 site to work the way it does. Glad to give you credit, as your plugins are fantastic and your support has shown to be great.)
we have been using localscroll 1.2.7 with ScrollTo 1.4.1 on a site for a couple of years. But suddenly it does not work on IE 7 or 8 (it works in IE 9, though). The error is "A security problem occurred". We are using it with jQuery 1.3.2. Any idea how that could happen?
FYI, I changed line 185 of the jquery.scrollTo-1.4.2.js file to:
$elem.stop(true,false).animate( attr, duration, settings.easing, callback && function(){
-- adding the .stop(true,false) -- which seems to fix a problem I was having. With the default 1.4.2 file, I couldn't manually scroll immediately after the scrollTo function was finished.. it would flicker and come back to where scrollTo landed, then after about 10 to 15 seconds it would let me scroll as normal. I also tried to use the onAfter and call alert() with the default to try to see if that was the problem; I received about 20 alerts, then I was able to scroll. So I thought for some reason it was animating when it should have stopped long ago.. weird problem. I didn't see anyone else mention it, so maybe it's something to do with the jQuery version I'm using: 1.4.4
Just in case someone else runs into this issue, otherwise great plugin. Thank you!
Hey,
Will someone please post a very simple example? I really don't know how to make this work and just pieces of code really don't help me at all...please help....
This is a really nice script. I am building a site where I want to use both the ScrollTo script and JPlayer. JPlayer uses jquery.min.js to function properly. However that scripts seems to cancel out the ScrollTo script. I noticed that when I took out jquery.min.js ScrollTo works fine but then JPlayer does not work.
Is there a way to have ScrollTo work in conjunction with this jquery.min.js script?
Thanks.
Hi all. I am building my portfolio site based off of this script. It's amazing, but has anyone had a problem where clicking on the anchored link pops you to the anchored section, but then the scroll begins to work. The anchored section flashes and then the window begins scrolling over it. Has anyone gotten rid of this problem if they have had it? I'm on a mac using chrome and firefox, it does it on both browsers.
Hi Ariel
Thanks for sharing your amazing work!
I know HTML and CSS quite well but don't know enough JavaScript, so I often resort to implementing jQuery plugins by trial and error, which can be a bit frustrating.
I'm having trouble implementing your ScrollTo script. Your demo page is very complicated for a non-JavaScript expert. I've spent a week going through your pages/demo's and it's been quite a struggle understanding how it all works.
My design is all on one large page. Initially you see the top left "home" portion of the page. The rest of the content is off screen. There are a bunch of links in the "home" area. When you click on them the page scrolls to specific DIVs. It all works fine, except I want to separate the movement so that when you go to a DIV it travels first x-axis, then y-axis, but when you click a link to return to the "home" position, it travels y-axis then x-axis.
I see you have a lot of requests and comments here, so it would be unfair to expect a personal response. Maybe you don't have time to be so specific (which I totally understand) but for future users it would be amazing if you could break the implementation steps down. eg.:
1: Link these JavaScript files in your header.
2: This is what a typical link looks like.
3: In the link, this is where and how you specify the target that you want to scroll to.
4: This is where you insert the various options for the target (DOM element, absolute position etc)
5: This is where you specify the "scrolling style." (Easing, queue, duration etc)
I thank you anyway for this fantastic script.
S
I decided to upload a test page. Please ignore all the messy CSS, it's the stripped down version of a work in progress:
Hey Ariel, thanks for the wonderful script.
I have one question though, is there any way I can have more than 5 items to scroll horizontally?
Every time I add a sixth item the line breaks in two. Probably because of that item width:20%. But if I change that it messes up the whole page, please check:
Thanks a lot!
Ariel. Thanks for sharing your wizardry! Have you or anybody else figured out how to have all scroll areas print to a single page? Any printing hints would be appreciated! Thanks!
I'm looking at the code on your demo page and you have an external JS file called init? Is that required for the scrolling? I tried just the external scrollTo-min.js file with a link that looks like and it's not scrolling at all. Then I tried implementing what you have in the title section of your links, and that didn't work either.
Thanks for this great plugin!
One question: on chrome, when scrolling the whole window:
$.scrollTo("#mydiv")
The window first jumps to the top of the page, and then starts scrolling down to the element. This happens regardless of my position on the page.
Any ideas?
Hi!
Thankyou very much for sharing your work, Ariel.
I've been using this plugin for several years now. I just want to report that it has stopped working properly in some recent version of Chrome (I'm now in version 10.0.648.204): the scrolling effect always starts from the top of the page now when using $.(...) (i.e., when isWin===false), even if you are in the middle of it. And thus there's no scrolling when moving to the top: it's an instantaneous shift.
Looks like a Chrome bug to me, but my understanding of this kind of thing is limited...
Does this happen to anyone else? I have detected it in my two websites.
OK: I reply to myself. I found that the same version of Chrome was playing nice with the plugin on my workplace's PC. I immediately thought of the last installed extension, and voilà. Turns out that the culprit was "Screen Capture (by Google)". Somehow it messes with the document node. Or so.
Best regards.
Have you heard of any issues with scrollTo in IE9 or Chrome 4. I got it working just the way I wanted then tried it in both IE9 and Chrome 4. Neither scroll the content. I can hit the page from IE8 or IE7 and it works fine. Any ideas?
Ariel,
How would you use the localScroll to scroll through an iframe that is placed inside of the main page. (but i'm using controls(buttons) from the main page to control the scrolling in the iframe)
Thanks,
Hi Ariel,
First, this is a wonderful plug-in that you have created. I am just starting out with jQuery and JavaScript in general; I know the basics. So forgive this silly question.
When using your plug-in it seems to break whenever the window is resized. I can see that "resizePanel();" is called; however, this function returns undefined, which breaks the layout of the site. Can you please point me in the right direction?
regards
Eish
Thank you for the plugin. But, I tried to use this plugin to scroll down the page using
jquery.scrollTo('max', { duration: 1000 });
It scrolls to the end of the window fine. When the window scroll bar is used to scroll it up again, it tries to stay back at the position to which it was scrolled, resulting in a jerk. Also, upon page refresh, it goes back to the scrolled position. Is there a problem with the way I am using it?
Please suggest.
Hi, I need help with my project.
I'm creating a menu using this plugin. Should the 'viewArea' be set to 'overflow: hidden'? Can it be set to 'overflow: show'?
My 'panels' are menuHome and menuAbout while my 'viewArea' is menuIntersection. When a panel is selected, it should scroll to inside the viewArea. Thanks in advance.
Here's the code.
*I just added '-' because the form says it won't accept the DIV tag.
HTML:
<-div
<-div
<-/div>
<-div
about
<-/div>
<-/div>
CSS:
div#menuIntersection{
width: 90px;
height: 90px;
position: absolute;
border: 1px solid #F00;
margin-top: 51px;
margin-left: 56px;
overflow: scroll;
}
div#menuHome{
position: absolute;
width: 90px;
height: 90px;
}
div#menuAbout{
position: absolute;
width: 90px;
height: 90px;
margin-left: 90px;
}
This is the right plugin that I am looking for. Thanks for the great plugin :)
Hi,
How can i detect that the scroll is at his max position?
I have 2 buttons "forward" en "backward" and i want to disable them
when the scroll is at 0 or max (depends on the button).
Did someone has any idea?
Thanks.
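A hedged sketch using the $.scrollTo.max() utility mentioned earlier in this thread: compare the container's current scroll position against 0 and the maximum, and toggle the buttons accordingly (the button and container selectors are placeholders):

```javascript
// Pure helper: which buttons should be enabled for a given position?
function edgeState(pos, max) {
  return {
    backward: pos > 0, // disabled at the very start
    forward: pos < max // disabled at the very end
  };
}

// Browser wiring (assumes jQuery + scrollTo; $.scrollTo.max(elem, axis) is
// the utility a commenter above patched):
//   var $box = $('#container');
//   function refreshButtons() {
//     var state = edgeState($box.scrollLeft(), $.scrollTo.max($box[0], 'x'));
//     $('#backward').prop('disabled', !state.backward);
//     $('#forward').prop('disabled', !state.forward);
//   }

console.log(edgeState(0, 500).backward);  // false
console.log(edgeState(500, 500).forward); // false
```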
Hello, this plugin looks really nice, but how to set it up?
HTML:
...
<tr>
  <td>
    <div id="top_banner">
      <ul class="elements">
        <li><img></li>
        <li><img></li>
      </ul>
    </div>
  </td>
</tr>
...
CSS:
/* Scrolling top banner*/
#top_banner
{
width: 967px;
overflow: hidden;
position: relative;
}
#top_banner ul.elements
{
width: 2000px;
}
ul.elements li
{
width: 900px;
height: 50px;
float: left;
margin: 0px;
padding: 0px;
list-style: none;
}
JS:
$("ul.elements").scrollTo("+=500");
It doesn't do anything, and I just can't find out why; it looks to me like I'm missing something :D...
Could you please help me? :)
Thanks
Hi, I just implemented your script and it goes really well, thanks.
I used the mask div to hide some content. The only problem that I have is that I can't put the Google Analytics script just above the body tag, because it always appears inside the main (hero-slider) div.
How can I fix this?
Thanks a lot for your time and for your script.
Regards
Hello, first thank you for sharing this plugin and making it available, wonderful. I wonder if you can enable buttons whose clicks move the content back or forward. I read all the comments and found no reference to it. Thanks in advance for your attention.
I'm trying to specify the target parameter using a variable. Is this possible? I get "val is undefined" on line 161 of toScroll.js. Thanks!
var scrollToTarget = '#' + (appSections[sectionToLoad].htmlPage).slice(0, appSections[sectionToLoad].htmlPage.indexOf(".html"));
// scrollToTarget = #mysection
$('#content_container').scrollTo(
$(scrollToTarget),
1500,
{axis:'x'}
);
Try this:
$('html,body').animate({scrollTop: 500}, 1000, "easeOutBounce");
It does the same thing as this plugin does.
First off, I've been using this plugin for quite a while now and have always loved it! Looking through the comments I was unable to find an answer to my question, so hopefully this is not a duplicate.
I'm using the $(...).scrollTo() in a standard implementation. I noticed that when you try to interrupt the scrolling effect it jitters until the duration is complete.
I believe it has always done this, however, I'd like to get rid of this minor nuisance. With that, I was wondering if the community or yourself has a solution for this?
I was thinking I could write a handler around the arrow keys and the mouse wheel and then .stop() the animation. I thought I'd ask around to see if there were any other ideas.
Thanks in advance, and thank you for a wonderful plugin!
Hello,
Really love the plugin, works great! I have one problem though: when scrolling to a DOM element other than the top, the scroll jumps/jitters slightly before it animates. Here is an example of what I mean:
Any way of fixing this?
The code i have is:
$(document).ready(function () {
$('li.homeb').click(function() {
$('body,html').scrollTo('#homeinfo', 800, {offset:-180});
});
});
I had a "jump effect" on safarai and firefox when I click on one of my element. Do you know where is this coming from ?,
I must say this is a great plugin. But can I invoke this plugin on a page load event instead of onClick? My purpose requires that this plugin be called from inside jQuery(document).ready(function(){...}). Is this doable?
Hi Ariel -
You have really created a wonderful piece of work. I'm a beginning js/jQuery developer and am thrilled by the versatility of your code.
I have a site I'm working on which needs custom scrollbars. I've included the plugin by Kelvin Luck, jScrollPane (latest ver), and your plugin as well. The thing is, scrollTo shuts off when I have jScrollPane enabled. If I comment jScrollPane out, however, it works just fine. Any ideas on how I might be able to address this? I know jScrollPane has its own scrollTo functionality, but I need yours for the selector power. Would I need to edit jScrollPane's scrollTo functions in the plugin somehow?
Thanks,
Patrick
I'm trying to get this working with a jquery mobile app using the scrollview. once the user flicks down my search results and tries another search, I'd like to reset the scrollview. Do you have any experience with using your plugin with the jquery mobile scrollview?
Guys, I don't understand how to put this in my web site. Can you show me some basic code so I can understand?
I need to use this plugin on the web site and I didn't find another that does the same.
my e-mail: contato@gbvdesigner.com
Please try to help as fast as you can.
Thanks
Hi guys. I'm using scrollTo and localScroll on a couple of sites and it's working perfectly in all browsers except for Google Chrome. I'm simply setting all anchor links to scroll on the page using: $(function(){$.localScroll();}); Any ideas why it isn't working? and | http://flesler.blogspot.com/2009/05/jqueryscrollto-142-released.html?showComment=1271674249862 | CC-MAIN-2017-26 | refinedweb | 12,493 | 75 |
Creating style guides has always been hard work. It can take a huge amount of time, especially if you want to make it interactive, and it usually gets out of date as time goes by.
Vue-styleguidist is a nice and easy-to-use tool to create component style guides with interactive documentation in Vue.js. While Storybook for Vue has a manual approach to create interactive docs, vue-styleguidist statically analyzes your source code creating the docs automatically.
Setup vue-styleguidist
In your Vue.js project, install vue-styleguidist as a dev dependency from npm:

```shell
$ npm install --save-dev vue-styleguidist
```
Add the following two npm scripts to your package.json file:
package.json
{ "scripts": { "styleguide": "vue-styleguidist server", "styleguide:build": "vue-styleguidist build" } }
Finally, you have to set up some webpack config in order to make it work. If you already have a webpack.config.js in your project's root, vue-styleguidist will load that config.
If that’s not the case, you must create a styleguide.config.js in your project’s root. In there, you could add the webpack config by loading it from another file:
styleguide.config.js
```js
module.exports = {
  webpackConfig: require('./somewhere/webpack.config.js')
};
```
Or you could even load a different webpack config:
styleguide.config.js
```js
module.exports = {
  webpackConfig: {
    module: {
      rules: [
        {
          test: /\.vue$/,
          exclude: /node_modules/,
          loader: "vue-loader"
        },
        // For js or css files:
        {
          test: /\.js?$/,
          exclude: /node_modules/,
          loader: "babel-loader"
        },
        {
          test: /\.css$/,
          loader: "style-loader!css-loader"
        }
      ]
    }
  }
};
```
In any case, remember to install the loaders you're using, at least vue-loader.
Note that your project doesn’t have to use webpack. You could use other bundlers like Poi or Parcel. Styleguidist uses webpack internally, but it’s only needed for the required loaders.
Documenting a Component
vue-styleguidist will look for components using the glob pattern src/components/**/*.vue, but you can configure it using the components key in the styleguide.config.js file:
styleguide.config.js
```js
module.exports = {
  components: "src/**/*.vue"
};
```
Using the default convention, let's create the file src/components/AppButton.vue and paste the following simple component:
AppButton.vue
```html
<template>
  <button :style="styles" @click="handleClick">
    <slot></slot>
  </button>
</template>

<script>
export default {
  name: "app-button",
  props: {
    color: {
      type: String,
      default: "black"
    },
    background: {
      type: String,
      default: "white"
    }
  },
  computed: {
    styles() {
      return {
        color: this.color,
        background: this.background
      };
    }
  },
  methods: {
    handleClick(e) {
      this.$emit("click", e);
    }
  }
};
</script>
```
The name component option is required.
Now run npm run styleguide, and you can navigate to the url displayed in the console, usually. Out of the box, you'll see the props are already documented on the component doc.
Keep in mind that only the component's ins and outs must be documented. In Vue.js, those are props, slots and events.
Props
As you could see, props are documented by default using the type and default values, but we can add more options by adding JSDoc comments.
For example, we could add a description to the props:
AppButton.vue
```js
{
  props: {
    /**
     * Sets the button font color
     */
    color: {
      type: String,
      default: "black"
    },
    /**
     * Sets background color of the button
     */
    background: {
      type: String,
      default: "white"
    }
  }
};
```
We could also mention that the background property is available since version 1.2.0 by using @since:
AppButton.vue
```js
{
  props: {
    /**
     * Sets background color of the button
     * @since 1.2.0
     */
    background: {
      type: String,
      default: "white"
    }
  }
};
```
As another example, we can mark a property as deprecated using @deprecated. That property will look crossed out:
AppButton.vue
```js
{
  props: {
    /**
     * Sets the button font color
     * @deprecated Use color instead
     */
    oldColor: String
  }
};
```
Slots
Slots can be documented by using an html-like comment right before the <slot> tag, using the @slot doc tag:
AppButton.vue
```html
<template>
  <button :style="styles" @click="handleClick">
    <!-- @slot Use this slot to place the button content -->
    <slot></slot>
  </button>
</template>
```
You can use as many @slot comments as the number of slots you have, in case you use named slots.
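For named slots the same pattern applies, with one @slot comment per slot. A sketch (the "icon" slot name here is made up for illustration, reusing the AppButton template from above):

```html
<template>
  <button :style="styles" @click="handleClick">
    <!-- @slot Icon shown before the label (hypothetical named slot) -->
    <slot name="icon"></slot>
    <!-- @slot Use this slot to place the button content -->
    <slot></slot>
  </button>
</template>
```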
Events
In Vue.js, events are emitted using the this.$emit() function anywhere within methods.
Given that fact, events are documented by adding an @event comment, and it can be placed anywhere in the method where it's emitted. They usually go along with @type in order to define the event payload type.
For clarity, I’d suggest placing them on the method definition:
AppButton.vue
```js
{
  methods: {
    /**
     * Triggered when button is clicked
     * @event click
     * @type {Event}
     */
    handleClick(e) {
      this.$emit("click", e);
    }
  }
}
```
If you emit several events, you can place just as many comments:
AppButton.vue
```js
{
  methods: {
    /**
     * Triggered when button is clicked
     * @event click
     * @type {Event}
     */
    /**
     * Event for Alligator's example
     * @event gator
     * @type {Event}
     */
    handleClick(e) {
      this.$emit("click", e);
      this.$emit("gator", e);
    }
  }
}
```
Mixins
Mixins are objects which can contain props, methods and other logic to share between components.
Aside from documenting their props and events, they must include a @mixin doc tag in order to be recognized by vue-styleguidist.
For example, let’s create the following mixin:
sizeMixin.js
```js
/**
 * @mixin
 */
export default {
  props: {
    /**
     * Set size of the element
     */
    size: {
      type: String,
      default: "14px"
    }
  }
};
```
And use it in the AppButton.vue component:
AppButton.vue
```js
import sizeMixin from "./sizeMixin";

export default {
  name: "app-button",
  mixins: [sizeMixin],
  //...
};
```
Then, if you still have the styleguidist server running, you'll see that the size prop got merged with the AppButton props.
Extended Docs and Examples
There are a few ways to add more docs to the component:
- Adding a Readme.md file, if your component is within its own folder.
- Adding a .md file with the same file name as the component.
- Adding a <docs>...</docs> tag on the .vue file.
Since I like the approach and atomicity of single file components, I'm going for the option of using the <docs> tag. But you could choose the one you feel most comfortable with; the result is the same.
In there, you can add any static information, but the real potential is with examples. You can create an interactive example of the component just by adding the code like you'd do in a markdown file.
The simplest form is by tagging the code blocks as jsx:
```jsx <app-buttonPush Me</app-button> ```
And for more complex examples, you could create your own Vue instance, tagged as js:

```js
new Vue({
  data: () => ({ message: '' }),
  template: `
    <div>
      <app-button>
        Push Me
      </app-button>
      {{message}}
    </div>
  `
})
```
Let’s therefore add a few examples to our
AppButton component:
<docs>
This button is amazing, use it responsibly.

## Examples

Orange button:

```jsx
<app-button>Push Me</app-button>
```

Ugly button with pink font and blue background:

```jsx
<app-button>
  Ugly button
</app-button>
```

Button containing custom tags:

```jsx
<app-button>
  Text with <b>bold</b>
</app-button>
```
</docs>
Wrapping Up
vue-styleguidist gives us a very easy and automated way to add docs to our components, ending up in a fully featured interactive style guide that other developers and designers can use and play around with.
You can download the complete example from this github repo.
Stay cool 🦄 | https://alligator.io/vuejs/vue-styleguidist/ | CC-MAIN-2019-35 | refinedweb | 1,160 | 66.44 |
okular
#include <ebook_search.h>
Detailed Description
Definition at line 27 of file ebook_search.h.
Constructor & Destructor Documentation
Definition at line 72 of file ebook_search.cpp.
Definition at line 78 of file ebook_search.cpp.
Member Function Documentation
Definition at line 136 of file ebook_search.cpp.
Generates the search index from the opened CHM file.
To show the progress, this procedure emits a progressStep() signal periodically with the value showing current progress in percentage (i.e. from 0 to 100) After signal emission, the following event processing function will be called: qApp->processEvents( QEventLoop::ExcludeUserInputEvents ) to make sure the dialogs (if any) are properly updated.
Definition at line 93 of file ebook_search.cpp.
Returns true if a valid search index is present, and therefore search could be executed.
Definition at line 222 of file ebook_search.cpp.
Loads the search index from the data stream.
Definition at line 84 of file ebook_search.cpp.
Executes the search query.
Note that the function does not clear the results array.
Definition at line 154 of file ebook_search.c. | https://api.kde.org/4.x-api/kdegraphics-apidocs/okular/html/classEBookSearch.html | CC-MAIN-2019-30 | refinedweb | 176 | 51.95 |
Reducing the YouTube response time by 90%
In this blog post, we are going to cover how the audio from Youtube is being used in SUSI Smart Speaker and how we reduced the response time from ~40 seconds to ~4 seconds for an average music video length.
First Approach
Earlier, we were using MPV player’s inbuilt feature to fetch the YouTube music. However, MPV player was a bulky option and the music server had to be started every time before initiating a music video.
video_process = subprocess.Popen(['mpv', '--no-video', '' + video_url[4:], '--really-quiet'])  # nosec #pylint-disable type: ignore
requests.get('' + video_url)
self.video_process = video_process
stopAction.run()
stopAction.detector.terminate()
Making it Efficient
To reduce the response time, we created a custom music server based on Flask, python-vlc and python-pafy, which accepts requests from the main client and instructs the system to play the music, cutting the response time by roughly 90%.
app = Flask(__name__)
Instance = vlc.Instance('--no-video')
player = Instance.media_player_new()
url = ''

@app.route('/song', methods=['GET'])
def youtube():
    vid = request.args.get('vid')
    url = '' + vid
    video = pafy.new(url)
    streams = video.audiostreams
    best = streams[3]
    playurl = best.url
    Media = Instance.media_new(playurl)
    Media.get_mrl()
    player.set_media(Media)
    player.play()
    display_message = {"song": "started"}
    resp = jsonify(display_message)
    resp.status_code = 200
    return resp
However, shifting to this Server removed the ability to process multiple queries and hence we were unable to pause/play/stop the music until it completed the time duration. We wanted to retain the ability to have ‘play/pause/stop’ actions without implementing multiprocessing or multithreading as it would’ve required extensive testing to successfully implement them without creating deadlocks and would’ve been overkill for a simple feature.
Bringing Back the Lost Functionalities
The first Step we took was to remove the vlc-python module and implement a way to obtain an URL that we use in another asynchronous music player.
@app.route('/song', methods=['GET'])
def youtube():
    vid = request.args.get('vid')
    url = '' + vid
    video = pafy.new(url)
    streams = video.audiostreams
    best = streams[3]
    playurl = best.url
    display_message = {"song": "started", "url": playurl}
    resp = jsonify(display_message)
    resp.status_code = 200
    return resp
The next issue was to actually find a way to run the music player asynchronously. We used the `subprocess.Popen` method and cvlc to play the songs asynchronously.
try:
    x = requests.get('' + video_url[4:])
    data = x.json()
    url = data['url']
    video_process = subprocess.Popen(['cvlc', 'http' + url[5:], '--no-video'])
    self.video_process = video_process
except Exception as e:
    logger.error(e)
And this is how we were able to increase the efficiency of the music player while maintaining the functionalities. | https://blog.fossasia.org/tag/susi/ | CC-MAIN-2019-51 | refinedweb | 428 | 51.65 |
Linked List Posts in Pelican
I will eventually write up the ever so exciting story of building a Pelican static blog site, but until then, here's a trick that required a visit to the Pelican IRC channel.1
Pelican provides access to any user defined Markdown meta data field. That means I can define a "link:" field to create a DF-style linked post title automatically. Here's the bit of Jinja code that is in my "Article" and "index" templates.
<h1>
  {% if article.link %}
    <a href="{{ article.link }}" rel="bookmark"
       title="External Link">{{ article.title }}</a>
  {% else %}
    <a href="{{ SITEURL }}/{{ article.url }}" rel="bookmark"
       title="Permalink to {{ article.title }}">{{ article.title }}</a>
  {% endif %}
</h1>
For this to work my Markdown header looks like this:
title: Awesome Post
tags: blogging about blogging
link:
The "{{article.link}}" token inserts the meta data found for the MD header element "link".
I'm still trying to understand how to create RSS feed link-style articles. | http://macdrifter.com/2012/08/linked-list-posts-in-pelican.html | CC-MAIN-2014-41 | refinedweb | 163 | 59.09 |
question about how to Search for custom text in the ListView
By
nacerbaaziz, in AutoIt General Help and Support
Similar Content
- By lenclstr746
HELLO GUYS
I'm working on a background see-and-click bot project.
I can complete it if you help me
(using ImageSearch, GDI+ and FastFind).
- By dadalt95
Perform a simple Google search!
The script below works fine until it fills the Google form!
What I can't find is how to submit the form; I tried a couple of ways and none of them worked.
#include <IE.au3>

$oIE = _IECreate("")
$o_form = _IEFormGetObjByName($oIE, "f")
$o_login = _IEFormElementGetObjByName($o_form, "q")
$username = "80251369"
_IEFormElementSetValue($o_login, $username)
$o_numer = _IEGetObjByName($o_form, "btnK")
_IEAction($o_numer, "click")
The code runs without any problem.
I don't know how to proceed!
Thanks in advance! | https://www.autoitscript.com/forum/topic/192393-question-about-how-to-search-for-custom-text-in-the-listview/ | CC-MAIN-2019-04 | refinedweb | 131 | 61.67 |
06 May 2010 17:53 [Source: ICIS news]
LONDON (ICIS news)--Total Petrochemicals has declared force majeure (FM) on polypropylene (PP) deliveries in Europe, the company said in a letter to customers on Thursday, due to a technical fault at an undisclosed facility.
The letter reported an "unforeseeable and extraordinary" technical problem, related to the supply of propylene, which led to three PP lines being stopped in emergency mode on Wednesday 5 May.
In the letter, Total said it would do its utmost to ensure continuity of supply, but that it would face very difficult times in the coming weeks.
Although the plant was not named, market sources suggested it was the 850,000 tonne/year Feluy facility. However, further details were unavailable and the company declined to comment.
PP availability in Europe was already tight, market sources said.
Producers were targeting higher prices again in May, after pushing through hefty hikes in April, leaving net homopolymer prices around €1,250/tonne FD (free delivered) NWE (northwest Europe).
Buyers had taken increases of around 30% in 2010 up to the end of April. Some had expected May to be the top of the current cycle, but buying sources were concerned.
“This will cause chaos,” said one large buyer. “In the current situation we will be letting our customers down. All our products are tight. Availability is the main issue. I will be looking for extra volume with other producers but I don’t know whether I will be able to find any.”
PP | http://www.icis.com/Articles/2010/05/06/9357036/Total-declares-force-majeure-on-Europe-polypropylene.html | CC-MAIN-2014-35 | refinedweb | 248 | 62.58 |
Basic Motor Programming
Topics
Motors
Sleeping
Handouts
Cheat Sheet on todays commands
Lesson
Begin with this sample program. Ask the class what it does. Have the class go through the whole program and explain what each part does.
import josx.platform.rcx.*;
public class Hello
{
public static void main (String[] args)
throws Exception
{
Motor.A.forward();
Motor.C.forward();
Thread.sleep(2000);
Motor.A.stop();
Motor.C.stop();
}
}
Once the students understand the program, ask them how it could be changed so that, instead of going forward, the robot turns for 2 seconds.
Note: I have found it helpful to use the analogy of a wheel chair to explain the concept of turning. So if I was in a wheel chair and the wheel on my left was motor C and on my right was motor A, how do I turn the chair?
Now have the students walk through putting this program on to their robot. Start by having the students open a text editor and copying down this program into their text editor.
NOTE: I highly recommend the use of some sort of "smart" editor, or at least one that colors the text. I use a program called TextPad, which is very similar to Notepad except that it colors text and, if properly set up, will compile and download RCX programs. Instructions on downloading and setting up TextPad can be found
here
However, programs like NotePad will also work.
Walk the students through compiling, linking and downloading the program onto their RCX. (The instructions for doing this can be found here.)
Common Problems:
If you are using
Notepad
it automatically saves the file as a text file unless the "Save as Type" is set to "All Files". If lejos can't find your file try saving it again and make sure "Save as type" is set to "All Files"
If compiling and downloading in
DOS
and it can't find the file make sure that you are in the same directory as the file.
Now that all the students have got that program working explain the program to them line by line.
import josx.platform.rcx.*;
This line ALWAYS needs to be included at the top of ALL programs written for the RCX. Whenever we use a command in a program, like LCD.clear(), the RCX has to know what that means. This line tells the program where to find a "library" of all the terms. (If you are using TextPad, when the lejos command "links" the program it looks at this line to find the "library" and uses it to interpret the code.)
public class Hello
This line tells the program that its name is Hello and it is of type public, which means any other program can run it. All programs we will be writing in these lessons will be type public.
{
This is a begining bracket. Everything between the { } brackets is part of the program called Hello.
public static void main (String[] args)
This line needs to be in every program. This denotes the "main" method (we will learn about methods later). This is what will be run when the program starts.
throws Exception
This line deals with exceptions. For now just type it in exactly as shown here. We will explain exceptions later.
{
This is another beginning bracket. This one begins the main method; the main method ends with the } bracket.
Motor.A.forward();
Motor.C.forward();
This tells the RCX to turn on Motors A and C both in the forward direction.
Thread.sleep(2000);
This tells the program to wait or sleep for 2000 milliseconds.
Motor.A.stop();
Motor.C.stop();
This tells the RCX to stop both motors A and C. Note: this means that not only are they turned off but "brakes" are applied. If you want them to turn off and free-spin use the flt() command.
}
}
These two lines end the main method and the Hello class.
Have the students divide into their groups. (Groups of 3 are optimal.) Have each group program their robots to go forward, turn at least 360 degrees and go backwards. After they complete that, have each group design a short dance for their robot to be shown off at the end of the day.
The main point of this lesson is to get the students used to programming the robots. It is very easy to rush through motors and cover them very quickly. However, students need to have a firm understanding of the basics before they try the complex programs. | http://www.sci.brooklyn.cuny.edu/~sklar/er/curriculum/lessons/lejos_motors.html | crawl-002 | refinedweb | 759 | 80.72 |
Using.
Update, July 2015: VS 2015 fixes this bug such that StdMaxTest now generates perfect code.
I recommend writing template Min/Max functions that return by value instead of by const reference. In most cases these can be used as drop-in replacements for std::min and std::max, and they cause VC++ to generate much better code. Replacing std::min and std::max won’t always cause a large improvement, but I don’t think it ever hurts.
One of the advantages of working at Valve is having really smart coworkers. This particular code-gen problem was found by Michael Abrash. When he said that the compiler was being difficult I figured it was probably worth investigating.
We don’t need no stinkin’ macros
Macro implementations of Min and Max have four main problems. The first is that if they are not carefully written with lots of parentheses then you can hit non-obvious precedence problems. This can be dealt with by careful placement of parentheses in the macro definitions.
The second problem with macros is multiple evaluations of the arguments. If one of the parameters to a min/max macro has side effects then these multiple evaluations will cause incorrect behavior. In other cases the multiple evaluations might cause bloated code or reduced performance. This problem can be avoided by never passing arguments that have side effects to a macro.
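As a concrete illustration (the macro and helper names here are just for the example), even a fully parenthesized macro evaluates a side-effecting argument twice:

```cpp
// Fully parenthesized, so precedence is safe - but arguments are
// still textually substituted and can be evaluated twice.
#define MAX(a, b) (((a) > (b)) ? (a) : (b))

int g_counter = 0;
int next() { return ++g_counter; }  // argument with a side effect

// MAX(next(), 0) expands to ((next() > 0) ? next() : 0):
// when next()'s result wins the comparison, next() runs again.
```

With g_counter starting at 0, MAX(next(), 0) returns 2 and leaves g_counter at 2, because the winning argument is evaluated a second time.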
The third problem is that macros don’t care whether your types match. You can pass a float and an int to MIN, or a signed and an unsigned, and the compiler will happily do conversions, sometimes leading to unexpected behavior.
The fourth problem with macros is that they pollute all namespaces. Yucch.
It might be possible to ensure that all of your macros are implemented correctly to avoid precedence problems, but it is very difficult to ensure that all uses of all macros avoid side effects and unplanned conversions, and namespace pollution is unavoidable with macros.
Functions it is
Inline functions avoid these problems, and inline template functions offer the promise of perfectly efficient min/max for all types with none of the problems associated with macros. Here’s the general form for a template max function:
template<class T> inline const T& max(const T& a, const T& b)
{
return b < a ? a : b;
}
This function is designed to work with any type that is comparable with operator<, so T can be an int, double, or your class FooBar. Because the function needs to handle arbitrary types the parameters and return type need to be declared as const references. This allows finding the maximum of objects that are expensive to copy or uncopyable:
auto& largest = std::max(NonCopyable1, NonCopyable2);
However that const reference return type gives Visual C++’s optimizer heartburn when std::max is used with built-in types.
In order to see what sort of code template min/max generate we need to call them. Below we have the world’s simplest test of std::max:
int StdMaxTest(int a, int b)
{
return std::max(a, b);
}
By compiling this in a release build with link-time-code-generation (LTCG) disabled and /FAcs enabled we can get a sense of the code-gen in a simple scenario, without even having to call the function. This technique was described in more detail in How to Report a VC++ Code-Gen Bug. Here’s what the assembly language from the .cod file looks like:
?StdMaxTest@@YAHHH@Z
push ebp
mov ebp, esp
mov ecx, DWORD PTR _a$[ebp]
lea eax, DWORD PTR _b$[ebp]
cmp ecx, DWORD PTR _b$[ebp]
lea edx, DWORD PTR _a$[ebp]
cmovge eax, edx
mov eax, DWORD PTR [eax]
pop ebp
ret 0
The first two and last two instructions are boilerplate prologue and epilogue. I’ve put the code that does the work in bold. There are six instructions and, roughly speaking, they conditionally select the address of the largest value, then load the winner from the specified address.
Now let’s consider an alternative definition of a template max function. This function is identical to std::max except that its return type is a value instead of a reference. Here’s the function and a test caller:
template <class T>
T FastMax(const T& left, const T& right)
{
return left > right ? left : right;
}
int FastMaxTest(int a, int b)
{
return FastMax(a, b);
}
And here’s the generated code:
?FastMaxTest@@YAHHH@Z
push ebp
mov ebp, esp
mov eax, DWORD PTR _b$[ebp]
cmp DWORD PTR _a$[ebp], eax
cmovg eax, DWORD PTR _a$[ebp]
pop ebp
ret 0
The inner section of the function – the part that does the actual work – is three instructions instead of six. Instead of selecting the winning address and then loading the value it just selects the winning value.
All else being equal, smaller and shorter code is better. A shorter dependency chain means higher peak speed, and a smaller footprint means fewer i-cache misses. While lower instruction counts don’t always equal higher speed, if all else is equal (no expensive instructions) they should give equivalent or better speed. The smaller code won’t necessarily be faster, but it will not be slower, and being smaller is a real advantage.
Timing differences
Measuring the performance of three to six instructions is impossible. Given that modern processors can have far more instructions than that in-flight at one time it isn’t even well defined. So, we need a better test.
The simplest timing test I could think of was a loop that scans an array to find the largest value. To compare FastMax and std::max I just need to call them each a bunch of times on a moderately large array and see which one is fastest. In order to avoid distortions from context switches and interrupts I print both the fastest and slowest times but I ignore the slowest times. Here’s one version of the test code:
int MaxManySlow(const int* p, size_t c)
{
int result = p[0];
for (size_t i = 1; i < c; ++i)
result = std::max(result, p[i]);
return result;
}
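The fast counterpart isn't shown; based on the name MaxManyFast used later in the size comparison, it presumably differs only in the max function it calls:

```cpp
#include <cstddef>

// By-value FastMax, as defined earlier in the post.
template <class T>
T FastMax(const T& left, const T& right)
{
    return left > right ? left : right;
}

// Same scan loop as MaxManySlow, but using FastMax.
int MaxManyFast(const int* p, size_t c)
{
    int result = p[0];
    for (size_t i = 1; i < c; ++i)
        result = FastMax(result, p[i]);
    return result;
}
```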
The results are dramatic. The differences are way more extreme than I had expected. The FastMax code runs three times faster than the std::max code! The code using FastMax took two cycles per iteration, whereas the code using std::max takes six cycles per iteration.
Here is the inner loop generated when using std::max:
SlowLoopTop:
1: cmp ecx,dword ptr [edx]
2: lea eax,[result]
3: cmovl eax,edx
4: add edx,4
5: mov ecx,dword ptr [eax]
6: mov dword ptr [result],ecx
7: dec esi
8: jne SlowLoopTop
Here is the inner loop generated when using FastMax:
FastLoopTop:
1: cmp eax,dword ptr [esi+edx*4]
2: cmovle eax,dword ptr [esi+edx*4]
3: inc edx
4: cmp edx,edi
5: jb FastLoopTop
Remember that the only difference between the source code of these two functions is the return type of the Max function called. If I change FastMax to return a const reference then it generates code identical to std::max.
The problems handling the std::max return type is apparently an optimizer weakness in VC++. I’ve filed a bug to show the problem and I’m hopeful that the VC++ team will address it. Until then I recommend using FastMax instead of std::max (and FastMin instead of std::min).
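The post doesn't show FastMin; by symmetry with FastMax it would presumably look like this:

```cpp
// Assumed definition, mirroring FastMax with the comparison reversed.
template <class T>
T FastMin(const T& left, const T& right)
{
    return left < right ? left : right;
}
```

As with FastMax, returning by value rather than by const reference is what lets the optimizer select values directly instead of addresses.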
All testing was done with VC++ 2013, release builds, with the /O2 optimization setting. Testing was done on an Intel Sandybridge CPU on Windows 7. I tested one other Intel and got similar but not identical results. I saw similar results with VC++ 2010, so this is not a new problem.
One final glitch
In order to protect programs against exploits of overruns of stack based buffers VC++ inserts calls to _security_check_cookie() at the end of vulnerable functions if you compile with /GS. A vulnerable function is one that VC++ thinks could have a buffer overrun. Unfortunately, using std::max in our test function triggers the /GS code so MaxManySlow is further penalized by having code to check for impossible buffer overruns. This doesn’t affect the performance of our test because I passed in a large enough array, but if a small array was passed in then this would be an additional bit of overhead. And, the extra code wastes more space in the instruction cache. MaxManySlow is 35 bytes larger than MaxManyFast – 73 bytes versus 38 bytes.
Other types
I’m not interested in testing this with all of the built-in types, but I did test with float. The minimal test functions showed different code-gen for the two template functions, but it wasn’t obvious which was better. When I ran tests on arrays of floats FastMax was about 4.8x faster than std::max. The loop with std::max took 7.33 cycles per iteration, while the loop with FastMax took an impressive 1.5 cycles per iteration.
Test code is attached to the bug and is also (newer version) available here. Run the release build to see the array results, compile the release build and look at MaxTests.cod to see the code-gen of the micro test functions.
I’m surprised I never noticed this before. I guess I got lucky because when I pushed my coworkers from macro MIN/MAX to template Min/Max I accidentally did return by value.
Update
I’ve tested this lightly with gcc and it seems to handle the references without any problems. I hear clang handles it fine also. However, using a single micro-test to compare compiler quality is clearly meaningless, so don’t over extrapolate.
deniskravtsov suggested using overloads of min/max for built-in types to avoid this problem, which is a fascinating idea. However, if you put the overloads of min/max in the global namespace then they will not always be used and if you put them in the std:: namespace then you are probably breaking the language rules. I like using FastMax better.
While testing the overload suggestion I found that if the overload takes its parameters by value then the performance is slightly lower than if it takes them by reference. I don’t know if that is a general trend or if it is peculiar to this one test. It does seem quite odd that the parameter type and return type of a function that is inlined would cause so much trouble! The performance difference from the parameter types is slight so I wouldn’t read too much into it.
Reddit discussion is here and here.
A few comments to my readers…
A surprising number of people said that the problem would go away if std::max was inlined by the compiler. Uhhh – in both of my examples std::max was inlined by the compiler.
There were also several people saying that this would never happen in real code. Well, it was originally found in real code. That code needed high performance, and it took quite a while to find out what was making the optimizer generate imperfect code. The slowdown in that case wasn’t three times, but it was enough to matter. Also, I think MaxManySlow looks like real code – I’m sure I’m not the first person to write that exact loop.
FastMax may indeed generate slower code for heavyweight types – but maybe not. That test will have to wait for another day.
Good find! This makes me wonder if VS is weak in optimizing away references in general – what other opportunities are being missed?
That would be fascinating to investigate. The one factor that probably reduces the frequency of the problems is that references are not frequently used to return built-in-types, and that seems to be what triggers this.
Well as an example: VS2010 fails to inline some embarrassingly inlinable functions marked as inline if we separate the definition from the declaration. We compile with full optimisation and link time code generation, so no excuses.
It even fails to inline with the MS specific __forceinline keyword! I know it’s only a suggestion still… but seriously, if you’ve gone to the effort to use __forceinline you really are expecting a function to be inlined.
That being said, it does seem like Microsoft has significantly picked up their game since VS2010 and it’s only getting better each release. However, this is only based on my personal code and not a 2+ million line code base like my VS2010 experience.
I wouldn’t complain though. Each compiler has its own little idiosyncrasies and if you need total performance then you profile and special case for each target architecture.
In the end Bjarne sums it up pretty well in that C++ has become too “expert friendly”. For me, this doesn’t just include the core language but the difficulty in writing a decent optimising compiler as well.
I strongly recommend creating a repro of the missed optimization opportunities and filing a bug. Either that will help find a workaround, or it will give Microsoft a chance to fix an issue that you care about.
If nothing else it would let us have a concrete discussion.
Thanks for the tip, we use std::min and std::max quite extensively instead of macros, for the reason you’ve mentioned at the beginning, and porting our code to Windows is on the list for the near future.
One thing I would like to point out though, more instructions doesn’t always mean longer runtime! Just looking at the instruction count will rarely give a clue about the runtime of something. I know that you know that, but just in case someone sees this post and is now going through the assembly their compiler generated and tries to get the instruction count down.
Yep, there are definitely cases where more instructions is better — loop unwinding being one obvious example, or replacing a multiply or divide with multiple instructions. In this case the extra instructions are not offering those benefits so the best we can hope for is identical performance with more bloat.
Ah, not sure my comment went through. I’m normally using an overload for POD types in std:
namespace:
{
int min(int i1, int i2)
{
return i1 > i2 ? i1 : i2;
}
}
This allows to use generic code “specialising” for specific cases.
BTW the /GS option terminates the process in case things go wrong and you don’t get a crash dump, so you need to keep guessing unless you turn it off. MS needs to change this default behaviour or allow it to be customised.
ha-ha, typos “namespace:” and “i1<i2" but you get the idea 8)
Interesting idea. I added some commentary on it to the end of the post.
You are not allowed to extend the std namespace, but you ARE allowed to specialize functions inside of it. That pattern is used to implement std::hash for custom types.
Yeah, that’s what I thought. Unfortunately specializing std::min and std::max doesn’t help since that can’t change the return type. Overloading is the only option, and that can’t legally be done inside of namespace std. So I still prefer FastMax.
Have you looked into somehow using SSE to speed up operations like this?
If finding the maximum value of an array was performance-critical then I’d look at using SSE and I’m sure it would help a lot. The tradeoff, of course, is increased complexity and reduced portability. The actual loop where we found this problem was more complex, and less amenable to SSE optimization.
By the way, how about making FastMax constexpr?
VS 2013 does not support constexpr so making FastMax constexpr wasn’t really practical. I could have downloaded the VS 2013 CTP () but constexpr didn’t seem relevant to this investigation.
@denis I like how your sample function implements a `max` under the name of `std::min`🙂
Aside from that, as per my own comment, observe how this silently changes behaviour of existing, standards-compliant code:
– specified standard library behaviour:
– poisoned standard library behaviour:
That’s outrageous. I can confirm this is true for 64-bit code generation under VS2010 as well:
?DoubleMaxTest@test@@YANNN@Z PROC ; test::DoubleMaxTest
; 42 : return test::Max(a, b);
comisd xmm0, xmm1
movsdx QWORD PTR [rsp+16], xmm1
movsdx QWORD PTR [rsp+8], xmm0
lea rax, QWORD PTR a$[rsp]
ja SHORT $LN7@DoubleMaxT@2
lea rax, QWORD PTR b$[rsp]
$LN7@DoubleMaxT@2:
movsdx xmm0, QWORD PTR [rax]
?DoubleMaxTestByValue@test@@YANNN@Z PROC ; test::DoubleMaxTestByValue
; 58 : return b < a ? a : b;
comisd xmm0, xmm1
ja SHORT $LN4@DoubleMaxT
movapd xmm0, xmm1
$LN4@DoubleMaxT:
If you want to see something really silly, take a look at the generated code when one or both arguments are immediates. For example for std::cout << std::max(3,6) I get:
000000013F802D1C mov dword ptr [rbp+88h],6
000000013F802D26 mov dword ptr [rbp+0C8h],3
000000013F802D30 mov dword ptr [rbp+0D8h],6
000000013F802D3A mov edx,6
000000013F802D3F mov rcx,qword ptr [__imp_std::cout (13F9D33E0h)]
000000013F802D46 call qword ptr [__imp_std::basic_ostream<char,std::char_traits<char> >::operator<< (13F9D33C8h)]
I have no idea why it's storing 6 twice here. Using the by-value version as expected doesn't store anything.
Make sure you are testing an optimized (release) build. When I test std::max(3,6) on a VS 2013 optimized 64-bit build I get “mov eax, 6\nret 0”, which is perfect.
The other tests seem to behave similarly with the 64-bit compiler, so it has the same weakness as the 32-bit compiler.
Indeed, /Ox. We already always use our own version of min/max (templated though and defined pretty much exactly like the standard versions). Some calls are optimized just fine, many though are not. After switching to overloads (but still using the reference return) I don’t see any extraneous stores. So there’s something about the const& return plus the template that confuses the thing.
Unfortunately it doesn’t seem like you can have overloads for common types plus a template version to pick up everything else– not only does the type safety go out the window but it picks the (double,double) overload in far more cases than I’d expect, including a nasty one where someone provided an operator bool() (sigh).
I briefly flirted with using some sort of SFINAE monstrosity to have the template change the return value based on the arguments (I used has_trivial_destructor) but could not get past the need to explicitly specify the type of the argument in order to use the specialization… adding a helper method to do it for me doesn’t help obviously because then that itself has a return value that needs to be defined (and taking a const& to a temporary return value isn’t a great idea). I guess it’s better to not fight this one, and just wait for a fix.
I’m not sure why you are getting different results from me. In my tests I find that the only thing that matters is the return type — template versus not makes no difference. I would be surprised if being a template mattered since a template is just a way of stamping out code and shouldn’t affect code generation.
I’m also unclear if you saying that you used /Ox on the std::max(3,6) test because in my tests that construct produces perfect code in optimized builds. If it is not for you then you should check your build settings.
Anyway, I recommend changing your min/max return type. The reference return type is rarely needed — you can always use std::max or MaxRef() when you need a reference return type.
Note I’m using 2010, not 2013, but other than that I used your setup above with the wonderful /FAcs trick to iterate. Since I expect further optimization to occur during linking I just skimmed around a production binary for the silly store pattern in functions I know make heavy use of min/max on UDTs that essentially just wrap a double.
I think it’s good advice to just flat out change the return type to a value, after all, how often does a “complex” UDT get used in min/max that is also performance-critical?
Certainly a good find. Hope this will improve the compiler in the future.
I got a bit worried with your early recommendation of template min/max as “drop-in replacement”.
I think casual readers may need a bit more explicit warning about the cases where the behaviour changes from standard C++.
(I’d have preferred std::min(a,b), std::min(std::ref(a), std::ref(b)) and std::min(std::cref(a), std::cref(b)) etc. so users would have had control. Alas, that’s history).
I’m also a bit sceptical about this being significant in any real code base. It won’t be noticeable unless you’re executing a large number of similar operations in an algorithm. In which case I’d argue that it is always going to be trivial to optimize your higher-level algorithm. (Like in the case of your benchmark “naive max_element”).
And that’s exactly what the programmer of said algorithm should already be doing, IMO.
Cheers
When would FastMax not be a good replacement? Obviously for noncopyable objects it won’t work, and for expensive to copy objects it may be worse (it depends on what the optimizer does and how you use it) but I’m trusting my readers to realize that.
This issue definitely was significant in a real code base. Michael Abrash was trying to optimize some very performance sensitive code and his (entirely reasonable) use of std::max was causing enough slowdown to matter. It wasn’t causing a 3x slowdown, but it was noticeable. The trouble is that it wasn’t at all obvious that std::max was what was making the compiler get stupid, so it took a while to find the fix.
Thanks for adding that. I gave an example of a semi-contrived case where
behaviour changes in response to @denis. I just said that it’s not a _general_
drop-in replacement (standards conformance wise).
Anyways, in my experience: first I write _intentional_ code (i.e. high-level
code, using standard abstractions as much as possible) and aim for 1.
correctness 2. expressiveness (in that order; they often jibe really well).
This would be the phase that _might_ sport a `std::max` call or two.
In the next (optional) phase, I optimize when the profiler tells me. This phase
often sees me routinely dissolving “minor” abstractions used (like
std::min/max) because it sacrifices little in terms of expressiveness.
Of course, in practice the “big wins” come from other types of changes:
* break the abstraction and use intermediate state or “inside information” to
make the code do the same, while writing more specific detailed steps, e.g.
– “Removing The Varnish”: e.g. fill a vector manually, and sort/index
(lower_/upper_bound ranges) instead of relying on boost’s
bimap/flat_[multi]map/etc.).
– “Crusting on Special Cases”: add manual administrative overhead to take
advantage of special cases
Very little surprises me anymore in that phase. I admit that I don’t often go
down to the generated assembly anymore. I think that’s mainly because I can
interact with profiler data on the source level just fine.
The other side of it perhaps just means this: I’m not a compiler or
(proprietary) library developer🙂
🙂
heh my irony tag got eaten [/end unsolicited rant]
I agree with your workflow, with one possible exception. If I have to remove std::max and replace it with FastMax or with manually going if/else or ?: then I get annoyed. std::max should be a zero-cost abstraction. Until a few months ago I thought that std::max *was* a zero-cost abstraction.
Nice catch and great article Bruce!
I’m surprised by this. It seems like such simple optimization. gcc and clang seem to handle std::max just fine:
I think I prefer the others suggestion of overloading std::max for builtin types. That way whenever MS gets around to fixing this you can just remove your specializations and not have to change any code.
The problems with overloading std::max are you can’t overload it in the std namespace without changing the meaning of some programs, and if you overload it outside of the std namespace it will not always get used. Hence, I prefer FastMax, and it’s easy enough to switch back to std::max in the future.
Note that, when the arguments are equal, std::max returns the left argument (25.4.7). Your first example of a max implementation under the “Functions it is” header does not behave exactly like std::max. To make them match, I believe you’d want the meat of the function to be:
return a < b ? b : a;
FastMax returns the value of the right argument, but since it is returning by value, this shouldn't matter. Nonetheless, I thought it was worth pointing out to avoid confusion.
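A small sketch of the tie-breaking difference described above (the variant names are made up for illustration); the difference is only observable when returning by reference:

```cpp
// Two reference-returning variants that differ only on ties. With a
// by-value return the difference is unobservable; with a reference
// return it determines which object the caller gets back.
template <typename T>
const T& MaxLikeStd(const T& a, const T& b) {
    return a < b ? b : a;  // tie: returns a (the left argument), like std::max
}

template <typename T>
const T& MaxRightBiased(const T& a, const T& b) {
    return b < a ? a : b;  // tie: returns b (the right argument)
}
```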
Personally, I would use FastMax only in places where performance matters, and only until the compiler vendor fixes the optimizer for std::max. For all the places where performance isn't critical, I'd still use std::max.
Thanks for the point about the subtleties of the std::max contract. I agree that FastMax should be discarded as soon as possible. Luckily s/FastMax/std::max/ is an easy change to make.
Pingback: Optimizer bug in Visual Studio | musingstudio
Thanks for the article. I was reading the article and @denis comment and I thought I’d try to let the compiler decide what type to return.
@seth's comment about the standard is still valid, and my version of std::max (AutoMax) misbehaves for every scalar type, but that's, as you said, a temporary solution.
Also, regarding @adrian's comment, it looks from my tests like the implementation using operator> instead of operator< generates slower code.
I hope the code is correct, it's a quick test I wrote at night and I'm not a template expert🙂
template
struct return_type_impl {
typedef const T& value_type;
};
template
struct return_type_impl {
typedef T value_type;
};
template
struct return_type : public return_type_impl< std::is_scalar::value, T > {
};
template
inline typename return_type::value_type AutoMax(const T& left, const T& right) {
return left < right ? right : left;
}
This code compiles and seems to behave correctly in VS2012 and 2013. It can be further improved using more type_traits (is_trivial) and possibly by considering the size of the type (although this would be platform-specific and I don't think it's useful too often).
In the end I decided to only use is_scalar because it is easy to implement in VS2010/12/13.
Oops. Always the same mistake. The code should display the < > part now:
template < bool B, class T >
struct return_type_impl {
typedef const T& value_type;
};
template < class T >
struct return_type_impl < true, T > {
typedef T value_type;
};
template < class T >
struct return_type : public return_type_impl < std::is_scalar<T>::value, T > {
};
template < class T >
inline typename return_type < T > ::value_type AutoMax(const T& left, const T& right) {
return left < right ? right : left;
}
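For comparison, the same return-type selection can be written more compactly with std::conditional. This is an editor's sketch, not part of the original comment, and the alias template requires C++11 support that the VS2010/2012 compilers discussed here lack:

```cpp
#include <type_traits>

// Same idea as AutoMax above: scalars are returned by value, everything
// else by const reference, but using std::conditional directly.
template <typename T>
using max_result_t =
    typename std::conditional<std::is_scalar<T>::value, T, const T&>::type;

template <typename T>
max_result_t<T> AutoMax2(const T& left, const T& right) {
    return left < right ? right : left;
}
```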
I am reminded of this famous quote:
Some people, when confronted with a problem, think
“I know, I’ll use regular expressions.” Now they have two problems.
For “regular expressions” substitute “template metaprogramming”. I’m intrigued by your solution, but I’m not sure there was ever enough of a problem to justify it. I think the programmer should choose the return type. It’s unfortunate that the default return value for ‘max’ is const-ref and that (for a while) VC++ developers should prefer to return by-value, but I’d rather they learned to call FastMax than AutoMax.
Reference:
For the record, both Clang & GCC generate optimal code here.
Pingback: Rendering experiments framework | Prog stuff
Hi Bruce,
I have not read the entire article, so I apologize in advance if I’m making a mistake.
According to the C++ Standard, max must return the first argument when the arguments are equivalent.
See N3797 [alg.min.max] p9
Your following code snippet returns the second argument when the arguments are equivalent, so it is not standard compliant:
template <typename T> inline const T& max(const T& a, const T& b)
{
return b < a ? a : b;
}
According to Alex Stepanov in his book Elements of Programming, he acknowledges that he made a mistake, and that mistake was propagated to the Standard and is now difficult to fix.
So your version of max is more accurate than the Standard version; unfortunately, implementations have to be conformant with the Standard🙂
Regards,
Fernando Pelliccioni
Sorry about my last comment.
I don’t know why I thought you were a developer on the C++ Standard Library team at Microsoft.
Ignore my comment.
No worries — it’s still interesting to learn about these details.
Pingback: Self Inflicted Denial of Service in Visual Studio Search | Random ASCII
Hi Bruce,
Long time reader of your blog here.
I came across your article because I have a similar, but opposite issue regarding std::max(). I’m using Visual Studio 2013 Update 4. I’m compiling for x64 with full optimizations on (got the same results with and without LTCG).
Here’s my test program:
#include <cstdio>
const float& max_ref(const float& lhs, const float& rhs) {
return lhs < rhs ? rhs : lhs;
}
float max_val(const float lhs, const float rhs) {
return lhs < rhs ? rhs : lhs;
}
int main(int argc, char* argv[]) {
float x;
std::scanf("%f", &x);
std::printf("%f\n", max_ref(x, 0.0f));
std::printf("%f\n", max_val(x, 0.0f));
return 0;
}
max_ref(), which mimics the implementation of std::max(), compiles to
movss xmm0, DWORD PTR [edx]
comiss xmm0, DWORD PTR [ecx]
cmova ecx, edx
mov eax, ecx
ret 0
while max_val() compiles to
comiss xmm1, xmm0
jbe SHORT $LN4@max_val
movaps xmm0, xmm1
$LN4@max_val:
ret 0
That seems to contradict your conclusions, but I suspect we are using slightly different versions of the compiler. My intuition is that VC++ now recognizes the exact signature of std::max() and generates the proper code for it.
Any thoughts?
Franz
The code-gen has changed, and the results also vary depending on whether you test with int or float, and whether you look at std::max or a function that calls (and inlines) it.
The best test is not the code generated for std::max, because that should always be inlined. Instead the best test is a simple wrapper function that calls either std::max or FastMax. With that test, for integers, I see shorter code with a branch for 64-bit FastMaxTest versus StdMaxTest and longer code with a branch for 32-bit FastMaxTest versus StdMaxTest.
So, perhaps different, but certainly not optimal yet.
My one recommendation would be looking at the code-gen for a wrapper function such as FastMaxTest or StdMaxTest, not std::max. The code-gen for std::max may be better, but it isn’t relevant since it will always be inlined.
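The wrapper functions mentioned here aren't shown in the thread; they presumably look something like the following (the names and the FastMax body are assumptions):

```cpp
#include <algorithm>

// Hypothetical by-value max, as discussed earlier in the thread.
template <typename T>
T FastMax(const T& a, const T& b) {
    return a < b ? b : a;
}

// Small non-inlined callers: since std::max and FastMax are themselves
// always inlined, you compare the code generated for wrappers like
// these (e.g. in /FAcs assembly listings) rather than for max itself.
int StdMaxTest(int a, int b)  { return std::max(a, b); }
int FastMaxTest(int a, int b) { return FastMax(a, b); }
```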
Thanks for the answer. I was indeed checking the generated code of a wrapper function (which is main, in my case).
Any news on VS2015’s handling of std::max?
I just checked with VS 2015 RTM and it appears to handle std::max perfectly. The code-gen for StdMaxTest is exactly as it should be.
Pingback: Zeroing Memory is Hard (VC++ 2015 arrays) | Random ASCII | https://randomascii.wordpress.com/2013/11/24/stdmin-causing-three-times-slowdown-on-vc/ | CC-MAIN-2016-40 | refinedweb | 5,275 | 61.56 |
Using Linux containers to analyze the impact of climate change and soil on New Zealand crops
Method models climate change scenarios by processing vast amounts of high-resolution soil and weather data.
Historically, these models have been used primarily for small area (point-based) simulations where all the variables are well known. For large area studies (landscape scale, e.g., a whole region or national level), the soil and climate data need to be upscaled or downscaled to the resolution of interest, which means increasing uncertainty. There are two major reasons for this: 1) it is hard to create and/or obtain access to high-resolution, geo-referenced, gridded datasets; and 2) the most common installation of crop modeling software is in an end user's desktop or workstation that's usually running one of the supported versions of Microsoft Windows (system modelers tend to prefer the GUI capabilities of the tools to prepare and run simulations, which are then restricted to the computational power of the hardware used).
New Zealand has several Crown Research Institutes that provide scientific research across many different areas of importance to the country's economy, including Landcare Research, the National Institute of Water and Atmospheric Research (NIWA), and the New Zealand Institute for Plant & Food Research. In a joint project, these organizations contributed datasets related to the country's soil, terrain, climate, and crop models. We wanted to create an analysis framework that uses APSIM to run enough simulations to cover relevant time-scales for climate change questions (>100 years' worth of climate change data) across all of New Zealand at a spatial resolution of approximately 25km2. We're talking several million simulations, each one taking at least 10 minutes to complete on a single CPU core. If we were to use a standard desktop, it would probably have been faster to just wait outside and see what happens.
Enter HPC
High-performance computing (HPC) is the use of parallel processing for running programs efficiently, reliably, and quickly. Typically this means making use of batch processing across multiple hosts, with each individual process dealing with just a little bit of data, using a job scheduler to orchestrate them.
Parallel computing can mean either distributed computing, where each processing thread needs to communicate with others between tasks (especially intermediate results), or it can be "embarrassingly parallel" where there is no such need. When dealing with the latter, the overall performance grows linearly the more capacity there is available.
Crop modeling is, luckily, an embarrassingly parallel problem: it does not matter how much data or how many variables you have, each variable that changes means one full simulation that needs to run. And because simulations are independent from each other, you can run as many simulations as you have CPUs.
Solve for dependency hell
APSIM is a complex piece of software. Its codebase is comprised of modules that have been written in multiple different programming languages and tightly integrated over the past three decades. The application achieves portability between the Windows and GNU/Linux operating systems by leveraging the Mono Project framework, but the number of external dependencies and workarounds that are required to run it in a Linux environment make the implementation non-trivial.
The build and install documentation is scarce, and the instructions that do exist target Ubuntu Desktop editions. Several required dependencies are undocumented, and the build process sometimes relies on the binfmt_misc kernel module to allow direct execution of .exe files linked to the Mono libraries (instead of calling mono file.exe), but it does so inconsistently (this has since been fixed upstream). To add to the confusion, some .exe files are Mono assemblies, and some are native (libc) binaries (this is done to avoid differences in the names of the executables between operating system platforms). Finally, Linux builds are created on-demand "in-house" by the developers, but there are no publicly accessible automated builds due to lack of interest from external users.
All of this may work within a single organization, but it makes APSIM challenging to adopt in other environments. HPC clusters tend to standardize on one Linux distribution (e.g., Red Hat Enterprise Linux, CentOS, Ubuntu, etc.) and job schedulers (e.g., PBS, HTCondor, Torque, SGE, Platform LSF, SLURM, etc.) and can implement disparate storage and network architectures, network configurations, user authentication and authorization policies, etc. As such, what software is available, what versions, and how they are integrated are highly environment-specific. Projects like OpenHPC aim to provide some sanity to this situation, but the reality is that most HPC clusters are bespoke in nature, tailored to the needs of the organization.
A simple way to work around these issues is to introduce containerization technologies. This should not come as a surprise (it's in the title of this article, after all). Containers permit creating a standalone, self-sufficient artifact that can be run without changes in any environment that supports running them. But containers also provide additional advantages from a "reproducible research" perspective: Software containers can be created in a reproducible way, and once created, the resulting container images are both portable and immutable.
Reproducibility: Once a container definition file is written following best practices (for instance, making sure that the software versions installed are explicitly defined), the same resulting container image can be created in a deterministic fashion.
Portability: When an administrator creates a container image, they can compile, install, and configure all the software that will be required and include any external dependencies or libraries needed to run them, all the way down the stack to the Linux distribution itself. During this process, there is no need to target the execution environment for anything other than the hardware. Once created, a container image can be distributed as a standalone artifact. This cleanly separates the build and install stages of a particular software from the runtime stage when that software is executed.
Immutability: After it's built, a container image is immutable. That is, it is not possible to change its contents and persist them without creating a new image.
These properties enable capturing the exact state of the software stack used during the processing and distributing it alongside the raw data to replicate the analysis in a different environment, even when the Linux distribution used in that environment does not match the distribution used inside the container image.
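As an illustration of the reproducibility point, a container definition that pins everything it pulls in can be rebuilt deterministically. The sketch below is illustrative only; the base-image tag and package version strings are placeholders, not taken from the project:

```dockerfile
# Illustrative Dockerfile: pin the base image tag and package versions
# so the same definition file rebuilds the same environment later.
FROM ubuntu:18.04

# Version-pinned install (the version string here is a placeholder).
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      mono-runtime=4.6.2.7+dfsg-1ubuntu1 \
 && rm -rf /var/lib/apt/lists/*
```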
Docker
While operating-system-level virtualization is not a new technology, it was primarily because of Docker that it became increasingly popular. Docker provides a way to develop, deploy, and run software containers in a simple fashion.
The first iteration of an APSIM container image was implemented in Docker, replicating the build environment partially documented by the developers. This was done as a proof of concept on the feasibility of containerizing and running the application. A second iteration introduced multi-stage builds: a method of creating container images that allows separating the build phase from the installation phase. This separation is important because it reduces the final size of the resulting container images, which will not include any dependencies that are required only during build time. Docker containers are not particularly suitable for multi-tenant HPC environments. There are three primary things to consider:
1. Data ownership
Container images do not typically store the configuration needed to integrate with enterprise authentication directories (e.g., Active Directory, LDAP, etc.) because this would reduce portability. Instead, user information is usually hardcoded explicitly in the image directly (and when it's not, root is used by default). When the container starts, the contained process will run with this hardcoded identity (and remember, root is used by default). The result is that the output data created by the containerized process is owned by a user that potentially only exists inside the container image. NOT by the user who started the container (also, did I mention that root is used by default?).
A possible workaround for this problem is to override the runtime user when the container starts (using the docker run -u… flag). But this introduces added complexity for the user, who must now learn about user identities (UIDs), POSIX ownership and permissions, the correct syntax for the docker run command, as well as find the correct values for their UID, group identifier (GID), and any additional groups they may need. All of this for someone who just wants to get some science done.
It is also worth noting that this method will not work every time. Not all applications are happy running as an arbitrary user or a user not present in the system's database (e.g., /etc/passwd file). These are edge cases, but they exist.
2. Access to persistent storage
Container images include only the files needed for the application to run. They typically do not include the input or raw data to be processed by the application. By default, when a container image is instantiated (i.e., when the container is started), the filesystem presented to the containerized application will show only those files and directories present in the container image. To access the input or raw data, the end user must explicitly map the desired mount points from the host server to paths within the filesystem in the container (typically using bind mounts). With Docker, these "volume mounts" are impossible to pre-configure globally, and the mapping must be done on a per-container basis when the containers are started. This not only increases the complexity of the commands needed to run an application, but it also introduces another undesired effect…
3. Compute host security
The ability to start a process as an arbitrary user and the ability to map arbitrary files or directories from the host server into the filesystem of a running container are two of several powerful capabilities that Docker provides to operators. But they are possible because, in the security model adopted by Docker, the daemon that runs the containers must be started on the host with root privileges. In consequence, end users that have access to the Docker daemon end up having the equivalent of root access to the host. This introduces security concerns since it violates the Principle of Least Privilege. Malicious actors can perform actions that exceed the scope of their initial authorization, but end users may also inadvertently corrupt or destroy data, even without malicious intent.
A possible solution to this problem is to implement user namespaces. But in practice, these are cumbersome to maintain, particularly in corporate environments where user identities are centralized in enterprise directories.
Singularity
To tackle these problems, the third iteration of APSIM containers was implemented using Singularity. Released in 2016, Singularity Community is an open source container platform designed specifically for scientific and HPC environments. "A user inside a Singularity container is the same user as outside the container" is one of Singularity's defining characteristics. It allows an end user to run a command inside of a container image as him or herself. Conversely, it does not allow impersonating other users when starting a container.
Another advantage of Singularity's approach is the way container images are stored on disk. With Docker, container images are stored in multiple separate "layers," which the Docker daemon needs to overlay and flatten during the container's runtime. When multiple container images reuse the same layer, only one copy of that layer is needed to re-create the runtime container's filesystem. This results in more efficient use of storage, but it does add a bit of complexity when it comes to distributing and inspecting container images, so Docker provides special commands to do so. With Singularity, the entire execution environment is contained within a single, executable file. This introduces duplication when multiple images have similar contents, but it makes the distribution of those images trivial since it can now be done with traditional file transfer methods, protocols, and tools.
The Docker container recipe files (i.e., the Dockerfile and related assets) can be used to re-create the container image as it was built for the project. Singularity allows importing and running Docker containers natively, so the same files can be used for both engines.
A day in the life
To illustrate the above with a practical example, let's put you in the shoes of a computational scientist. So as not to single out anyone in particular, imagine that you want to use ToolA, which processes input files and creates output with statistics about them. Before asking the sysadmin to help you out, you decide to test the tool on your local desktop to see if it works.
ToolA has a simple syntax. It's a single binary that takes one or more filenames as command line arguments and accepts a -o {json|yaml} flag to alter how the results are formatted. The outputs are stored in the same path as the input files are. For example:
$ ./ToolA file1 file2
$ ls
file1 file1.out file2 file2.out ToolA
You have several thousand files to process, but even though ToolA uses multi-threading to process files independently, you don't have a thousand CPU cores in this machine. You must use your cluster's job scheduler. The simplest way to do this at scale is to launch as many jobs as files you need to process, using one CPU thread each. You test the new approach:
$ export PATH=$(pwd):${PATH}
$ cd ~/input/files/to/process/samples
$ ls -l | wc -l
38
$ # we will set this to the actual qsub command when we run in the cluster
$ qsub=""
$ for myfiles in *; do $qsub ToolA $myfiles; done
...
$ ls -l | wc -l
75
Excellent. Time to bug the sysadmin and get ToolA installed in the cluster.
It turns out that ToolA is easy to install in Ubuntu Bionic because it is already in the repos, but a nightmare to compile in CentOS 7, which our HPC cluster uses. So the sysadmin decides to create a Docker container image and push it to the company's registry. He also adds you to the docker group after begging you not to misbehave.
You look up the syntax of the Docker commands and decide to do a few test runs before submitting thousands of jobs that could potentially fail.
$ cd ~/input/files/to/process/samples
$ rm -f *.out
$ ls -l | wc -l
38
$ docker run -d registry.example.com/ToolA:latest file1
e61d12292d69556eabe2a44c16cbd27486b2527e2ce4f95438e504afb7b02810
$ ls -l | wc -l
38
$ ls *out
$
Ah, of course, you forgot to mount the files. Let's try again.
$ docker run -d -v $(pwd):/mnt registry.example.com/ToolA:latest /mnt/file1
653e785339099e374b57ae3dac5996a98e5e4f393ee0e4adbb795a3935060acb
$ ls -l | wc -l
38
$ ls *out
$
$ docker logs 653e785339
ToolA: /mnt/file1: Permission denied
You ask the sysadmin for help, and he tells you that SELinux is blocking the process from accessing the files and that you're missing a flag in your docker run. You don't know what SELinux is, but you remember it mentioned somewhere in the docs, so you look it up and try again:
$ docker run -d -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
8ebfcbcb31bea0696e0a7c38881ae7ea95fa501519c9623e1846d8185972dc3b
$ ls *out
$
$ docker logs 8ebfcbcb31
ToolA: /mnt/file1: Permission denied
You go back to the sysadmin, who tells you that the container uses myuser with UID 1000 by default, but your files are readable only to you, and your UID is different. So you do what you know is bad practice, but you're fed up: you run chmod 777 file1 before trying again. You're also getting tired of having to copy and paste hashes, so you add another flag to your docker run:
$ docker run -d --name=test -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
0b61185ef4a78dce988bb30d87e86fafd1a7bbfb2d5aea2b6a583d7ffbceca16
$ ls *out
$
$ docker logs test
ToolA: cannot create regular file '/mnt/file1.out': Permission denied
Alas, at least this time you get a different error. Progress! Your friendly sysadmin tells you that the process in the container won't have write permissions on your directory because the identities don't match, and you need more flags on your command line.
$ docker run -d -u $(id -u):$(id -g) --name=test -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
docker: Error response from daemon: Conflict. The container name "/test" is already in use by container "0b61185ef4a78dce988bb30d87e86fafd1a7bbfb2d5aea2b6a583d7ffbceca16". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
$ docker rm test
$ docker run -d -u $(id -u):$(id -g) --name=test -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/file1
06d5b3d52e1167cde50c2e704d3190ba4b03f6854672cd3ca91043ad23c1fe09
$ ls *out
file1.out
$
Success! Now we just need to wrap our command with the one used by the job scheduler and wrap all of that again with our for loop.
$ cd ~/input/files/to/process
$ ls -l | wc -l
934752984
$ for myfiles in *; do qsub -q short_jobs -N "toola_${myfiles}" docker run -d -u $(id -u):$(id -g) --name="toola_${myfiles}" -v $(pwd):/mnt:z registry.example.com/ToolA:latest /mnt/${myfiles}; done
Now that was a bit clunky, wasn't it? Let's look at how using Singularity simplifies it.
$ cd ~
$ singularity pull --name ToolA.simg docker://registry.example.com/ToolA:latest
$ ls
input ToolA.simg
$ ./ToolA.simg
Usage: ToolA [-o {json|yaml}] <file1> [file2...fileN]
$ cd ~/input/files/to/process
$ for myfiles in *; do qsub -q short_jobs -N "toola_${myfiles}" ~/ToolA.simg ${myfiles}; done
Need I say more?
This works because, by default, Singularity containers run as the user that started them. There are no background daemons, so privilege escalation is not allowed. Singularity also bind-mounts a few directories by default ($PWD, $HOME, /tmp, /proc, /sys, and /dev). An administrator can configure additional ones that are also mounted by default on a global (i.e., host) basis, and the end user can (optionally) also bind arbitrary ones at runtime. Of course, standard Unix permissions apply, so this still doesn't allow unrestricted access to host files.
But what about climate change?
Oh! Of course. Back on topic. We decided to break down the bulk of simulations that we need to run on a per-project basis. Each project can then focus on a specific crop, a specific geographical area, or different crop management techniques. After all of the simulations for a specific project are completed, they are collated into a MariaDB database and visualized using an RStudio Shiny web app.
Prototype Shiny app screenshot shows a nationwide run of climate change's impact on maize silage comparing current and end-of-century scenarios.
The app allows us to compare two different scenarios (reference vs. alternative) that the user can construct by choosing from a combination of variables related to the climate (including the current climate and the climate-change projections for mid-century and end of the century), the soil, and specific management techniques (like irrigation or fertilizer use). The results are displayed as raster values or differences (averages, or coefficients of variation of results per pixel) and their distribution across the area of interest.
The screenshot above shows an example of a prototype nationwide run across "arable lands" where we compare the silage maize biomass for a baseline (1985-2005) vs. future climate change (2085-2100) for the most extreme emissions scenario. In this example, we do not take into account any changes in management techniques, such as adapting sowing dates. We see that most negative effects on yield in the Southern Hemisphere occur in northern areas, while the extreme south shows positive responses. Of course, we would recommend (and you would expect) that farmers start adapting to warm temperatures starting earlier in the year and react accordingly (e.g., sowing earlier, which would reduce the negative impacts and enhance the positive ones).
Next steps
With the framework in place, all that remains is the heavy lifting. Run ALL the simulations! Of course, that is easier said than done. Our in-house cluster is a shared resource where we must compete for capacity with several other projects and teams.
Additional work is planned to further generalize how we distribute jobs across compute resources so we can leverage capacity wherever we can get it (including the public cloud if the project receives sufficient additional funding). This would mean becoming job scheduler-agnostic and solve the data gravity problem.
Work is also underway to further refine the UI and UX aspects of the web application until we are comfortable it can be published to policymakers and other interested parties.
If you are interested in our work from a scientific point of view, please contact me and I will put you in touch with the project leader. For all other inquiries, you can also contact me and I will do my best to help.
Eric Burgueño will present Using containers to analyse the impact of climate change and soil on New Zealand crops at linux.conf.au, January 21-25 in Christchurch, New Zealand.
3 Comments
Hi Eric, nice read! You may consider using job arrays instead of spawning thousands of individual jobs ;)
Cheers! And yes, we do. The actual submission command we use is quite a bit more complex than described and definitely uses job arrays. But for the purposes of the article I had to use the simplest syntax that can illustrate the problem.
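For readers who haven't used them, a job array replaces thousands of individual submissions with a single one. The sketch below is illustrative only: the script name, pixel count, and throttle value are invented, and (as noted above) the real submission command is more involved.

```shell
# One sbatch call per simulation floods the scheduler with requests:
#   for pixel in $(seq 1 100000); do sbatch run_simulation.sh "$pixel"; done
#
# A single job-array submission performs the same fan-out, throttled
# here to 500 concurrent tasks:
#   sbatch --array=1-100000%500 run_simulation.sh
#
# Inside run_simulation.sh, Slurm exposes the array index, which the
# script maps to the pixel it should simulate. (Defaults to 1 here so
# the snippet also runs outside Slurm.)
pixel="${SLURM_ARRAY_TASK_ID:-1}"
echo "simulating pixel ${pixel}"
```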
That's the best I've read today. Really, it's awesome
To complete this tutorial, you need Windows 8 and Microsoft Visual Studio Express 2012 for Windows 8.
The Visual Studio Express 2012 for Windows 8 start screen appears.
(Going forward, we'll refer to Microsoft Visual Studio Express 2012 for Windows 8 as just "Visual Studio".)
The template also includes several image assets:
- Logo images (logo.png and smalllogo.png) to display in the Start screen.
- An image (storelogo.png) to represent your app in the Windows Store.
- A splash screen (splashscreen.png) to show when your app starts.
When you run the app, notice that deploying it adds its tile to the last group on the Start screen. To run the app again, tap or click its tile on the Start screen.
Next, look at App.xaml. This file contains a ResourceDictionary that has a reference to the StandardStyles.xaml ResourceDictionary located in the Common folder. StandardStyles.xaml provides a set of default styles that gives your app the Windows 8 look and feel.
App.xaml declares the application-scoped resources (abbreviated):

    <Application
        x:Class="HelloWorld.App"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="Common/StandardStyles.xaml"/>
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Application.Resources>
    </Application>

In the code-behind (App.xaml.cs/.vb), the OnLaunched method navigates to the app's main page:

    if (!rootFrame.Navigate(typeof(MainPage), args.Arguments))
    {
        throw new Exception("Failed to create initial page");
    }
In the MainPage.xaml file you define the UI for your app. You can add elements directly using XAML markup, or you can use the design tools provided by Visual Studio. The Basic Page template creates a new class called LayoutAwarePage. The LayoutAwarePage class extends the base Page class and provides methods for navigation, state management, and view management. The Basic Page template also includes some simple content, like a back button and page title.
MainPage.xaml looks like this (abbreviated):

    <common:LayoutAwarePage
        x:Name="pageRoot"
        x:Class="HelloWorld.MainPage"
        ...>
        <Page.Resources>
            ...
        </Page.Resources>
        <Grid Style="{StaticResource LayoutRootStyle}">
            ...
            <VisualStateManager.VisualStateGroups>
                ...
                <!-- The back button and title have different styles when snapped -->
                <VisualState x:Name="Snapped">
                    <Storyboard>
                        ...
                    </Storyboard>
                </VisualState>
                ...
            </VisualStateManager.VisualStateGroups>
        </Grid>
    </common:LayoutAwarePage>
MainPage.xaml.cs/.vb is the code-behind page for MainPage.xaml. Here you add your app logic and event handlers. The Basic Page template includes two methods where you can save and load the page state.
    using System;
    using System.Collections.Generic;
    using Windows.UI.Xaml.Controls;

    // The Basic Page item template is documented at ...
    namespace HelloWorld
    {
        /// <summary>
        /// A basic page that provides characteristics common to most applications.
        /// </summary>
        public sealed partial class MainPage : HelloWorld.Common.LayoutAwarePage
        {
            public MainPage()
            {
                this.InitializeComponent();
            }

            /// <summary>
            /// Populates the page with content passed during navigation. Any saved state is
            /// also provided when recreating a page from a prior session.
            /// </summary>
            protected override void LoadState(Object navigationParameter,
                Dictionary<String, Object> pageState)
            {
            }

            /// <summary>
            /// Preserves state associated with this page in case the application is suspended or the
            /// page is discarded from the navigation cache. Values must conform to the serialization
            /// requirements of <see cref="SuspensionManager.SessionState"/>.
            /// </summary>
            /// <param name="pageState">An empty dictionary to be populated with serializable state.</param>
            protected override void SaveState(Dictionary<String, Object> pageState)
            {
            }
        }
    }

In MainPage.xaml, before the <VisualStateManager.VisualStateGroups> tag, add this XAML. It contains a StackPanel with a TextBlock that asks the user's name, a TextBox element to accept the user's name, a Button, and another TextBlock element.
    <StackPanel Grid.Row="1" Margin="120,30,0,0">
        <TextBlock Text="What's your name?"/>
        <StackPanel Orientation="Horizontal" Margin="0,20,0,20">
            <TextBox x:Name="nameInput" Width="300" HorizontalAlignment="Left"/>
            <Button Content="Say &quot;Hello&quot;"/>
        </StackPanel>
        <TextBlock x:Name="greetingOutput"/>
    </StackPanel>

Use standard styles
Earlier in this tutorial, we pointed out that the App.xaml file contains a reference to the StandardStyles.xaml ResourceDictionary:
Right now, all the text is very small and difficult to read. You can easily apply the standard styles: select the TextBlock that asks the user's name, then in the Properties panel click the property marker next to the Style property and, in the menu, choose Local Resource > BasicTextStyle.
BasicTextStyle is a resource defined in the StandardStyles.xaml ResourceDictionary.
In the XAML design surface, the appearance of the text changes. In the XAML editor, the XAML for the TextBlock is updated.
- Repeat the process to assign the BasicTextStyle to the other TextBlock elements; this adds Style="{StaticResource BasicTextStyle}" to each of them.
You can also define your own style resource in the App.xaml ResourceDictionary. Select the first TextBlock again and open the menu for the Style property; BasicTextStyle is still applied from the previous step, so that's the style you'll modify a copy of. In the menu, select Convert to New Resource.
- In the Create Style Resource dialog, enter "BigGreenTextStyle" as the resource key, and select the option to define the resource in the application.
- Click OK. The new style is created in App.xaml and the TextBlock is updated to use the new style resource.
- Click the property marker next to the Style property to open the menu again.
- In the menu, select Edit Resource. App.xaml opens in the editor.
Note Nothing is shown in the XAML designer for App.xaml.
- In the "BigGreenTextStyle" resource, change the Foreground value to "Green" and the FontSize value to "36".
    <Style x:Key="BigGreenTextStyle" TargetType="TextBlock">
        <Setter Property="Foreground" Value="Green"/>
        <Setter Property="FontSize" Value="36"/>
    </Style>
Build date: 3/14/2013 | http://msdn.microsoft.com/en-in/library/windows/apps/hh986965.aspx | CC-MAIN-2013-20 | refinedweb | 651 | 51.85 |
From: Ed Brey (brey_at_[hidden])
Date: 2001-04-11 08:46:02
From: "Geurt Vos" <G.Vos_at_[hidden]>
> >
> about MSVC, most people will simply leave it at warning
> level 3, which also doesn't issue this conversion warning...
>
> [sidenote:
> I never use MSVC at level 4, because it results in way too many
> warning in MS's code, and also generates quite some redundant
> warnings, such as "unreferenced inline function has been removed"
> ]
The problem with not using level 4 is that you throw away quite a few
good warnings. For all my projects, I use warning level 4 and include a
header that turns off all bogus warnings; that is, warnings that are
triggered by artifacts present in good code. Additionally, around
include directives for Microsoft headers, which trigger many warnings
that are valuable when applied to end user or boost code, I push to
warning level 3, since I'm not interested in seeing warnings regarding
Microsoft code (I let it fall into the black magic domain).
Boost code, however, is written at a higher level, and ought not be
treated as black magic. It should compile without warning under the
tightest reasonable warning level to prevent bugs, using notation where
necessary to indicate that something like a loss of significant digits
is truly a feature and not a bug (like using [sic] in text).
I'm not sure how many people use warning level 3 versus 4, but it
doesn't really matter. To help the boost code be as error-free as
possible, the more help we can solicit from the compiler the better.
Here's an example of how one can practically compile under warning
level 4:
// Disable warnings about artifacts that are present in good code.
#pragma warning(disable: 4097) // typedef-name used as synonym for class-name
#pragma warning(disable: 4127) // conditional expression is constant
...
// Disable warnings that occur during compilation of system headers.
#pragma warning(push, 3)
#include <windows.h>
#include <vector>
...
#pragma warning(pop) // Allow warning level 4 for our code.
// Reach your hand into the screen to feel the warm fuzzy feeling
// boost is compiling at warning level 4 (minus bogus warnings).
#include <boost/smart_ptr.hpp>
...
There is a catch with this approach: When boost files include system
headers that have not been previously included by the translation unit,
any of those system headers can trigger non-bogus warnings.
Unfortunately, the C++ standard headers shipped with VC and the STLport
4 headers do this. Fortunately, the solution is simple: include such
headers between the push and pop.
Longer term, I think it would be nice to have boost be completely
warning level friendly, by doing these things:
- Provide a macro that can cause the config.hpp header to disable all
never-useful warnings.
- In each boost file, wrap each sequence of system include directives
like this:
#include <boost/system_header_include_prolog.hpp>
#include <vector>
#include <list>
#include <boost/system_header_include_epilog.hpp>
where the prolog and epilog do what is appropriate for the system. For
VC, this means a warning push and pop.
The end result is that in many cases the user can work happily at
warning level 4, with just three extra lines of code.
#define BOOST_DISABLE_OVERZEALOUS_WARNINGS
#include <boost/system_header_include_prolog.hpp>
#include <vector>
#include <boost/system_header_include_epilog.hpp>
#include <boost/smart_ptr.hpp>
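To make the idea concrete, here is a compilable sketch of what the pattern expands to. The guarded pragmas stand in for the proposed prolog/epilog headers (which don't exist yet), and compilers other than VC simply skip them:

```cpp
#include <cstddef>

// What boost/system_header_include_prolog.hpp would do on VC:
#if defined(_MSC_VER)
#pragma warning(push, 3)
#endif

// System headers compile at the relaxed warning level.
#include <vector>
#include <list>

// What boost/system_header_include_epilog.hpp would do on VC:
#if defined(_MSC_VER)
#pragma warning(pop)
#endif

// User code below this point compiles at warning level 4 again.
inline std::size_t demo_size()
{
    std::vector<int> v;
    v.push_back(1);
    v.push_back(2);
    v.push_back(3);
    return v.size();
}
```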
If there is interest in this, I'd be happy to lay out a more detailed
implementation, including the documented list of overzealous warnings
and documentation of guidelines for library writers.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/04/10950.php | CC-MAIN-2020-45 | refinedweb | 590 | 55.13 |
When an app wants to access a file shared by another app, the requesting app (the client) usually sends a request to the app sharing the files (the server). In most cases, the request starts an Activity in the server app that displays the files it can share. The user picks a file, after which the server app returns the file's content URI to the client app.
This lesson shows you how a client app requests a file from a server app, receives the file's content URI from the server app, and opens the file using the content URI.
Send a Request for the File
To request a file from the server app, the client app calls startActivityForResult() with an Intent containing an action such as ACTION_PICK and a MIME type that the client app can handle.
For example, the following code snippet demonstrates how to send an Intent to a server app in order to start the Activity described in Sharing a File:
    public class MainActivity extends Activity {
        private Intent mRequestFileIntent;
        private ParcelFileDescriptor mInputPFD;
        ...
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
            mRequestFileIntent = new Intent(Intent.ACTION_PICK);
            mRequestFileIntent.setType("image/jpg");
            ...
        }
        ...
        protected void requestFile() {
            /*
             * When the user requests a file, send an Intent to the
             * server app.
             */
            startActivityForResult(mRequestFileIntent, 0);
            ...
        }
        ...
    }
Access the Requested File
The server app sends the file's content URI back to the client app in an Intent, which is passed to the client app in its override of onActivityResult(). Once the client app has the file's content URI, it can access the file by getting its FileDescriptor. File security is preserved in this process because the content URI is the only piece of data that the client app receives. Since this URI doesn't contain a directory path, the client app can't discover and open any other files in the server app. Only the client app gets access to the file, and only for the permissions granted by the server app. The permissions are temporary, so once the client app's task stack is finished, the file is no longer accessible outside the server app.
The next snippet demonstrates how the client app handles the Intent sent from the server app, and how the client app gets the FileDescriptor using the content URI:
    /*
     * When the Activity of the app that hosts files sets a result and calls
     * finish(), this method is invoked. The returned Intent contains the
     * content URI of a selected file. The result code indicates if the
     * selection worked or not.
     */
    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent returnIntent) {
        // If the selection didn't work
        if (resultCode != RESULT_OK) {
            // Exit without doing anything else
            return;
        } else {
            // Get the file's content URI from the incoming Intent
            Uri returnUri = returnIntent.getData();
            /*
             * Try to open the file for "read" access using the
             * returned URI. If the file isn't found, write to the
             * error log and return.
             */
            try {
                /*
                 * Get the content resolver instance for this context, and use it
                 * to get a ParcelFileDescriptor for the file.
                 */
                mInputPFD = getContentResolver().openFileDescriptor(returnUri, "r");
            } catch (FileNotFoundException e) {
                e.printStackTrace();
                Log.e("MainActivity", "File not found.");
                return;
            }
            // Get a regular file descriptor for the file
            FileDescriptor fd = mInputPFD.getFileDescriptor();
            ...
        }
    }
The openFileDescriptor() method returns a ParcelFileDescriptor for the file. From this object, the client app gets a FileDescriptor object, which it can then use to read the file.
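Outside of the Android-specific pieces, reading from a FileDescriptor works with the ordinary java.io classes. The sketch below runs on any JVM: a plain file stands in for the descriptor that openFileDescriptor() would return on Android, and the file name and contents are invented for the example.

```java
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class ReadSharedFile {
    // Drains a FileDescriptor into a String, as the client app could do
    // after calling mInputPFD.getFileDescriptor().
    static String readAll(FileDescriptor fd) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (FileInputStream in = new FileInputStream(fd)) {
            int b;
            while ((b = in.read()) != -1) {
                sb.append((char) b);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the shared file; on Android, the descriptor would
        // come from the ParcelFileDescriptor instead.
        try (FileOutputStream out = new FileOutputStream("shared.txt")) {
            out.write("hello".getBytes());
        }
        try (FileInputStream in = new FileInputStream("shared.txt")) {
            System.out.println(readAll(in.getFD()));
        }
    }
}
```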